Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
38,244 | 19,042,369,260 | IssuesEvent | 2021-11-25 00:27:49 | earthly/earthly | https://api.github.com/repos/earthly/earthly | closed | SAVE IMAGE is slow, even when there's no work to be done | type:performance | I was observing that for highly optimized builds the slowest part can be saving images. For example, [here](https://github.com/jazzdan/earthly-save-image)’s a repo where `earth +all` takes 12s if everything is cached (like I run `earth +all` twice). Yet if I comment the `SAVE IMAGE` lines out the total time drops to 2s. This implies that `SAVE IMAGE` is doing a lot of work, even when nothing has changed.
Is there anything I can do to speed up SAVE IMAGE in instances like this? I’m surprised that SAVE IMAGE does anything if the image hasn’t changed; is it possible for it to do some more sophisticated content negotiation with the layers?
After [talking with](https://earthlycommunity.slack.com/archives/C01DL2928RM/p1605112044036800) @agbell in Slack I hypothesized that it might not be possible for Earthly to know what images/layers the host has. This is all conjecture on my part, but:
If Earth is running in a container then it doesn’t know the state of the registry on the host machine, or what layers it has. Its only option is to export the entire image to the host, which on a Mac could be slow because containers on a Mac are actually running in a VM.
Maybe if Earth could mount/be aware of the host docker registry it could just do docker push?
This reminds me of similar problems that are being solved in the Kubernetes local cluster space https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry | True | SAVE IMAGE is slow, even when there's no work to be done - I was observing that for highly optimized builds the slowest part can be saving images. For example, [here](https://github.com/jazzdan/earthly-save-image)’s a repo where `earth +all` takes 12s if everything is cached (like I run `earth +all` twice). Yet if I comment the `SAVE IMAGE` lines out the total time drops to 2s. This implies that `SAVE IMAGE` is doing a lot of work, even when nothing has changed.
Is there anything I can do to speed up SAVE IMAGE in instances like this? I’m surprised that SAVE IMAGE does anything if the image hasn’t changed; is it possible for it to do some more sophisticated content negotiation with the layers?
After [talking with](https://earthlycommunity.slack.com/archives/C01DL2928RM/p1605112044036800) @agbell in Slack I hypothesized that it might not be possible for Earthly to know what images/layers the host has. This is all conjecture on my part, but:
If Earth is running in a container then it doesn’t know the state of the registry on the host machine, or what layers it has. Its only option is to export the entire image to the host, which on a Mac could be slow because containers on a Mac are actually running in a VM.
Maybe if Earth could mount/be aware of the host docker registry it could just do docker push?
This reminds me of similar problems that are being solved in the Kubernetes local cluster space https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry | non_priority | save image is slow even when there s no work to be done i was observing that for highly optimized builds the slowest part can be saving images for example a repo where earth all takes if everything is cached like i run earth all twice yet if i comment the save image lines out the total time drops to this implies that save image is doing a lot of work even when nothing has changed is there anything i can do speed up save image in instances like this i’m surprised that save image does anything if the image hasn’t changed is it possible for it do some more sophisticated content negotiation with the layers after agbell in slack i hypothesized that it might not be possible for earthly to what images layers the host has this is all conjecture on my part but if earth is running in a container then it doesn’t know the state of the registry on the host machine and what layers it has it’s only option is to export the entire image to the host which on a mac could be slow because containers on a mac are actually running in a vm maybe if earth could mount be aware of the host docker registry it could just do docker push this reminds me of similar problems that are being solved in the kubernetes local cluster space | 0 |
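The "more sophisticated content negotiation with the layers" asked for in the issue above is essentially what a registry push already does: address each layer by its content digest and transfer only the digests the other side is missing. A toy Python sketch of that skip-by-digest idea (the layer bytes and the host's digest set here are invented for illustration; a real implementation would query the Docker registry HTTP API rather than compare in-process sets):

```python
import hashlib

def layers_to_transfer(image_layers, host_digests):
    """Return only the layers whose content digest is not already
    present on the host, mimicking how a registry push skips
    already-known layers instead of re-sending the whole image."""
    missing = []
    for blob in image_layers:
        # Layers are content-addressed: identical bytes -> identical digest.
        digest = "sha256:" + hashlib.sha256(blob).hexdigest()
        if digest not in host_digests:
            missing.append((digest, blob))
    return missing
```

Under this scheme a fully cached build would find every digest already present and transfer nothing, which is the behaviour the reporter expected from `SAVE IMAGE`.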
163,510 | 12,733,284,737 | IssuesEvent | 2020-06-25 12:01:32 | DiSSCo/ELViS | https://api.github.com/repos/DiSSCo/ELViS | closed | Possibility for requesters and VA Coordinators to filter own/other requests | enhancement resolved to test | #### Description
Since requesters and VA Coordinators now have the possibility to see all requests from others (either other requesters or requests related to other institutions) in the main menu option "Requests", it is necessary to offer them a filter via which they can choose to see either their own requests and requests related to their own institution, or requests from other requesters (except the ones still in the draft status ofc), or only those related to institutions other than their own.
Sprint target for sprint 13 | 1.0 | Possibility for requesters and VA Coordinators to filter own/other requests - #### Description
Since requesters and VA Coordinators now have the possibility to see all requests from others (either other requesters or requests related to other institutions) in the main menu option "Requests", it is necessary to offer them a filter via which they can choose to see either their own requests and requests related to their own institution, or requests from other requesters (except the ones still in the draft status ofc), or only those related to institutions other than their own.
Sprint target for sprint 13 | non_priority | possibility for requesters and va coordinators to filter own other requests description since requesters and va coordinators now have the possibility to see all requests from others either other requesters or requests related to other institutions in the main menu option requests it is necessary to offer them a filter via which they can choose to either see their own requests and requests related to their own institution or other requests from other requesters except the ones still in the draft status ofc or only related to other institution than their own sprint target for sprint | 0 |
69,521 | 22,409,673,274 | IssuesEvent | 2022-06-18 14:27:19 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | The left margin of display name on reply tiles on TimelineCard should be removed | T-Defect | ### Steps to reproduce
1. Open a room
2. Enable a widget
3. Maximize the widget
4. Open a chat panel
5. Send a message
6. Reply to the message
### Outcome
#### What did you expect?
The left margin should not be added to the display name inside the reply tile.

#### What happened instead?
There is the left margin next to the display name.

### Operating system
Debian
### Browser information
Firefox ESR 91
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | The left margin of display name on reply tiles on TimelineCard should be removed - ### Steps to reproduce
1. Open a room
2. Enable a widget
3. Maximize the widget
4. Open a chat panel
5. Send a message
6. Reply to the message
### Outcome
#### What did you expect?
The left margin should not be added to the display name inside the reply tile.

#### What happened instead?
There is the left margin next to the display name.

### Operating system
Debian
### Browser information
Firefox ESR 91
### URL for webapp
localhost
### Application version
develop branch
### Homeserver
_No response_
### Will you send logs?
No | non_priority | the left margin of display name on reply tiles on timelinecard should be removed steps to reproduce open a room enable a widget maximize the widget open a chat panel send a message reply to the message outcome what did you expect the left margin should not be added to the display name inside the reply tile what happened instead there is the left margin next to the display name operating system debian browser information firefox esr url for webapp localhost application version develop branch homeserver no response will you send logs no | 0 |
11,500 | 30,768,013,328 | IssuesEvent | 2023-07-30 14:45:30 | SuperCowPowers/sageworks | https://api.github.com/repos/SuperCowPowers/sageworks | opened | Anomaly Detection: Sliding Window | algorithm data_source architecture athena research | Can we do a 'sliding window' based anomaly detection, meaning that we can't (or don't want to) process all the data so we use a sliding window (1 day, 3 days, 1 week, etc) and we run anomaly detection on just that window of data. | 1.0 | Anomaly Detection: Sliding Window - Can we do a 'sliding window' based anomaly detection, meaning that we can't (or don't want to) process all the data so we use a sliding window (1 day, 3 days, 1 week, etc) and we run anomaly detection on just that window of data. | non_priority | anomaly detection sliding window can we do a sliding window based anomaly detection meaning that we can t or don t want to process all the data so we use a sliding window day days week etc and we run anomaly detection on just that window of data | 0 |
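The sliding-window scheme described in this issue can be sketched as a rolling z-score: keep only the most recent N points and flag a new value that deviates from the window's statistics by more than k standard deviations. A minimal stdlib-only sketch (the window size and threshold are illustrative assumptions, not values from the issue):

```python
from collections import deque
import math

def sliding_window_anomalies(stream, window=5, k=3.0):
    """Return (index, value) pairs that look anomalous relative to
    the `window` points immediately before them."""
    buf = deque(maxlen=window)      # the sliding window itself
    flagged = []
    for i, x in enumerate(stream):
        if len(buf) == window:      # only judge once the window is full
            mean = sum(buf) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in buf) / window)
            # With zero spread, any departure from the mean is anomalous.
            if (std == 0 and x != mean) or (std > 0 and abs(x - mean) > k * std):
                flagged.append((i, x))
        buf.append(x)               # oldest point falls out automatically
    return flagged
```

Because only `window` points are retained, memory stays constant no matter how much history exists, which is the point of windowing when the full data set can't (or shouldn't) be processed.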
285,756 | 24,694,709,444 | IssuesEvent | 2022-10-19 11:11:41 | wpfoodmanager/wp-food-manager | https://api.github.com/repos/wpfoodmanager/wp-food-manager | closed | Icon size decreases when the food title name is long | In Testing | Icon size decreases when the food title name is long.

Icon size decreases when the food title name is long - Icon size decreases when the food title name is long.

| non_priority | icon size is decrease when food title name is long icon size is decrease when food title name is long | 0 |
192,345 | 15,343,510,333 | IssuesEvent | 2021-02-27 20:37:04 | wprig/wprig | https://api.github.com/repos/wprig/wprig | closed | Update documentation re: child themes | documentation | State in `README.md` that WP Rig should not be used to build a child theme. See #260. | 1.0 | Update documentation re: child themes - State in `README.md` that WP Rig should not be used to build a child theme. See #260. | non_priority | update documentation re child themes state in readme md that wp rig should not be used to build a child theme see | 0 |
189,315 | 14,497,725,417 | IssuesEvent | 2020-12-11 14:36:38 | rancher/harvester | https://api.github.com/repos/rancher/harvester | reopened | create default admin cause cannot add indexers to running index | area/authentication bug to-test | cannot add indexers to running index of the default admin
```
time="2020-11-02T02:52:31Z" level=info msg="Listening on :8443"
time="2020-11-02T02:52:32Z" level=info msg="Starting harvester.cattle.io/v1alpha1, Kind=VirtualMachineImage controller"
time="2020-11-02T02:52:32Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2020-11-02T02:52:32Z" level=info msg="Active TLS secret serving-cert (ver=1209280) (count 6): map[listener.cattle.io/cn-10.42.0.155:10.42.0.155 listener.cattle.io/cn-10.42.0.95:10.42.0.95 listener.cattle.io/cn-10.42.3.13:10.42.3.13 listener.cattle.io/cn-10.42.3.9:10.42.3.9 listener.cattle.io/cn-10.42.4.11:10.42.4.11 listener.cattle.io/cn-172.16.0.63:172.16.0.63 listener.cattle.io/hash:b9226b4a0f99901e63c93d7eb6ea5b7067e5460e32552bdccd53945329c169fa]"
I1102 02:53:14.056000 7 leaderelection.go:252] successfully acquired lease kube-system/harvester-controllers
time="2020-11-02T02:53:14Z" level=error msg="error syncing 'user-gdj99': handler user-rbac-controller: Index with name auth.harvester.cattle.io/crb-by-role-and-subject does not exist, requeuing"
time="2020-11-02T02:53:14Z" level=error msg="error syncing 'user-gdj99': handler user-rbac-controller: Index with name auth.harvester.cattle.io/crb-by-role-and-subject does not exist, requeuing"
time="2020-11-02T02:53:14Z" level=info msg="Default admin already created, skip create admin step"
panic: cannot add indexers to running index
goroutine 2457 [running]:
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime.Must(...)
/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:171
github.com/rancher/harvester/vendor/github.com/rancher/wrangler-api/pkg/generated/controllers/rbac/v1.(*clusterRoleBindingCache).AddIndexer(0xc002bae1b0, 0x1af2d7a, 0x30, 0x1b7cc18)
/go/src/github.com/rancher/harvester/vendor/github.com/rancher/wrangler-api/pkg/generated/controllers/rbac/v1/clusterrolebinding.go:239 +0x11d
github.com/rancher/harvester/pkg/indexeres.RegisterManagementIndexers(0xc000314380)
/go/src/github.com/rancher/harvester/pkg/indexeres/indexer.go:22 +0xa4
github.com/rancher/harvester/pkg/controller/master.register(0x1d688a0, 0xc0010bca80, 0xc000134000, 0xc000314380, 0x0, 0x0)
/go/src/github.com/rancher/harvester/pkg/controller/master/controller.go:39 +0xe6
github.com/rancher/harvester/pkg/controller/master.Setup.func1(0x1d688a0, 0xc0010bca80)
/go/src/github.com/rancher/harvester/pkg/controller/master/setup.go:14 +0x54
created by github.com/rancher/harvester/vendor/github.com/rancher/wrangler/pkg/leader.run.func1
/go/src/github.com/rancher/harvester/vendor/github.com/rancher/wrangler/pkg/leader/leader.go:58 +0x46
``` | 1.0 | create default admin cause cannot add indexers to running index - cannot add indexers to running index of the default admin
```
time="2020-11-02T02:52:31Z" level=info msg="Listening on :8443"
time="2020-11-02T02:52:32Z" level=info msg="Starting harvester.cattle.io/v1alpha1, Kind=VirtualMachineImage controller"
time="2020-11-02T02:52:32Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2020-11-02T02:52:32Z" level=info msg="Active TLS secret serving-cert (ver=1209280) (count 6): map[listener.cattle.io/cn-10.42.0.155:10.42.0.155 listener.cattle.io/cn-10.42.0.95:10.42.0.95 listener.cattle.io/cn-10.42.3.13:10.42.3.13 listener.cattle.io/cn-10.42.3.9:10.42.3.9 listener.cattle.io/cn-10.42.4.11:10.42.4.11 listener.cattle.io/cn-172.16.0.63:172.16.0.63 listener.cattle.io/hash:b9226b4a0f99901e63c93d7eb6ea5b7067e5460e32552bdccd53945329c169fa]"
I1102 02:53:14.056000 7 leaderelection.go:252] successfully acquired lease kube-system/harvester-controllers
time="2020-11-02T02:53:14Z" level=error msg="error syncing 'user-gdj99': handler user-rbac-controller: Index with name auth.harvester.cattle.io/crb-by-role-and-subject does not exist, requeuing"
time="2020-11-02T02:53:14Z" level=error msg="error syncing 'user-gdj99': handler user-rbac-controller: Index with name auth.harvester.cattle.io/crb-by-role-and-subject does not exist, requeuing"
time="2020-11-02T02:53:14Z" level=info msg="Default admin already created, skip create admin step"
panic: cannot add indexers to running index
goroutine 2457 [running]:
github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime.Must(...)
/go/src/github.com/rancher/harvester/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:171
github.com/rancher/harvester/vendor/github.com/rancher/wrangler-api/pkg/generated/controllers/rbac/v1.(*clusterRoleBindingCache).AddIndexer(0xc002bae1b0, 0x1af2d7a, 0x30, 0x1b7cc18)
/go/src/github.com/rancher/harvester/vendor/github.com/rancher/wrangler-api/pkg/generated/controllers/rbac/v1/clusterrolebinding.go:239 +0x11d
github.com/rancher/harvester/pkg/indexeres.RegisterManagementIndexers(0xc000314380)
/go/src/github.com/rancher/harvester/pkg/indexeres/indexer.go:22 +0xa4
github.com/rancher/harvester/pkg/controller/master.register(0x1d688a0, 0xc0010bca80, 0xc000134000, 0xc000314380, 0x0, 0x0)
/go/src/github.com/rancher/harvester/pkg/controller/master/controller.go:39 +0xe6
github.com/rancher/harvester/pkg/controller/master.Setup.func1(0x1d688a0, 0xc0010bca80)
/go/src/github.com/rancher/harvester/pkg/controller/master/setup.go:14 +0x54
created by github.com/rancher/harvester/vendor/github.com/rancher/wrangler/pkg/leader.run.func1
/go/src/github.com/rancher/harvester/vendor/github.com/rancher/wrangler/pkg/leader/leader.go:58 +0x46
``` | non_priority | create default admin cause cannot add indexers to running index cannot add indexers to running index of the default admin time level info msg listening on time level info msg starting harvester cattle io kind virtualmachineimage controller time level info msg starting kind secret controller time level info msg active tls secret serving cert ver count map leaderelection go successfully acquired lease kube system harvester controllers time level error msg error syncing user handler user rbac controller index with name auth harvester cattle io crb by role and subject does not exist requeuing time level error msg error syncing user handler user rbac controller index with name auth harvester cattle io crb by role and subject does not exist requeuing time level info msg default admin already created skip create admin step panic cannot add indexers to running index goroutine github com rancher harvester vendor io apimachinery pkg util runtime must go src github com rancher harvester vendor io apimachinery pkg util runtime runtime go github com rancher harvester vendor github com rancher wrangler api pkg generated controllers rbac clusterrolebindingcache addindexer go src github com rancher harvester vendor github com rancher wrangler api pkg generated controllers rbac clusterrolebinding go github com rancher harvester pkg indexeres registermanagementindexers go src github com rancher harvester pkg indexeres indexer go github com rancher harvester pkg controller master register go src github com rancher harvester pkg controller master controller go github com rancher harvester pkg controller master setup go src github com rancher harvester pkg controller master setup go created by github com rancher harvester vendor github com rancher wrangler pkg leader run go src github com rancher harvester vendor github com rancher wrangler pkg leader leader go | 0 |
23,155 | 3,771,338,787 | IssuesEvent | 2016-03-16 17:17:37 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Emit issue when using += or -= operator on dictionary values | defect | Related to forum post:
http://forums.bridge.net/forum/bridge-net-pro/bugs/1783
Live Bridge sample:
http://live.bridge.net/#82f9b2ca69a1505af3f9
### Expected
not sure
### Actual
```javascript
dict.set(0, +1);
```
### Steps To Reproduce
```csharp
public class App
{
[Ready]
public static void Main()
{
var dict = new Dictionary<int, int>();
dict.Add(0, 5);
dict[0] += 1;
Global.alert(dict[0]);
}
}
``` | 1.0 | Emit issue when using += or -= operator on dictionary values - Related to forum post:
http://forums.bridge.net/forum/bridge-net-pro/bugs/1783
Live Bridge sample:
http://live.bridge.net/#82f9b2ca69a1505af3f9
### Expected
not sure
### Actual
```javascript
dict.set(0, +1);
```
### Steps To Reproduce
```csharp
public class App
{
[Ready]
public static void Main()
{
var dict = new Dictionary<int, int>();
dict.Add(0, 5);
dict[0] += 1;
Global.alert(dict[0]);
}
}
``` | non_priority | emit issue when using or operator on dictionary values related to forum post live bridge sample expected not sure actual javascript dict set steps to reproduce csharp public class app public static void main var dict new dictionary dict add dict global alert dict | 0 |
122,028 | 10,210,493,726 | IssuesEvent | 2019-08-14 14:54:15 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | opened | Run flake8 checks in Travis on modified code | Theme: Development Theme: Testing Type: Feature | ### Is your feature request related to a problem? Please describe.
We can currently introduce flake8 errors into our code and have no linting/error checking.
### Describe the solution you'd like
This site shows how to run flake8 tests on the modified code. It's in the context of pre-commit hooks, but could we use the code in our Travis build?
- https://consideratecode.com/2016/10/15/check-code-changes-with-flake8-before-committing/
Code from site:
```sh
git diff --cached -U0 | flake8 --diff
```
It notes that this won't catch _all_ issues, so another option would be to run a full check on master and on the PR, and ensure the outputs are the same (i.e. no new errors).
### Stakeholders
@cclauss @hornc | 1.0 | Run flake8 checks in Travis on modified code - ### Is your feature request related to a problem? Please describe.
We can currently introduce flake8 errors into our code and have no linting/error checking.
### Describe the solution you'd like
This site shows how to run flake8 tests on the modified code. It's in the context of pre-commit hooks, but could we use the code in our Travis build?
- https://consideratecode.com/2016/10/15/check-code-changes-with-flake8-before-committing/
Code from site:
```sh
git diff --cached -U0 | flake8 --diff
```
It notes that this won't catch _all_ issues, so another option would be to run a full check on master and on the PR, and ensure the outputs are the same (i.e. no new errors).
### Stakeholders
@cclauss @hornc | non_priority | run checks in travis on modified code is your feature request related to a problem please describe we can currently introduce errors into our code and have no linting error checking describe the solution you d like this site shows how to run tests on the modified code it s in the context of pre commit hooks but could we use the code in our travis build code from site sh git diff cached diff it notes that this won t catch all issues so another check that could be performed would be to perform a full check on master and on the pr and ensure the outputs are the same i e no new errors stakeholders cclauss hornc | 0 |
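The fallback described in this issue (a full flake8 run on master and on the PR, then a check that no new errors appeared) reduces to a multiset difference over normalized error records. A hedged sketch of just that comparison step (the report lines are invented examples of flake8's `path:line:col: code message` format; capturing real reports from two branches via subprocess is deliberately left out):

```python
from collections import Counter

def new_errors(master_report, pr_report):
    """Return lint errors present in the PR report but not on master.

    Reports are iterables of flake8-style lines, e.g.
    'openlibrary/api.py:12:1: F401 os imported but unused'.
    Line/column numbers are stripped before comparing, since
    unrelated edits shift them between branches.
    """
    def normalize(line):
        path, _lineno, _col, msg = line.split(":", 3)
        return (path, msg.strip())

    baseline = Counter(normalize(l) for l in master_report)
    introduced = []
    for line in pr_report:
        key = normalize(line)
        if baseline[key] > 0:
            baseline[key] -= 1       # matched a pre-existing error
        else:
            introduced.append(line)  # genuinely new in the PR
    return introduced
```

Stripping positions before comparing matters: an unrelated edit higher in a file shifts every subsequent error's line number, and a naive textual diff of the two reports would misreport those as new errors.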
212,213 | 16,476,730,576 | IssuesEvent | 2021-05-24 06:38:38 | metoppv/improver | https://api.github.com/repos/metoppv/improver | opened | One page summary diagram per diagnostic | Type:Documentation blue_team | As a X I want Y so that Z
Related issues: #I, #J
Optional extra information text goes here
Acceptance criteria:
* A
* B
* C
| 1.0 | One page summary diagram per diagnostic - As a X I want Y so that Z
Related issues: #I, #J
Optional extra information text goes here
Acceptance criteria:
* A
* B
* C
| non_priority | one page summary diagram per diagnostic as a x i want y so that z related issues i j optional extra information text goes here acceptance criteria a b c | 0 |
86,608 | 15,755,696,280 | IssuesEvent | 2021-03-31 02:14:07 | SmartBear/ready-msazure-plugin | https://api.github.com/repos/SmartBear/ready-msazure-plugin | opened | CVE-2021-21350 (High) detected in xstream-1.3.1.jar | security vulnerability | ## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;com.thoughtworks.xstream:xstream:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.16"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21350","vulnerabilityDetails":"XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the Security Framework, you will have to use at least version 1.4.16.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-21350 (High) detected in xstream-1.3.1.jar - ## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.3.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;com.thoughtworks.xstream:xstream:1.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.16"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-21350","vulnerabilityDetails":"XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the Security Framework, you will have to use at least version 1.4.16.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar path to dependency file ready msazure plugin pom xml path to vulnerable library home wss scanner repository thoughtworks xstream xstream jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar x xstream jar vulnerable library found in base branch master vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to execute arbitrary code only by 
manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com smartbear ready api soapui pro com smartbear ready api soapui com thoughtworks xstream xstream isminimumfixversionavailable true minimumfixversion com thoughtworks xstream xstream basebranches vulnerabilityidentifier cve vulnerabilitydetails xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream security framework with a whitelist limited to the minimal required types if you rely on xstream default blacklist of the security framework you will have to use at least version vulnerabilityurl | 0 |
47,451 | 7,327,808,040 | IssuesEvent | 2018-03-04 14:33:06 | nbuchwitz/icingaweb2-module-map | https://api.github.com/repos/nbuchwitz/icingaweb2-module-map | opened | Documentation for marker icons | documentation | Provide documentation for marker icons (e.g. `vars.map_icon = "print"`) | 1.0 | Documentation for marker icons - Provide documentation for marker icons (e.g. `vars.map_icon = "print"`) | non_priority | documentation for marker icons provide documentation for marker icons e g vars map icon print | 0
55,030 | 6,423,317,571 | IssuesEvent | 2017-08-09 10:41:36 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | Custom fields value in detail of contact | Bug Ready To Test | What type of report is this:
| Q | A
| ---| ---
| Bug report? | Y
| Feature request? | N
| Enhancement? | N
## Description:
I created a new custom field of type boolean. When I edit this custom field in the contact detail view, it doesn't save and I must set and save it several times.
Video of this problem is here: https://www.youtube.com/watch?v=O3EJkf8Mdwo&feature=youtu.be
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.2.1
| PHP version | PHP Version 7.0.10-1~dotdeb+8.1
### Steps to reproduce:
In Video
### Log errors:
No error in logs. | 1.0 | Custom fields value in detail of contact - What type of report is this:
| Q | A
| ---| ---
| Bug report? | Y
| Feature request? | N
| Enhancement? | N
## Description:
I created a new custom field of type boolean. When I edit this custom field in the contact detail view, it doesn't save and I must set and save it several times.
Video of this problem is here: https://www.youtube.com/watch?v=O3EJkf8Mdwo&feature=youtu.be
## If a bug:
| Q | A
| --- | ---
| Mautic version | 2.2.1
| PHP version | PHP Version 7.0.10-1~dotdeb+8.1
### Steps to reproduce:
In Video
### Log errors:
No error in logs. | non_priority | custom fields value in detail of contact what type of report is this q a bug report y feature request n enhancement n description i create new custom fields type boolean when i edit this custom fields in detail of contacts it don t save and i must set and save it more times video of this problem is here if a bug q a mautic version php version php version dotdeb steps to reproduce in video log errors no error in logs | 0
55,986 | 23,657,693,405 | IssuesEvent | 2022-08-26 12:52:05 | microsoft/SynapseML | https://api.github.com/repos/microsoft/SynapseML | closed | HealthCareSDK Returns NullPointerException on Synapse Spark | bug awaiting response area/cognitive-service | **Describe the bug**
When using the `HealthcareSDK` class in SynapseML, I get a NullPointerException when running on a dataset of 1,000+ rows.
**To Reproduce**
On a medium-sized dataset (1,000+ rows) with a StringType field between 250 and 4,000 characters long, execute the following code:
```
%%configure -f
{
"name": "nerHealthExtract",
"conf": {
"spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT",
"spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
"spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12",
"spark.yarn.user.classpath.first": "true"
}
}
from synapse.ml.cognitive import HealthcareSDK

df_text_aggregated = spark.read.parquet("path/to/something")
healthcareService = (HealthcareSDK()
.setSubscriptionKey("API_KEY")
.setLocation("centralus")
.setErrorCol("nerHealthError")
.setLanguage("en")
.setOutputCol("nerHealthOutput"))
df_ner = healthcareService.transform(df_text_aggregated)
df_ner.cache()
df_ner.write.mode("overwrite").parquet("path/to/somewhere/else")
```
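A mitigation that sometimes helps with this class of failure (an assumption on my part, not a confirmed fix for this NPE) is to pre-validate the input text before handing it to `HealthcareSDK`, since the Text Analytics for Health service can return null/partial results for empty documents or documents over its per-document size limit, and the SDK's response parsing then trips over the nulls. Below is a minimal, Spark-free sketch of that validation; `MAX_DOC_CHARS`, `is_sendable`, and `split_batches` are hypothetical names I introduced, and the 5,120-character limit is assumed from the general Text Analytics quotas — check the documented limits for the health endpoint and your tier.

```python
# Hypothetical pre-filter for documents sent to Text Analytics for Health.
# The limits below (non-empty text, at most MAX_DOC_CHARS characters per
# document) are assumptions, not values confirmed for the health endpoint.
MAX_DOC_CHARS = 5120

def is_sendable(text):
    """Return True if `text` looks safe to submit as a single document."""
    return (
        text is not None
        and isinstance(text, str)
        and len(text.strip()) > 0
        and len(text) <= MAX_DOC_CHARS
    )

def split_batches(rows, batch_size=10):
    """Yield lists of at most `batch_size` valid documents, dropping
    rows the service would likely reject."""
    batch = []
    for row in rows:
        if not is_sendable(row):
            continue  # drop (or log) empty / oversized / null rows
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

In PySpark, the same check could be applied before the transform — e.g. `df_text_aggregated.filter(F.length("text").between(1, MAX_DOC_CHARS))` — so the SDK never sees documents that make the service return null results.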
During the write, I receive the stack trace below.
**Expected behavior**
I would expect to receive the healthcare output across all rows and NOT a NullPointerException.
**Info (please complete the following information):**
- SynapseML Version: com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT
- Spark Version 3.1
- Spark Platform Synapse Spark
**Stacktrace**
```
Error: An error occurred while calling o1115.parquet.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:218)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:256)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:253)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:214)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:148)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:147)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:995)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:181)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:94)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:995)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:444)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:416)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:294)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:880)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by : org.apache.spark.SparkException: Job aborted due to stage failure: Task 33 in stage 76.0 failed 4 times, most recent failure: Lost task 33.3 in stage 76.0 (TID 3877) (vm-89521530 executor 1): java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2263)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2212)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2211)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2211)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1082)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2392)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2381)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2282)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
... 33 more
Caused by : java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Traceback (most recent call last):
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1250, in parquet
    self._jwrite.parquet(path)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
    return f(*a, **kw)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o1115.parquet.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:218)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:256)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:253)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:214)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:148)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:147)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:995)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:181)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:94)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:995)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:444)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:416)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:294)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:880)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by : org.apache.spark.SparkException: Job aborted due to stage failure: Task 33 in stage 76.0 failed 4 times, most recent failure: Lost task 33.3 in stage 76.0 (TID 3877) (vm-89521530 executor 1): java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2263)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2212)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2211)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2211)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1082)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2392)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2381)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2282)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
... 33 more
Caused by : java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
```
**Additional context**
The transform works fine when previewing results with `.show()`, but fails when writing the complete dataset.
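One way to narrow this down would be to write the output in row-range batches and see which batch triggers the NullPointerException. Below is a minimal sketch of the batching arithmetic; the Spark calls in the comments are illustrative only (`df_ner` and the output path come from the repro snippet, and the index-based slicing assumes a row-index column such as one added with `monotonically_increasing_id` — that column is an assumption, not part of the repro):

```python
def batch_bounds(total_rows, batch_size):
    """Split [0, total_rows) into consecutive (start, end) ranges of at most batch_size rows."""
    return [(i, min(i + batch_size, total_rows)) for i in range(0, total_rows, batch_size)]

# Illustrative Spark usage (assumes df_ner carries an "idx" row-index column):
# for start, end in batch_bounds(df_ner.count(), 250):
#     batch = df_ner.filter((df_ner.idx >= start) & (df_ner.idx < end))
#     batch.write.mode("append").parquet("path/to/somewhere/else")
```

Whichever batch fails would contain the rows that the service cannot process.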
AB#1817799

HealthCareSDK Returns NullPointerException on Synapse Spark

**Describe the bug**
When using the `HealthcareSDK` transformer in SynapseML, I get a `NullPointerException` when running on a dataset of 1,000+ rows.
**To Reproduce**
On a medium-sized dataset (1,000+ rows) with a StringType field between 250 and 4,000 characters long, execute the following code:
```
%%configure -f
{
    "name": "nerHealthExtract",
    "conf": {
        "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT",
        "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
        "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12",
        "spark.yarn.user.classpath.first": "true"
    }
}

# Run in a separate cell once the session is configured
from synapse.ml.cognitive import *  # import assumed; brings HealthcareSDK into scope

df_text_aggregated = spark.read.parquet("path/to/something")

healthcareService = (HealthcareSDK()
    .setSubscriptionKey("API_KEY")
    .setLocation("centralus")
    .setErrorCol("nerHealthError")
    .setLanguage("en")
    .setOutputCol("nerHealthOutput"))

df_ner = healthcareService.transform(df_text_aggregated)
df_ner.cache()
df_ner.write.mode("overwrite").parquet("path/to/somewhere/else")
```
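Since the failure only shows up on the larger dataset, it may be worth ruling out malformed rows before calling `transform`. Below is a hedged sketch of a pre-filter; the `text` column name and the character limit are assumptions for illustration, not part of the repro above:

```python
def is_valid_doc(text, max_chars=5000):
    """Reject rows that commonly break per-document text analytics calls:
    nulls, empty or whitespace-only strings, and oversized documents.
    max_chars is an assumed limit, not the service's documented one."""
    return text is not None and text.strip() != "" and len(text) <= max_chars

# Illustrative Spark usage (column name "text" is an assumption):
# from pyspark.sql import functions as F
# valid = F.udf(is_valid_doc, "boolean")
# df_clean = df_text_aggregated.filter(valid(F.col("text")))
```

If the filtered DataFrame writes cleanly, the NPE is likely triggered by specific row contents rather than by dataset size.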
During the write, I receive the stack trace shown above.

**Expected behavior**
I would expect to receive the healthcare output across all rows and NOT a NullPointerException.

**Info (please complete the following information):**
- SynapseML Version: com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT
- Spark Version: 3.1
- Spark Platform: Synapse Spark
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2263)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2212)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2211)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2211)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1082)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2392)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2381)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2282)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
... 33 more
Caused by: java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)

Traceback (most recent call last):
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1250, in parquet
    self._jwrite.parquet(path)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
    return f(*a, **kw)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o1115.parquet.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:218)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:256)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:253)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:214)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:148)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:147)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:995)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:181)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:94)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:995)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:444)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:416)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:294)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:880)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 33 in stage 76.0 failed 4 times, most recent failure: Lost task 33.3 in stage 76.0 (TID 3877) (vm-89521530 executor 1): java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2263)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2212)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2211)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2211)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1082)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1082)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2392)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2381)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:869)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2282)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
... 33 more
Caused by: java.lang.NullPointerException
at com.azure.ai.textanalytics.implementation.Utility.toRecognizeHealthcareEntitiesResults(Utility.java:510)
at com.azure.ai.textanalytics.AnalyzeHealthcareEntityAsyncClient.toTextAnalyticsPagedResponse(AnalyzeHealthcareEntityAsyncClient.java:179)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onSubscribe(MonoFlatMap.java:238)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.signalCached(MonoCacheTime.java:337)
at reactor.core.publisher.MonoCacheTime$CoordinatorSubscriber.onNext(MonoCacheTime.java:354)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
at reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:145)
at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:150)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
at reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:159)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:213)
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:269)
at reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
at reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:684)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1372)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1235)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1284)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
at reactor.core.publisher.Flux.blockLast(Flux.java:2644)
at com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:94)
at com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:50)
at com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:37)
at com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:106)
at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:55)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at com.microsoft.azure.synapse.ml.cognitive.HealthcareSDK.invokeTextAnalytics(TextAnalyticsSDK.scala:339)
at com.microsoft.azure.synapse.ml.cognitive.TextAnalyticsSDKBase.$anonfun$transformTextRows$4(TextAnalyticsSDK.scala:128)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
```
If the bug pertains to a specific feature, please tag the appropriate [CODEOWNER](https://github.com/Microsoft/SynapseML/blob/master/CODEOWNERS) for better visibility.
**Additional context**
The transform works fine when previewing results with `.show()`, but fails with this exception when writing the complete dataset.
AB#1817799
traversablelike flatmap traversablelike scala at scala collection traversablelike flatmap traversablelike scala at scala collection abstracttraversable flatmap traversable scala at com microsoft azure synapse ml cognitive healthcaresdk invoketextanalytics textanalyticssdk scala at com microsoft azure synapse ml cognitive textanalyticssdkbase anonfun transformtextrows textanalyticssdk scala at scala concurrent future anonfun apply future scala at scala util success anonfun map try scala at scala util success map try scala at scala concurrent future anonfun map future scala at scala concurrent impl promise promise scala at scala concurrent impl promise anonfun transform promise scala at scala concurrent impl callbackrunnable run promise scala at java util concurrent forkjointask runnableexecuteaction exec forkjointask java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue runtask forkjoinpool java at java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java traceback most recent call last file opt spark python lib pyspark zip pyspark sql readwriter py line in parquet self jwrite parquet path file home trusted service user cluster env env lib site packages java gateway py line in call return value get return value file opt spark python lib pyspark zip pyspark sql utils py line in deco return f a kw file home trusted service user cluster env env lib site packages protocol py line in get return value raise protocol an error occurred while calling parquet org apache spark sparkexception job aborted at org apache spark sql execution datasources fileformatwriter write fileformatwriter scala at org apache spark sql execution datasources insertintohadoopfsrelationcommand run insertintohadoopfsrelationcommand scala at org apache spark sql execution command datawritingcommandexec sideeffectresult lzycompute commands scala at org apache spark 
sql execution command datawritingcommandexec sideeffectresult commands scala at org apache spark sql execution command datawritingcommandexec doexecute commands scala at org apache spark sql execution sparkplan anonfun execute sparkplan scala at org apache spark sql execution sparkplan anonfun executequery sparkplan scala at org apache spark rdd rddoperationscope withscope rddoperationscope scala at org apache spark sql execution sparkplan executequery sparkplan scala at org apache spark sql execution sparkplan execute sparkplan scala at org apache spark sql execution queryexecution tordd lzycompute queryexecution scala at org apache spark sql execution queryexecution tordd queryexecution scala at org apache spark sql dataframewriter anonfun runcommand dataframewriter scala at org apache spark sql execution sqlexecution anonfun withnewexecutionid sqlexecution scala at org apache spark sql execution sqlexecution withsqlconfpropagated sqlexecution scala at org apache spark sql execution sqlexecution anonfun withnewexecutionid sqlexecution scala at org apache spark sql sparksession withactive sparksession scala at org apache spark sql execution sqlexecution withnewexecutionid sqlexecution scala at org apache spark sql dataframewriter runcommand dataframewriter scala at org apache spark sql dataframewriter dataframewriter scala at org apache spark sql dataframewriter saveinternal dataframewriter scala at org apache spark sql dataframewriter save dataframewriter scala at org apache spark sql dataframewriter parquet dataframewriter scala at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at reflection methodinvoker invoke methodinvoker java at reflection reflectionengine invoke reflectionengine java at gateway invoke gateway java at commands abstractcommand 
invokemethod abstractcommand java at commands callcommand execute callcommand java at gatewayconnection run gatewayconnection java at java lang thread run thread java caused by org apache spark sparkexception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid vm executor java lang nullpointerexception at com azure ai textanalytics implementation utility torecognizehealthcareentitiesresults utility java at com azure ai textanalytics analyzehealthcareentityasyncclient totextanalyticspagedresponse analyzehealthcareentityasyncclient java at reactor core publisher fluxmapfuseable mapfuseablesubscriber onnext fluxmapfuseable java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher fluxswitchifempty switchifemptysubscriber onnext fluxswitchifempty java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher monoflatmap flatmapinner onsubscribe monoflatmap java at reactor core publisher monojust subscribe monojust java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxswitchifempty switchifemptysubscriber onnext fluxswitchifempty java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher operators multisubscriptionsubscriber set operators java at reactor core publisher operators multisubscriptionsubscriber onsubscribe operators java at reactor core publisher monojust subscribe monojust java at 
reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monocachetime coordinatorsubscriber signalcached monocachetime java at reactor core publisher monocachetime coordinatorsubscriber onnext monocachetime java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher monoflatmap flatmapmain onsubscribe monoflatmap java at reactor core publisher monojust subscribe monojust java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher monocachetime subscribeorreturn monocachetime java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxcontextwrite contextwritesubscriber onnext fluxcontextwrite java at reactor core publisher fluxdooneach dooneachsubscriber onnext fluxdooneach java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxmap mapsubscriber onnext fluxmap java at reactor core publisher fluxonerrorresume resumesubscriber onnext fluxonerrorresume java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher serializedsubscriber onnext serializedsubscriber java at reactor core publisher fluxretrywhen retrywhenmainsubscriber 
onnext fluxretrywhen java at reactor core publisher fluxonerrorresume resumesubscriber onnext fluxonerrorresume java at reactor core publisher operators monoinnerproducerbase complete operators java at reactor core publisher monosingle singlesubscriber oncomplete monosingle java at reactor core publisher monoflatmapmany flatmapmanyinner oncomplete monoflatmapmany java at reactor core publisher fluxmapfuseable mapfuseablesubscriber oncomplete fluxmapfuseable java at reactor core publisher fluxdofinally dofinallysubscriber oncomplete fluxdofinally java at reactor core publisher fluxmapfuseable mapfuseablesubscriber oncomplete fluxmapfuseable java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monocollect collectsubscriber oncomplete monocollect java at reactor core publisher fluxhandle handlesubscriber oncomplete fluxhandle java at reactor core publisher fluxmap mapconditionalsubscriber oncomplete fluxmap java at reactor netty channel fluxreceive oninboundcomplete fluxreceive java at reactor netty channel channeloperations oninboundcomplete channeloperations java at reactor netty channel channeloperations terminate channeloperations java at reactor netty http client httpclientoperations oninboundnext httpclientoperations java at reactor netty channel channeloperationshandler channelread channeloperationshandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel combinedchannelduplexhandler delegatingchannelhandlercontext firechannelread combinedchannelduplexhandler java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder 
java at io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler ssl sslhandler unwrap sslhandler java at io netty handler ssl sslhandler decodejdkcompatible sslhandler java at io netty handler ssl sslhandler decode sslhandler java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel epoll abstractepollstreamchannel epollstreamunsafe epollinready abstractepollstreamchannel java at io netty channel epoll epolleventloop processready epolleventloop java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util 
concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java suppressed java lang exception block terminated with an error at reactor core publisher blockingsinglesubscriber blockingget blockingsinglesubscriber java at reactor core publisher flux blocklast flux java at com azure core util paging continuablepagedbyiteratorbase requestpage continuablepagedbyiteratorbase java at com azure core util paging continuablepagedbyitemiterable continuablepagedbyitemiterator continuablepagedbyitemiterable java at com azure core util paging continuablepagedbyitemiterable iterator continuablepagedbyitemiterable java at com azure core util paging continuablepagediterable iterator continuablepagediterable java at scala collection convert wrappers jiterablewrapper iterator wrappers scala at scala collection iterablelike foreach iterablelike scala at scala collection iterablelike foreach iterablelike scala at scala collection abstractiterable foreach iterable scala at scala collection traversablelike flatmap traversablelike scala at scala collection traversablelike flatmap traversablelike scala at scala collection abstracttraversable flatmap traversable scala at com microsoft azure synapse ml cognitive healthcaresdk invoketextanalytics textanalyticssdk scala at com microsoft azure synapse ml cognitive textanalyticssdkbase anonfun transformtextrows textanalyticssdk scala at scala concurrent future anonfun apply future scala at scala util success anonfun map try scala at scala util success map try scala at scala concurrent future anonfun map future scala at scala concurrent impl promise promise scala at scala concurrent impl promise anonfun transform promise scala at scala concurrent impl callbackrunnable run promise scala at java util concurrent forkjointask runnableexecuteaction exec forkjointask java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue runtask forkjoinpool java at 
java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java driver stacktrace at org apache spark scheduler dagscheduler failjobandindependentstages dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage adapted dagscheduler scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable arraybuffer foreach arraybuffer scala at org apache spark scheduler dagscheduler abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed adapted dagscheduler scala at scala option foreach option scala at org apache spark scheduler dagscheduler handletasksetfailed dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop doonreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop onreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop onreceive dagscheduler scala at org apache spark util eventloop anon run eventloop scala at org apache spark scheduler dagscheduler runjob dagscheduler scala at org apache spark sparkcontext runjob sparkcontext scala at org apache spark sql execution datasources fileformatwriter write fileformatwriter scala more caused by java lang nullpointerexception at com azure ai textanalytics implementation utility torecognizehealthcareentitiesresults utility java at com azure ai textanalytics analyzehealthcareentityasyncclient totextanalyticspagedresponse analyzehealthcareentityasyncclient java at reactor core publisher fluxmapfuseable mapfuseablesubscriber onnext fluxmapfuseable java at reactor core publisher operators monosubscriber complete operators 
java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher fluxswitchifempty switchifemptysubscriber onnext fluxswitchifempty java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher monoflatmap flatmapinner onsubscribe monoflatmap java at reactor core publisher monojust subscribe monojust java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxswitchifempty switchifemptysubscriber onnext fluxswitchifempty java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher operators multisubscriptionsubscriber set operators java at reactor core publisher operators multisubscriptionsubscriber onsubscribe operators java at reactor core publisher monojust subscribe monojust java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monocachetime coordinatorsubscriber signalcached monocachetime java at reactor core publisher monocachetime coordinatorsubscriber onnext monocachetime java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher monoflatmap flatmapmain onsubscribe monoflatmap java at reactor core publisher monojust 
subscribe monojust java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monodefer subscribe monodefer java at reactor core publisher monocachetime subscribeorreturn monocachetime java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxcontextwrite contextwritesubscriber onnext fluxcontextwrite java at reactor core publisher fluxdooneach dooneachsubscriber onnext fluxdooneach java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher fluxmap mapsubscriber onnext fluxmap java at reactor core publisher fluxonerrorresume resumesubscriber onnext fluxonerrorresume java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher serializedsubscriber onnext serializedsubscriber java at reactor core publisher fluxretrywhen retrywhenmainsubscriber onnext fluxretrywhen java at reactor core publisher fluxonerrorresume resumesubscriber onnext fluxonerrorresume java at reactor core publisher operators monoinnerproducerbase complete operators java at reactor core publisher monosingle singlesubscriber oncomplete monosingle java at reactor core publisher monoflatmapmany flatmapmanyinner oncomplete monoflatmapmany java at reactor core publisher fluxmapfuseable mapfuseablesubscriber oncomplete fluxmapfuseable java at reactor core publisher fluxdofinally dofinallysubscriber oncomplete fluxdofinally java at reactor core publisher fluxmapfuseable mapfuseablesubscriber oncomplete fluxmapfuseable java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher 
monocollect collectsubscriber oncomplete monocollect java at reactor core publisher fluxhandle handlesubscriber oncomplete fluxhandle java at reactor core publisher fluxmap mapconditionalsubscriber oncomplete fluxmap java at reactor netty channel fluxreceive oninboundcomplete fluxreceive java at reactor netty channel channeloperations oninboundcomplete channeloperations java at reactor netty channel channeloperations terminate channeloperations java at reactor netty http client httpclientoperations oninboundnext httpclientoperations java at reactor netty channel channeloperationshandler channelread channeloperationshandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel combinedchannelduplexhandler delegatingchannelhandlercontext firechannelread combinedchannelduplexhandler java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler ssl sslhandler unwrap sslhandler java at io netty handler ssl sslhandler decodejdkcompatible sslhandler java at io netty handler ssl sslhandler decode sslhandler java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode 
bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel epoll abstractepollstreamchannel epollstreamunsafe epollinready abstractepollstreamchannel java at io netty channel epoll epolleventloop processready epolleventloop java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java more suppressed java lang exception block terminated with an error at reactor core publisher blockingsinglesubscriber blockingget blockingsinglesubscriber java at reactor core publisher flux blocklast flux java at com azure core util paging continuablepagedbyiteratorbase requestpage continuablepagedbyiteratorbase java at com azure core util paging continuablepagedbyitemiterable continuablepagedbyitemiterator continuablepagedbyitemiterable java at com azure core util paging continuablepagedbyitemiterable iterator continuablepagedbyitemiterable java at com azure core util paging continuablepagediterable iterator continuablepagediterable java at scala collection convert wrappers 
jiterablewrapper iterator wrappers scala at scala collection iterablelike foreach iterablelike scala at scala collection iterablelike foreach iterablelike scala at scala collection abstractiterable foreach iterable scala at scala collection traversablelike flatmap traversablelike scala at scala collection traversablelike flatmap traversablelike scala at scala collection abstracttraversable flatmap traversable scala at com microsoft azure synapse ml cognitive healthcaresdk invoketextanalytics textanalyticssdk scala at com microsoft azure synapse ml cognitive textanalyticssdkbase anonfun transformtextrows textanalyticssdk scala at scala concurrent future anonfun apply future scala at scala util success anonfun map try scala at scala util success map try scala at scala concurrent future anonfun map future scala at scala concurrent impl promise promise scala at scala concurrent impl promise anonfun transform promise scala at scala concurrent impl callbackrunnable run promise scala at java util concurrent forkjointask runnableexecuteaction exec forkjointask java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue runtask forkjoinpool java at java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java if the bug pertains to a specific feature please tag the appropriate for better visibility additional context works fine doing a show but does not work with the complete dataset ab | 0 |
89,747 | 25,894,429,489 | IssuesEvent | 2022-12-14 20:58:06 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Build 471 for 7.17 with status FAILURE | automation ci-reported Team:Elastic-Agent-Data-Plane build-failures |
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-26T10:57:52.042Z&rangeTo=2022-08-26T11:17:52.042Z&transactionName=BUILD+Beats%2Fbeats%2F7.17&transactionType=job&latencyAggregationType=avg&traceId=fe53c62023e8800e68e85fbf3dc87130&transactionId=898fcb427d2fc043)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-26T11:07:52.042+0000
* Duration: 85 min 2 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 18951 |
| Skipped | 1433 |
| Total | 20384 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
> Show only the first 10 steps failures
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6107/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6108/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6003/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6004/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6007/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6010/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5996/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5998/log/?start=0">here</a></li>
</ul>
##### `Error signal`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5537/log/?start=0">here</a></li>
<li>Description: <code>untar: step failed with error Unable to create live FilePath for beats-ci-immutable-ubuntu-1804-aarch64-1661513758379449736</code></li>
</ul>
##### `Recursively delete the current directory from the workspace`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6112/log/?start=0">here</a></li>
<li>Description: <code>[2022-08-26T11:44:05.517Z] beats-ci-immutable-ubuntu-1804-aarch64-1661514040576211783 was marked off</code></li>
</ul>
</p>
</details>
| 1.0 | Build 471 for 7.17 with status FAILURE -
## :broken_heart: Build Failed
<!-- BUILD BADGES-->
> _the below badges are clickable and redirect to their specific view in the CI or DOCS_
[](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//pipeline) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//tests) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//changes) [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//artifacts) [](http://beats_null.docs-preview.app.elstc.co/diff) [](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-26T10:57:52.042Z&rangeTo=2022-08-26T11:17:52.042Z&transactionName=BUILD+Beats%2Fbeats%2F7.17&transactionType=job&latencyAggregationType=avg&traceId=fe53c62023e8800e68e85fbf3dc87130&transactionId=898fcb427d2fc043)
<!-- BUILD SUMMARY-->
<details><summary>Expand to view the summary</summary>
<p>
#### Build stats
* Start Time: 2022-08-26T11:07:52.042+0000
* Duration: 85 min 2 sec
#### Test stats :test_tube:
| Test | Results |
| ------------ | :-----------------------------: |
| Failed | 0 |
| Passed | 18951 |
| Skipped | 1433 |
| Total | 20384 |
</p>
</details>
<!-- TEST RESULTS IF ANY-->
<!-- STEPS ERRORS IF ANY -->
### Steps errors [](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F7.17/detail/7.17/471//pipeline)
<details><summary>Expand to view the steps failures</summary>
<p>
> Show only the first 10 steps failures
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6107/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6108/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6003/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6004/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6007/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6010/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5996/log/?start=0">here</a></li>
</ul>
##### `Checks if running on a Unix-like node`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5998/log/?start=0">here</a></li>
</ul>
##### `Error signal`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/5537/log/?start=0">here</a></li>
<li>Description: <code>untar: step failed with error Unable to create live FilePath for beats-ci-immutable-ubuntu-1804-aarch64-1661513758379449736</code></li>
</ul>
##### `Recursively delete the current directory from the workspace`
<ul>
<li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/7.17/runs/471/steps/6112/log/?start=0">here</a></li>
<li>Description: <code>[2022-08-26T11:44:05.517Z] beats-ci-immutable-ubuntu-1804-aarch64-1661514040576211783 was marked off</code></li>
</ul>
</p>
</details>
| non_priority | build for with status failure broken heart build failed the below badges are clickable and redirect to their specific view in the ci or docs expand to view the summary build stats start time duration min sec test stats test tube test results failed passed skipped total steps errors expand to view the steps failures show only the first steps failures checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href checks if running on a unix like node took min sec view more details a href error signal took min sec view more details a href description untar step failed with error unable to create live filepath for beats ci immutable ubuntu recursively delete the current directory from the workspace took min sec view more details a href description beats ci immutable ubuntu was marked off | 0 |
135,841 | 19,675,506,664 | IssuesEvent | 2022-01-11 11:55:15 | gnosis/cowswap | https://api.github.com/repos/gnosis/cowswap | opened | Claim: adjust margins in sections in Profile page in a mobile view | 🐞 Bug app:CowSwap ⬇ Low Protofire 🎨 Design | It might seem that margins are different in Profile and Affiliate program sections when open the profile page in a mobile view.
Moreover, there is a lot of useless space at the top of the page.
See the image:

I think, it would be nice to adjust margins to be the same in both sections (reduce it in the Profile section). | 1.0 | Claim: adjust margins in sections in Profile page in a mobile view - It might seem that margins are different in Profile and Affiliate program sections when open the profile page in a mobile view.
Moreover, there is a lot of useless space at the top of the page.
See the image:

I think, it would be nice to adjust margins to be the same in both sections (reduce it in the Profile section). | non_priority | claim adjust margins in sections in profile page in a mobile view it might seem that margins are different in profile and affiliate program sections when open the profile page in a mobile view moreover there is a lot of useless space at the top of the page see the image i think it would be nice to adjust margins to be the same in both sections reduce it in the profile section | 0 |
14,758 | 25,705,754,907 | IssuesEvent | 2022-12-07 00:29:06 | Croquembouche/pyWhat-2022 | https://api.github.com/repos/Croquembouche/pyWhat-2022 | closed | FR 5: Give some intermediate outputs to the user | Product Backlog Functional Requirements | |FR 5: Give some intermediate outputs to the user |
|----------------------|
| **Estimate:** 3 |
| **Priority:** Should have |
|**Story**: As a pyWhat user, I want to have some intermediate outputs when the app is still dealing with the input especially for this large input so that I can know the app is working and not crashed. |
| **Acceptance criteria:** |
|1. The application is able to give some outputs when the key points are done. |
| 1.0 | FR 5: Give some intermediate outputs to the user - |FR 5: Give some intermediate outputs to the user |
|----------------------|
| **Estimate:** 3 |
| **Priority:** Should have |
|**Story**: As a pyWhat user, I want to have some intermediate outputs when the app is still dealing with the input especially for this large input so that I can know the app is working and not crashed. |
| **Acceptance criteria:** |
|1. The application is able to give some outputs when the key points are done. |
| non_priority | fr give some intermediate outputs to the user fr give some intermediate outputs to the user estimate priority should have story as a pywhat user i want to have some intermediate outputs when the app is still dealing with the input especially for this large input so that i can know the app is working and not crashed acceptance criteria the application is able to give some outputs when the key points are done | 0 |
168,079 | 13,058,194,565 | IssuesEvent | 2020-07-30 08:38:50 | Joystream/joystream | https://api.github.com/repos/Joystream/joystream | closed | Network testing: worker application happy case | estimate-12h network-integration-test nicaea working-group | Implement the following scenario:
- run "buy membership" testing scenario, resulting in N member accounts;
- set lead using sudo;
- add N worker openings;
- begin accepting worker application;
- apply on worker openings using N member accounts;
- begin worker applications review;
- fill working openings.
The scenario will result in creation of N workers | 1.0 | Network testing: worker application happy case - Implement the following scenario:
- run "buy membership" testing scenario, resulting in N member accounts;
- set lead using sudo;
- add N worker openings;
- begin accepting worker application;
- apply on worker openings using N member accounts;
- begin worker applications review;
- fill working openings.
The scenario will result in creation of N workers | non_priority | network testing worker application happy case implement the following scenario run buy membership testing scenario resulting in n member accounts set lead using sudo add n worker openings begin accepting worker application apply on worker openings using n member accounts begin worker applications review fill working openings the scenario will result in creation of n workers | 0 |
66,874 | 14,799,008,659 | IssuesEvent | 2021-01-13 01:12:39 | doc-ai/snipe-it | https://api.github.com/repos/doc-ai/snipe-it | opened | CVE-2020-24025 (Medium) detected in node-sass-4.9.0.tgz | security vulnerability | ## CVE-2020-24025 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.9.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.0.tgz</a></p>
<p>Path to dependency file: snipe-it/package.json</p>
<p>Path to vulnerable library: snipe-it/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-2.1.11.tgz (Root Library)
- :x: **node-sass-4.9.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025>CVE-2020-24025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.9.0","isTransitiveDependency":true,"dependencyTree":"laravel-mix:2.1.11;node-sass:4.9.0","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-24025","vulnerabilityDetails":"Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-24025 (Medium) detected in node-sass-4.9.0.tgz - ## CVE-2020-24025 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.9.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.9.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.9.0.tgz</a></p>
<p>Path to dependency file: snipe-it/package.json</p>
<p>Path to vulnerable library: snipe-it/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-2.1.11.tgz (Root Library)
- :x: **node-sass-4.9.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
<p>Publish Date: 2021-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025>CVE-2020-24025</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.9.0","isTransitiveDependency":true,"dependencyTree":"laravel-mix:2.1.11;node-sass:4.9.0","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-24025","vulnerabilityDetails":"Certificate validation in node-sass 2.0.0 to 4.14.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24025","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file snipe it package json path to vulnerable library snipe it node modules node sass package json dependency hierarchy laravel mix tgz root library x node sass tgz vulnerable library vulnerability details certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails certificate validation in node sass to is disabled when requesting binaries even if the user is not specifying an alternative download path vulnerabilityurl | 0 |
15,317 | 5,096,524,727 | IssuesEvent | 2017-01-03 18:28:58 | adamfowleruk/mlcplusplus | https://api.github.com/repos/adamfowleruk/mlcplusplus | opened | CMake modernization and improvements | code quality enhancement installers | · Boost Log pulled in incorrectly via components (should be log log_setup).
· Use of old-style commands (missing target_include_directories, target_compile_definitions, target_compile_options) makes CMake export for install-based config file impossible, as well as having improper separation of concerns.
· Missing CTest integration.
· Missing install-based CMake config file.
· Can use source_group() on windows and folders to organize MSVC project.
· Unclear what point of BUILD_SHARED_LIBS is since resulting library created is always shared. Also, don’t need it on Linux but present anyway.
· Consider modifying usage of cpprest so that Eclipse Unix makefiles are generated on OSX and Linux instead of just makefiles.
· Inclusion of cpprest library done twice… not sure why.
· Consider component-based installer.
· Can also use compiler compatibility/feature detection headers.
· Missing some files in installer – mlclientConfig.h, for example.
· Compiler feature detection compatibility headers.
· Surprised that hardcoded directories don’t get you on Windows…
link_directories(/usr/lib /usr/local/lib /usr/local/opt/openssl/lib ) | 1.0 | CMake modernization and improvements - · Boost Log pulled in incorrectly via components (should be log log_setup).
· Use of old-style commands (missing target_include_directories, target_compile_definitions, target_compile_options) makes CMake export for install-based config file impossible, as well as having improper separation of concerns.
· Missing CTest integration.
· Missing install-based CMake config file.
· Can use source_group() on windows and folders to organize MSVC project.
· Unclear what point of BUILD_SHARED_LIBS is since resulting library created is always shared. Also, don’t need it on Linux but present anyway.
· Consider modifying usage of cpprest so that Eclipse Unix makefiles are generated on OSX and Linux instead of just makefiles.
· Inclusion of cpprest library done twice… not sure why.
· Consider component-based installer.
· Can also use compiler compatibility/feature detection headers.
· Missing some files in installer – mlclientConfig.h, for example.
· Compiler feature detection compatibility headers.
· Surprised that hardcoded directories don’t get you on Windows…
link_directories(/usr/lib /usr/local/lib /usr/local/opt/openssl/lib ) | non_priority | cmake modernization and improvements · boost log pulled in incorrectly via components should be log log setup · use of old style commands missing target include directories target compile definitions target compile options makes cmake export for install based config file impossible as well as having improper separation of concerns · missing ctest integration · missing install based cmake config file · can use source group on windows and folders to organize msvc project · unclear what point of build shared libs is since resulting library created is always shared also don’t need it on linux but present anyway · consider modifying usage of cpprest so that eclipse unix makefiles are generated on osx and linux instead of just makefiles · inclusion of cpprest library done twice… not sure why · consider component based installer · can also use compiler compatibility feature detection headers · missing some files in installer – mlclientconfig h for example · compiler feature detection compatibility headers · surprised that hardcoded directories don’t get you on windows… link directories usr lib usr local lib usr local opt openssl lib | 0 |
100,420 | 12,520,963,944 | IssuesEvent | 2020-06-03 16:42:43 | MTRNord/Daydream | https://api.github.com/repos/MTRNord/Daydream | opened | Redesign | Design enhancement | Content will come in a few hours/days. This is a placeholder
Notes
- [ ] Settings
- [ ] Global Settings
- [ ] Theme Settings
- [ ] Username/Avatar Settings
- [ ] Devices
- [ ] Logout
- [ ] ...
- [ ] Room Settings
- [ ] Creating Rooms
- [ ] Member List
- [ ] .... | 1.0 | Redesign - Content will come in a few hours/days. This is a placeholder
Notes
- [ ] Settings
- [ ] Global Settings
- [ ] Theme Settings
- [ ] Username/Avatar Settings
- [ ] Devices
- [ ] Logout
- [ ] ...
- [ ] Room Settings
- [ ] Creating Rooms
- [ ] Member List
- [ ] .... | non_priority | redesign content will come in a few hours days this is a placeholder notes settings global settings theme settings username avatar settings devices logout room settings creating rooms member list | 0 |
117,761 | 25,193,766,849 | IssuesEvent | 2022-11-12 08:26:14 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Joomla Color Picker unable to use transparent | No Code Attached Yet J3 Issue | ### Steps to reproduce the issue
Test with any color picker in Joomla such as the Protostar Template and try to add the word transparent. Transparent is a CSS color value for background-color and should work. In other frameworks etc custom color pickers are used to get over this issue.
### Expected result
Color value saved as transparent
### Actual result
Defaults to last saved value
### System information (as much as possible)
Joomla 3x 3.7 etc
### Additional comments
Adding this would be part of enabling 3rd party developers to conform to Joomla's API and features.
| 1.0 | Joomla Color Picker unable to use transparent - ### Steps to reproduce the issue
Test with any color picker in Joomla such as the Protostar Template and try to add the word transparent. Transparent is a CSS color value for background-color and should work. In other frameworks etc custom color pickers are used to get over this issue.
### Expected result
Color value saved as transparent
### Actual result
Defaults to last saved value
### System information (as much as possible)
Joomla 3x 3.7 etc
### Additional comments
Adding this would be part of enabling 3rd party developers to conform to Joomla's API and features.
| non_priority | joomla color picker unable to use transparent steps to reproduce the issue test with any color picker in joomla such as the protostar template and try to add the word transparent transparent is a css color value for background color and should work in other frameworks etc custom color pickers are used to get over this issue expected result color value saved as transparent actual result defaults to last saved value system information as much as possible joomla etc additional comments adding this would be part of enabling party developers to conform to joomla s api and features | 0 |
7,872 | 19,725,121,954 | IssuesEvent | 2022-01-13 19:09:40 | kubernetes/enhancements | https://api.github.com/repos/kubernetes/enhancements | closed | [keps] Proposing a `scope` field for KEP metadata | kind/feature sig/architecture lifecycle/rotten area/enhancements | I've mentioned in some of the recent Enhancements subproject meetings, but filing an issue now, as I was reminded by a line of code in https://github.com/kubernetes/enhancements/pull/2280.
As we've used KEPs for some time as a community, we've seen them implemented to capture changes in:
- `kubernetes/kubernetes` (in-tree)
- out-of-tree components
- infrastructure
- policy
For non-`in-tree` enhancements, much of the metadata/enforcement for KEPs may not be relevant to proposing/implementing a change in policy, infrastructure, out-of-tree components.
Suggesting here that we introduce a `scope` field to KEP metadata to allow for scenarios where we might want to skip stricter validation checks.
Proposed values for the `scope` field:
- `in-tree` (assumed if not populated)
- `out-of-tree`
- `policy`
(Somewhat related to https://github.com/kubernetes/community/issues/3795 and https://github.com/kubernetes/sig-release/issues/486.)
---
From @jeremyrickard in https://github.com/kubernetes/enhancements/issues/1783:
> Over the course of the last few releases, we've seen some Enhancement issues/KEPs that are really focused on policy or tooling changes. These often don't _really_ align with a release / milestone like KEPs that represent new features that can graduate stages within the cadence of a release. As we add more validation to `kepval`, there are also things in the KEP template that may or may not be applicable to a given KEP. This seems like a good opportunity to evolve the KEP template and process.
>
> A recent example is the [Increase Kubernetes support window to one year]
> ([#1498 (comment)](https://github.com/kubernetes/enhancements/issues/1498#issuecomment-629402892)) KEP. This doesn't really fit into a release like a feature would, but has been discussed in terms of a KEP. [This](https://github.com/kubernetes/enhancements/issues/1498#issuecomment-629402892) comment suggests that as it doesn't fit into normal graduation criteria, it is just being marked `stable` and is either "delivered" or not "delivered" within a given release.
>
> There is a related issue in k/community raising the question: [Should KEPs be used for process changes?](https://github.com/kubernetes/community/issues/3795), that would be good to discuss as well.
>
> I propose two things to address this:
>
> * As a first step, perhaps we can add a `type` field to the KEP metadata with some values like "feature", "policy", "tooling" or "other" so we can apply appropriate validations. These can be evolved as new types are identified. Perhaps we can also provide multiple KEP templates that are tailored for the type of "enhancement". This would allow proper validation of various KEP types.
>
> * Additionally, we should provide some guidance/documentation about using KEPs for things like policy changes or non-feature tooling changes to address questions like [kubernetes/community#3795](https://github.com/kubernetes/community/issues/3795).
cc: @kubernetes/enhancements @spiffxp
PRR: @johnbelamaric @deads2k @wojtek-t | 1.0 | [keps] Proposing a `scope` field for KEP metadata - I've mentioned in some of the recent Enhancements subproject meetings, but filing an issue now, as I was reminded by a line of code in https://github.com/kubernetes/enhancements/pull/2280.
As we've used KEPs for some time as a community, we've seen them implemented to capture changes in:
- `kubernetes/kubernetes` (in-tree)
- out-of-tree components
- infrastructure
- policy
For non-`in-tree` enhancements, much of the metadata/enforcement for KEPs may not be relevant to proposing/implementing a change in policy, infrastructure, out-of-tree components.
Suggesting here that we introduce a `scope` field to KEP metadata to allow for scenarios where we might want to skip stricter validation checks.
Proposed values for the `scope` field:
- `in-tree` (assumed if not populated)
- `out-of-tree`
- `policy`
(Somewhat related to https://github.com/kubernetes/community/issues/3795 and https://github.com/kubernetes/sig-release/issues/486.)
---
From @jeremyrickard in https://github.com/kubernetes/enhancements/issues/1783:
> Over the course of the last few releases, we've seen some Enhancement issues/KEPs that are really focused on policy or tooling changes. These often don't _really_ align with a release / milestone like KEPs that represent new features that can graduate stages within the cadence of a release. As we add more validation to `kepval`, there are also things in the KEP template that may or may not be applicable to a given KEP. This seems like a good opportunity to evolve the KEP template and process.
>
> A recent example is the [Increase Kubernetes support window to one year]
> ([#1498 (comment)](https://github.com/kubernetes/enhancements/issues/1498#issuecomment-629402892)) KEP. This doesn't really fit into a release like a feature would, but has been discussed in terms of a KEP. [This](https://github.com/kubernetes/enhancements/issues/1498#issuecomment-629402892) comment suggests that as it doesn't fit into normal graduation criteria, it is just being marked `stable` and is either "delivered" or not "delivered" within a given release.
>
> There is a related issue in k/community raising the question: [Should KEPs be used for process changes?](https://github.com/kubernetes/community/issues/3795), that would be good to discuss as well.
>
> I propose two things to address this:
>
> * As a first step, perhaps we can add a `type` field to the KEP metadata with some values like "feature", "policy", "tooling" or "other" so we can apply appropriate validations. These can be evolved as new types are identified. Perhaps we can also provide multiple KEP templates that are tailored for the type of "enhancement". This would allow proper validation of various KEP types.
>
> * Additionally, we should provide some guidance/documentation about using KEPs for things like policy changes or non-feature tooling changes to address questions like [kubernetes/community#3795](https://github.com/kubernetes/community/issues/3795).
cc: @kubernetes/enhancements @spiffxp
PRR: @johnbelamaric @deads2k @wojtek-t | non_priority | proposing a scope field for kep metadata i ve mentioned in some of the recent enhancements subproject meetings but filing an issue now as i was reminded by a line of code in as we ve used keps for some time as a community we ve seen them implemented to capture changes in kubernetes kubernetes in tree out of tree components infrastructure policy for non in tree enhancements much of the metadata enforcement for keps may not be relevant to proposing implementing a change in policy infrastructure out of tree components suggesting here that we introduce a scope field to kep metadata to allow for scenarios where we might want to skip stricter validation checks proposed values for the scope field in tree assumed if not populated out of tree policy somewhat related to and from jeremyrickard in over the course of the last few releases we ve seen some enhancement issues keps that are really focused on policy or tooling changes these often don t really align with a release milestone like keps that represent new features that can graduate stages within the cadence of a release as we add more validation to kepval there are also things in the kep template that may or may not be applicable to a given kep this seems like a good opportunity to evolve the kep template and process a recent example is the kep this doesn t really fit into a release like a feature would but has been discussed in terms of a kep comment suggests that as it doesn t fit into normal graduation criteria it is just being marked stable and is either delivered or not delivered within a given release there is a related issue in k community raising the question that would be good to discuss as well i propose two things to address this as a first step perhaps we can add a type field to the kep metadata with some values like feature policy tooling or other so we can apply appropriate validations these can be evolved as new types are identified perhaps we can 
also provide multiple kep templates that are tailored for the type of enhancement this would allow proper validation of various kep types additionally we should provide some guidance documentation about using keps for things like policy changes or non feature tooling changes to address questions like cc kubernetes enhancements spiffxp prr johnbelamaric wojtek t | 0 |
337,969 | 24,564,519,842 | IssuesEvent | 2022-10-13 00:45:44 | BenjaminnHuang/fa22-cse110-lab3 | https://api.github.com/repos/BenjaminnHuang/fa22-cse110-lab3 | opened | colors do not match | documentation pending | # what are the style issues?
The color combinations on the website are not well-looking.
# why need to be fixed?
For a better looking website
| 1.0 | colors do not match - # what are the style issues?
The color combinations on the website are not well-looking.
# why need to be fixed?
For a better looking website
| non_priority | colors do not match what are the style issues the color combinations on the website are not well looking why need to be fixed for a better looking website | 0 |
26,174 | 5,229,642,564 | IssuesEvent | 2017-01-29 07:07:14 | matplotlib/matplotlib | https://api.github.com/repos/matplotlib/matplotlib | opened | Restore `interpolation_none_vs_nearest` example somewhere else in the docs | Documentation | The `interpolation_none_vs_nearest` example was removed in #7952 as @NelleV pointed out that this really doesn't belong in a gallery (no one will really find that info there). I generally agree with that sentiment, but still believe this information should be kept somewhere; @jenshnielsen seemed to agree with this point as well.
Having skimmed (very quickly) through the docs, I believe one place this could go is at the end of the image tutorial (http://matplotlib.org/devdocs/users/image_tutorial.html), which already contains a discussion about interpolation methods. | 1.0 | Restore `interpolation_none_vs_nearest` example somewhere else in the docs - The `interpolation_none_vs_nearest` example was removed in #7952 as @NelleV pointed out that this really doesn't belong in a gallery (no one will really find that info there). I generally agree with that sentiment, but still believe this information should be kept somewhere; @jenshnielsen seemed to agree with this point as well.
Having skimmed (very quickly) through the docs, I believe one place this could go is at the end of the image tutorial (http://matplotlib.org/devdocs/users/image_tutorial.html), which already contains a discussion about interpolation methods. | non_priority | restore interpolation none vs nearest example somewhere else in the docs the interpolation none vs nearest example was removed in as nellev pointed out that this really doesn t belong in a gallery no one will really find that info there i generally agree with that sentiment but still believe this information should be kept somewhere jenshnielsen seemed to agree with this point as well having skimmed very quickly through the docs i believe one place this could go is at the end of the image tutorial which already contains a discussion about interpolation methods | 0 |
82,820 | 7,853,451,150 | IssuesEvent | 2018-06-20 17:27:57 | freedomofpress/securedrop | https://api.github.com/repos/freedomofpress/securedrop | closed | [admin CLI integration testing] securedrop-admin update | goals: more tests | ## Description
We should have integration tests for the `securedrop-admin update` command. This command has a lot of unit tests, but the tests do a lot of mocking out of subprocess calls, I recommend examining those first.
Parent ticket: #3341 | 1.0 | [admin CLI integration testing] securedrop-admin update - ## Description
We should have integration tests for the `securedrop-admin update` command. This command has a lot of unit tests, but the tests do a lot of mocking out of subprocess calls, I recommend examining those first.
Parent ticket: #3341 | non_priority | securedrop admin update description we should have integration tests for the securedrop admin update command this command has a lot of unit tests but the tests do a lot of mocking out of subprocess calls i recommend examining those first parent ticket | 0 |
14,383 | 3,832,993,958 | IssuesEvent | 2016-04-01 00:08:30 | chef/chef-manage-issues | https://api.github.com/repos/chef/chef-manage-issues | closed | Create CONTRIBUTING.md | documentation | @micgo said:
> all projects should have a CONTRIB.md for example: https://github.com/chef/chef-manage has no info on how to contribute. our public repos like chef are great! https://github.com/chef/chef/blob/master/CONTRIBUTING.md | 1.0 | Create CONTRIBUTING.md - @micgo said:
> all projects should have a CONTRIB.md for example: https://github.com/chef/chef-manage has no info on how to contribute. our public repos like chef are great! https://github.com/chef/chef/blob/master/CONTRIBUTING.md | non_priority | create contributing md micgo said all projects should have a contrib md for example has no info on how to contribute our public repos like chef are great | 0 |
187,321 | 14,427,551,308 | IssuesEvent | 2020-12-06 04:45:40 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | pbolla0818/oci_terraform: oci/cloud_guard_responder_recipe_test.go; 16 LoC | fresh small test |
Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/cloud_guard_responder_recipe_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to responderRecipeId is reassigned at line 346
[Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, responderRecipeId := range responderRecipeIds {
if ok := SweeperDefaultResourceId[responderRecipeId]; !ok {
deleteResponderRecipeRequest := oci_cloud_guard.DeleteResponderRecipeRequest{}
deleteResponderRecipeRequest.ResponderRecipeId = &responderRecipeId
deleteResponderRecipeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "cloud_guard")
_, error := cloudGuardClient.DeleteResponderRecipe(context.Background(), deleteResponderRecipeRequest)
if error != nil {
fmt.Printf("Error deleting ResponderRecipe %s %s, It is possible that the resource is already deleted. Please verify manually \n", responderRecipeId, error)
continue
}
waitTillCondition(testAccProvider, &responderRecipeId, responderRecipeSweepWaitCondition, time.Duration(3*time.Minute),
responderRecipeSweepResponseFetchOperation, "cloud_guard", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
| 1.0 | pbolla0818/oci_terraform: oci/cloud_guard_responder_recipe_test.go; 16 LoC -
Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/cloud_guard_responder_recipe_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to responderRecipeId is reassigned at line 346
[Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, responderRecipeId := range responderRecipeIds {
if ok := SweeperDefaultResourceId[responderRecipeId]; !ok {
deleteResponderRecipeRequest := oci_cloud_guard.DeleteResponderRecipeRequest{}
deleteResponderRecipeRequest.ResponderRecipeId = &responderRecipeId
deleteResponderRecipeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "cloud_guard")
_, error := cloudGuardClient.DeleteResponderRecipe(context.Background(), deleteResponderRecipeRequest)
if error != nil {
fmt.Printf("Error deleting ResponderRecipe %s %s, It is possible that the resource is already deleted. Please verify manually \n", responderRecipeId, error)
continue
}
waitTillCondition(testAccProvider, &responderRecipeId, responderRecipeSweepWaitCondition, time.Duration(3*time.Minute),
responderRecipeSweepResponseFetchOperation, "cloud_guard", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
| non_priority | oci terraform oci cloud guard responder recipe test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to responderrecipeid is reassigned at line click here to show the line s of go which triggered the analyzer go for responderrecipeid range responderrecipeids if ok sweeperdefaultresourceid ok deleteresponderreciperequest oci cloud guard deleteresponderreciperequest deleteresponderreciperequest responderrecipeid responderrecipeid deleteresponderreciperequest requestmetadata retrypolicy getretrypolicy true cloud guard error cloudguardclient deleteresponderrecipe context background deleteresponderreciperequest if error nil fmt printf error deleting responderrecipe s s it is possible that the resource is already deleted please verify manually n responderrecipeid error continue waittillcondition testaccprovider responderrecipeid responderrecipesweepwaitcondition time duration time minute responderrecipesweepresponsefetchoperation cloud guard true leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
224,019 | 24,766,775,222 | IssuesEvent | 2022-10-22 16:25:07 | hapifhir/hapi-fhir | https://api.github.com/repos/hapifhir/hapi-fhir | reopened | CVE-2022-25857 (High) detected in snakeyaml-1.30.jar | security vulnerability | ## CVE-2022-25857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.30.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p>
<p>Path to dependency file: /hapi-fhir-server-openapi/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.7.4.jar (Root Library)
- :x: **snakeyaml-1.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hapifhir/hapi-fhir/commit/b59f2d05a7d0fd10c7b03bb6f0ebf97757172a71">b59f2d05a7d0fd10c7b03bb6f0ebf97757172a71</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due missing to nested depth limitation for collections.
<p>Publish Date: 2022-08-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p>
<p>Release Date: 2022-08-30</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-25857 (High) detected in snakeyaml-1.30.jar - ## CVE-2022-25857 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.30.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p>
<p>Path to dependency file: /hapi-fhir-server-openapi/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.7.4.jar (Root Library)
- :x: **snakeyaml-1.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hapifhir/hapi-fhir/commit/b59f2d05a7d0fd10c7b03bb6f0ebf97757172a71">b59f2d05a7d0fd10c7b03bb6f0ebf97757172a71</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due missing to nested depth limitation for collections.
<p>Publish Date: 2022-08-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p>
<p>Release Date: 2022-08-30</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file hapi fhir server openapi pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter jar root library x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details the package org yaml snakeyaml from and before are vulnerable to denial of service dos due missing to nested depth limitation for collections publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend | 0 |
50,195 | 10,467,596,647 | IssuesEvent | 2019-09-22 06:57:17 | wolf-leo/wolfcode-comments | https://api.github.com/repos/wolf-leo/wolfcode-comments | opened | 数十万PhpStudy用户被植入后门,快来检测你是否已沦为“肉鸡” | Gitalk https://www.wolfcode.com.cn/info/161/ | https://www.wolfcode.com.cn/info/161/
北京时间9月20日,杭州公安发布《杭州警方通报打击涉网违法犯罪暨‘净网2019’专项行动战果》一文,文章曝光了国内知名PHP调试环境程序集成包“PhpStudy软件”遭到黑客篡改并植入“后门”。截至案发,近百万PHP用户中超过67万用户已被黑客控制,并大肆盗取账号密码、聊天记录、设备码类等敏感数据多达10万多组,非法牟利600多万元。 | 1.0 | 数十万PhpStudy用户被植入后门,快来检测你是否已沦为“肉鸡” - https://www.wolfcode.com.cn/info/161/
北京时间9月20日,杭州公安发布《杭州警方通报打击涉网违法犯罪暨‘净网2019’专项行动战果》一文,文章曝光了国内知名PHP调试环境程序集成包“PhpStudy软件”遭到黑客篡改并植入“后门”。截至案发,近百万PHP用户中超过67万用户已被黑客控制,并大肆盗取账号密码、聊天记录、设备码类等敏感数据多达10万多组,非法牟利600多万元。 | non_priority | 数十万phpstudy用户被植入后门,快来检测你是否已沦为 ldquo 肉鸡 rdquo ,杭州公安发布《杭州警方通报打击涉网违法犯罪暨‘ ’专项行动战果》一文,文章曝光了国内知名php调试环境程序集成包“phpstudy软件”遭到黑客篡改并植入“后门”。截至案发, ,并大肆盗取账号密码、聊天记录、 , 。 | 0 |
251,838 | 21,525,465,603 | IssuesEvent | 2022-04-28 17:58:16 | damccorm/test-migration-target | https://api.github.com/repos/damccorm/test-migration-target | opened | SpannerChangeStreamErrorTest.testUnavailableExceptionRetries flaky | bug io-java-gcp test-failures P2 | Example failures:
* https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/21726/testReport/junit/org.apache.beam.sdk.io.gcp.spanner.changestreams/SpannerChangeStreamErrorTest/testUnavailableExceptionRetries/
* https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/21723/testReport/junit/org.apache.beam.sdk.io.gcp.spanner.changestreams/SpannerChangeStreamErrorTest/testUnavailableExceptionRetries/
{noformat}
java.lang.AssertionError:
Expected: a value greater than <1>
but: <0> was less than <1>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
at org.apache.beam.sdk.io.gcp.spanner.changestreams.SpannerChangeStreamErrorTest.testUnavailableExceptionRetries(SpannerChangeStreamErrorTest.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:323)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
{noformat}
Imported from Jira [BEAM-14152](https://issues.apache.org/jira/browse/BEAM-14152). Original Jira may contain additional context.
Reported by: lcwik. Jira was originally assigned to pabloem. | 1.0 | SpannerChangeStreamErrorTest.testUnavailableExceptionRetries flaky - Example failures:
* https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/21726/testReport/junit/org.apache.beam.sdk.io.gcp.spanner.changestreams/SpannerChangeStreamErrorTest/testUnavailableExceptionRetries/
* https://ci-beam.apache.org/job/beam_PreCommit_Java_Commit/21723/testReport/junit/org.apache.beam.sdk.io.gcp.spanner.changestreams/SpannerChangeStreamErrorTest/testUnavailableExceptionRetries/
{noformat}
java.lang.AssertionError:
Expected: a value greater than <1>
but: <0> was less than <1>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
at org.apache.beam.sdk.io.gcp.spanner.changestreams.SpannerChangeStreamErrorTest.testUnavailableExceptionRetries(SpannerChangeStreamErrorTest.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:323)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
{noformat}
Imported from Jira [BEAM-14152](https://issues.apache.org/jira/browse/BEAM-14152). Original Jira may contain additional context.
Reported by: lcwik. Jira was originally assigned to pabloem. | non_priority | spannerchangestreamerrortest testunavailableexceptionretries flaky example failures noformat java lang assertionerror expected a value greater than but was less than at org hamcrest matcherassert assertthat matcherassert java at org hamcrest matcherassert assertthat matcherassert java at org apache beam sdk io gcp spanner changestreams spannerchangestreamerrortest testunavailableexceptionretries spannerchangestreamerrortest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org apache beam sdk testing testpipeline evaluate testpipeline java at org junit rules expectedexception expectedexceptionstatement evaluate expectedexception java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java noformat imported from jira original jira may contain additional context reported by lcwik jira was originally assigned to pabloem | 0 |
168,223 | 26,617,617,617 | IssuesEvent | 2023-01-24 08:44:17 | webstudio-is/webstudio-designer | https://api.github.com/repos/webstudio-is/webstudio-designer | closed | Fix font-size parsing in transform-figma-tokens.ts | type:bug complexity:low area:design system prio:1 | https://github.com/webstudio-is/webstudio-designer/actions/runs/3991611760/jobs/6846587517
> "Invalid enum value. Expected 'Thin' | 'Hairline' | 'Extra Light' | 'Ultra Light' | 'Light' | 'Normal' | 'Regular' | 'Medium' | 'Semi Bold' | 'Demi Bold' | 'Bold' | 'Extra Bold' | 'Ultra Bold' | 'Black' | 'Heavy' | 'Extra Black' | 'Ultra Black', received 'ExtraLight'"
We expect 'Extra Light' but get 'ExtraLight' from Figma | 1.0 | Fix font-size parsing in transform-figma-tokens.ts - https://github.com/webstudio-is/webstudio-designer/actions/runs/3991611760/jobs/6846587517
> "Invalid enum value. Expected 'Thin' | 'Hairline' | 'Extra Light' | 'Ultra Light' | 'Light' | 'Normal' | 'Regular' | 'Medium' | 'Semi Bold' | 'Demi Bold' | 'Bold' | 'Extra Bold' | 'Ultra Bold' | 'Black' | 'Heavy' | 'Extra Black' | 'Ultra Black', received 'ExtraLight'"
We expect 'Extra Light' but get 'ExtraLight' from Figma | non_priority | fix font size parsing in transform figma tokens ts invalid enum value expected thin hairline extra light ultra light light normal regular medium semi bold demi bold bold extra bold ultra bold black heavy extra black ultra black received extralight we expect extra light but get extralight from figma | 0 |
67,193 | 8,100,932,310 | IssuesEvent | 2018-08-12 06:48:09 | leominov/it-events-ekb | https://api.github.com/repos/leominov/it-events-ekb | closed | Design Update 1.1, 11 августа | approved area/design period/2018.08 | [Design Update 1.1](https://eventskbkontur.timepad.ru/event/778216/)
Краткое описание события.
**Где**: Офис СКБ Контур, Малопрудная, 5
**Когда**: 11 августа, 16:00
**Регистрация**: https://eventskbkontur.timepad.ru/event/778216/
**Взять с собой**: – | 1.0 | Design Update 1.1, 11 августа - [Design Update 1.1](https://eventskbkontur.timepad.ru/event/778216/)
Краткое описание события.
**Где**: Офис СКБ Контур, Малопрудная, 5
**Когда**: 11 августа, 16:00
**Регистрация**: https://eventskbkontur.timepad.ru/event/778216/
**Взять с собой**: – | non_priority | design update августа краткое описание события где офис скб контур малопрудная когда августа регистрация взять с собой – | 0 |
229,161 | 17,515,238,094 | IssuesEvent | 2021-08-11 05:31:49 | news-reply-sentiment-analysis/news-reply-sentiment-analysis | https://api.github.com/repos/news-reply-sentiment-analysis/news-reply-sentiment-analysis | closed | 기존에 있던 이슈는 복사되지 않네요. | documentation help wanted | ## 이런
@waltererz 님. 현재 백엔드 팀이 구성되어 있습니다. 프론트엔드도 이처럼 구성해주시기 부탁드립니다.
포함되는 인원은 저와 @SUMIN-WEE 님 그리고 @waltererz 님이 포함되면 될 것 같습니다.
백엔드와 이 `organization`의 경우 제가 썸네일을 만들었는데요. 관심 있으시다면, [미리캔버스](https://www.miricanvas.com/)에서 만들어보시는 것도 좋겠습니다. 혹은 만들면 좋겠지만, 어려우시다면 제가 원하시는 디자인으로 만들겠습니다.
@SUMIN-WEE 님. 기획팀을 별도로 마련하지 않고 있는데요. 기획을 진행하실 때 필요한 자료를 정리해주셨으면 합니다.
제가 구체적으로 오더를 드리진 않은 상황이라 명료하지 않을 것 같은데요. 우선 아래 리스트를 참고해주세요.
- 랜딩 페이지 구성. 어플리케이션 소개 등
- 다른 이슈에서 언급한 로그인 기능 정리.
- `@.@` 제가 까먹고 있는 무언가..
다른 의견 코멘트로 공유 부탁드립니다. 👨🏻💻
---
### 덧붙이는 말
- 미리캔버스에는 이미 구현된 템플릿들이 존재합니다. 해당 템플릿을 활용하시면 좋을 것 같습니다.
- 또한, color scheme은 알고 계시듯이 `2 color scheme` 같은 형식으로 검색하면 많은 자료가 있습니다. 저의 경우 [designwizard](https://www.designwizard.com/blog/design-trends/colour-combination)에서 참조했습니다. | 1.0 | 기존에 있던 이슈는 복사되지 않네요. - ## 이런
@waltererz 님. 현재 백엔드 팀이 구성되어 있습니다. 프론트엔드도 이처럼 구성해주시기 부탁드립니다.
포함되는 인원은 저와 @SUMIN-WEE 님 그리고 @waltererz 님이 포함되면 될 것 같습니다.
백엔드와 이 `organization`의 경우 제가 썸네일을 만들었는데요. 관심 있으시다면, [미리캔버스](https://www.miricanvas.com/)에서 만들어보시는 것도 좋겠습니다. 혹은 만들면 좋겠지만, 어려우시다면 제가 원하시는 디자인으로 만들겠습니다.
@SUMIN-WEE 님. 기획팀을 별도로 마련하지 않고 있는데요. 기획을 진행하실 때 필요한 자료를 정리해주셨으면 합니다.
제가 구체적으로 오더를 드리진 않은 상황이라 명료하지 않을 것 같은데요. 우선 아래 리스트를 참고해주세요.
- 랜딩 페이지 구성. 어플리케이션 소개 등
- 다른 이슈에서 언급한 로그인 기능 정리.
- `@.@` 제가 까먹고 있는 무언가..
다른 의견 코멘트로 공유 부탁드립니다. 👨🏻💻
---
### 덧붙이는 말
- 미리캔버스에는 이미 구현된 템플릿들이 존재합니다. 해당 템플릿을 활용하시면 좋을 것 같습니다.
- 또한, color scheme은 알고 계시듯이 `2 color scheme` 같은 형식으로 검색하면 많은 자료가 있습니다. 저의 경우 [designwizard](https://www.designwizard.com/blog/design-trends/colour-combination)에서 참조했습니다. | non_priority | 기존에 있던 이슈는 복사되지 않네요 이런 waltererz 님 현재 백엔드 팀이 구성되어 있습니다 프론트엔드도 이처럼 구성해주시기 부탁드립니다 포함되는 인원은 저와 sumin wee 님 그리고 waltererz 님이 포함되면 될 것 같습니다 백엔드와 이 organization 의 경우 제가 썸네일을 만들었는데요 관심 있으시다면 만들어보시는 것도 좋겠습니다 혹은 만들면 좋겠지만 어려우시다면 제가 원하시는 디자인으로 만들겠습니다 sumin wee 님 기획팀을 별도로 마련하지 않고 있는데요 기획을 진행하실 때 필요한 자료를 정리해주셨으면 합니다 제가 구체적으로 오더를 드리진 않은 상황이라 명료하지 않을 것 같은데요 우선 아래 리스트를 참고해주세요 랜딩 페이지 구성 어플리케이션 소개 등 다른 이슈에서 언급한 로그인 기능 정리 제가 까먹고 있는 무언가 다른 의견 코멘트로 공유 부탁드립니다 👨🏻💻 덧붙이는 말 미리캔버스에는 이미 구현된 템플릿들이 존재합니다 해당 템플릿을 활용하시면 좋을 것 같습니다 또한 color scheme은 알고 계시듯이 color scheme 같은 형식으로 검색하면 많은 자료가 있습니다 저의 경우 참조했습니다 | 0 |
9,012 | 8,499,301,677 | IssuesEvent | 2018-10-29 16:49:37 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | azurerm_storage_blob dynamic source content | duplicate enhancement service/storage | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Currently the azurerm_storage_blob resource only allows for a source file or source uri. This is fine for an initial deployment, but TF doesn't detect if the source file changes. What we'd like to do is update the source file and have TF update the target Blob.
If there is appetite for this functionality i'm happy to try and give building it a go, but don't want to waste energy. Alternatively if there is a solution that exists today that i'm just not aware of then i'd love to be pointed in that direction.
### New or Affected Resource(s)
* azurerm_storage_blob
### Potential Terraform Configuration
```hcl
data "template_file" "test_file" {
template = "${file("files/testFile.txt")}"
}
resource "azurerm_storage_blob" "testsb" {
name = "testFile.txt"
resource_group_name = "${azurerm_resource_group.test.name}"
storage_account_name = "${azurerm_storage_account.test.name}"
storage_container_name = "${azurerm_storage_container.test.name}"
source_data = "${data.template_file.test_file.rendered}"
type = "blob"
size = 5120
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
--->
* #0000
1,053 | 3,024,877,729 | IssuesEvent | 2015-08-03 01:48:08 | catapult-project/catapult | https://api.github.com/repos/catapult-project/catapult | closed | ./run_tests, run_d8_tests, run_py_tests refactoring | Infrastructure | <a href="https://github.com/natduca"><img src="https://avatars.githubusercontent.com/u/412396?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [natduca](https://github.com/natduca)**
_Friday Jul 03, 2015 at 10:17 GMT_
_Originally opened as https://github.com/google/trace-viewer/issues/1083_
----
Right now, we have:
- ./run_tests
- run_d8_tests
- run_py_tests
Let's have run_tests be a simple script that calls run_py_tests and run_d8_tests, where we take the existing run_tests and rename it to run_py_tests.
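The proposed wrapper is small enough to sketch. Here it is in Python, assuming the script names above; the `runner` parameter is an addition of mine so the dispatch logic can be exercised without the real scripts:

```python
#!/usr/bin/env python
import subprocess
import sys

def run_all(scripts=("./run_py_tests", "./run_d8_tests"), runner=None):
    """Invoke each test script in order; return the first failing exit code."""
    runner = runner or (lambda cmd: subprocess.call(cmd))
    for script in scripts:
        code = runner([script])
        if code != 0:
            return code  # fail fast and propagate the script's exit code
    return 0

if __name__ == "__main__":
    sys.exit(run_all())
```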
31,189 | 4,697,480,901 | IssuesEvent | 2016-10-12 09:30:33 | redmatrix/hubzilla | https://api.github.com/repos/redmatrix/hubzilla | closed | /util/config not working | retest please | ```diff
diff --git a/util/config b/util/config
index 38d2fed..8da8d92 100755
--- a/util/config
+++ b/util/config
@@ -71,7 +71,7 @@ if($argc == 2) {
}
if($argc == 1) {
- $r = q("select * from config where 1");
+ $r = q("select * from config");
if($r) {
foreach($r as $rr) {
echo "config[{$rr['cat']}][{$rr['k']}] = " . printable_config($rr['v']) . "\n";
```
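A plausible motivation for dropping the redundant `WHERE 1` (an assumption; the issue gives no rationale) is portability: PostgreSQL, for one, rejects a bare integer where a boolean condition is expected, while on back-ends that do accept it the clause changes nothing. A quick SQLite check that the two forms return the same rows where both are legal:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (cat TEXT, k TEXT, v TEXT)")
conn.executemany(
    "INSERT INTO config VALUES (?, ?, ?)",
    [("system", "baseurl", "https://example.org"),  # sample rows, invented
     ("system", "sitename", "demo hub")],
)

# SQLite (like MySQL) treats the integer 1 as "true", so both queries
# return every row; the patched form is simply the portable one.
with_clause = conn.execute("SELECT * FROM config WHERE 1").fetchall()
without_clause = conn.execute("SELECT * FROM config").fetchall()
assert with_clause == without_clause
```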
140,715 | 12,946,265,543 | IssuesEvent | 2020-07-18 18:21:42 | v8-riscv/v8 | https://api.github.com/repos/v8-riscv/v8 | opened | Add instructions on cross-compiled build to the wiki | documentation | Please add the build instructions [here](https://github.com/v8-riscv/v8/wiki/Cross-compiled-Build).
Since this is work in progress, please also update the latest testing status for cross-compiled build on https://github.com/v8-riscv/v8/wiki/Testing-Status
83,398 | 10,326,626,163 | IssuesEvent | 2019-09-02 03:04:09 | RebecaM94/Proyecto_Integrador | https://api.github.com/repos/RebecaM94/Proyecto_Integrador | opened | Estimación de recursos | documentation | Following the software development plan format, list all resource-availability estimates, along with the assumptions being made.
32,443 | 7,531,164,927 | IssuesEvent | 2018-04-15 01:37:42 | GMLC-TDC/HELICS-src | https://api.github.com/repos/GMLC-TDC/HELICS-src | opened | Used named tests instead of numbers in travis and appveyor | Code Improvement testing | We are getting enough tests now that it is probably advisable to use names in the ctest calls instead of numbers. The numbers are likely to be inconsistent and increasingly difficult to maintain so we should move to named tests for less brittleness in the CI execution.
This will involve a few minor changes to the Travis and AppVeyor YML files and probably some renaming of the actual tests.
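To make the contrast concrete, here is a throwaway Python helper that builds the two styles of ctest invocation; the test names are invented, not actual HELICS test names:

```python
def ctest_by_index(indices):
    """Old style: select tests by position, which shifts as tests are added."""
    start, end = min(indices), max(indices)
    return ["ctest", "-I", "{0},{1}".format(start, end)]

def ctest_by_name(names):
    """Proposed style: select tests with an anchored name regex via -R,
    which stays stable when the suite is reordered or extended."""
    pattern = "|".join(names)
    return ["ctest", "-R", "^({0})$".format(pattern)]
```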
351,472 | 32,001,987,954 | IssuesEvent | 2023-09-21 12:50:35 | UA-1023-TAQC/SpaceToStudyTA | https://api.github.com/repos/UA-1023-TAQC/SpaceToStudyTA | closed | [Guest's home page] Check the welcoming block UI Test 7 #1066 | issue Guest test case | https://github.com/ita-social-projects/SpaceToStudy-Client/issues/1066#issue-1871137133
# Verify the welcoming block UI Test 7 #1066
### Priority
Medium
## Description
Verify UI of general information about Space2Study platform
### Precondition
The site is opened.
The user is not logged in.
## Test Steps
| Step No. | Step description | Input data | Expected result |
|-------------|:-------------|:-----------|:-----|
| 1. | The Home page is opened | | The list with four phrases from the right of the static world map, the static world map image below the “Get started for free” button are displayed |
| 2. | Resize window screen to the mobile size. | | All UI controls are visible on the screen |
| 3. | Resize window screen to the tablet size. | | All UI controls are visible on the screen |
34,903 | 9,488,713,244 | IssuesEvent | 2019-04-22 20:19:59 | melt-umn/silver | https://api.github.com/repos/melt-umn/silver | opened | Changing a production into a function causes a runtime cast error without a clean build | BuildBug bug | This compiles successfully but causes an error to be raised of the form
"Pwhatever cannot be cast to Nwhatever".
I am guessing this is a symptom of a known issue, but wanted to make a note just the same.
2,274 | 2,673,789,986 | IssuesEvent | 2015-03-24 21:12:50 | sul-dlss/spotlight | https://api.github.com/repos/sul-dlss/spotlight | closed | Reorganize Github wiki pages | Documentation in progress | Based on new content we're adding, we need some reorganization of the wiki page structure so the documentation structure makes sense to people. Will add a new sidebar and move some content around as part of this.
(Image upload just to reference URL in wiki page)


354,155 | 25,152,815,159 | IssuesEvent | 2022-11-10 11:17:34 | astarte-platform/astarte | https://api.github.com/repos/astarte-platform/astarte | opened | Documentation: use consistent Astarte URL notation | good first issue documentation minor | The Astarte documentation currently uses two notations for the base Astarte API URL:
- `<astarte base API URL>` (e.g. [here](https://docs.astarte-platform.org/latest/030-manage_interfaces.html))
- `api.<your astarte domain>` (e.g. [here](https://docs.astarte-platform.org/latest/050-query_device.html))
Commit to a single, consistent notation across all documentation.
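Whichever notation the docs standardize on, the two forms describe the same thing, since the base API URL is conventionally the `api.` host under the Astarte domain. A throwaway helper makes that concrete (the service/realm path layout reflects common Astarte deployments, but treat it as an assumption here):

```python
def astarte_base_api_url(astarte_domain):
    """Derive `<astarte base API URL>` from `<your astarte domain>`."""
    return "https://api.{0}".format(astarte_domain)

def realm_endpoint(astarte_domain, service, realm):
    # e.g. the appengine or realmmanagement API root for one realm
    return "{0}/{1}/v1/{2}".format(astarte_base_api_url(astarte_domain),
                                   service, realm)
```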
156,766 | 24,625,627,941 | IssuesEvent | 2022-10-16 13:34:46 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | ModelBuilder: KeyDiscoveryConvention should throw for multiple matching properties | closed-by-design | ```C#
public class Parent
{
public int ParentId { get; set; }
public string Id { get; set; }
}
```
Generates following model
```
Model:
EntityType: Parent
Properties:
Id (string) Required PK AfterSave:Throw ValueGenerated.OnAdd 0 0 0 -1 0
Annotations:
Relational:TypeMapping: Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerStringTypeMapping
ParentId (int) Required 1 1 -1 -1 -1
Annotations:
Relational:TypeMapping: Microsoft.EntityFrameworkCore.Storage.IntTypeMapping
Keys:
Id PK
Annotations:
ConstructorBinding: Microsoft.EntityFrameworkCore.Metadata.Internal.DirectConstructorBinding
Relational:TableName: SetC
Annotations:
ProductVersion: 2.2.0-preview3-35433
Relational:MaxIdentifierLength: 128
SqlServer:ValueGenerationStrategy: IdentityColumn
```
We preferred `Id` over `{ClassName}Id`.
We should not discover PK by convention and let model validator throw.
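The ambiguity is easy to model outside EF Core. A rough Python re-creation of the convention's choice (not EF Core's actual code): collect every property named `Id` or `<ClassName>Id`, and, as the issue proposes, refuse to pick silently when more than one matches:

```python
def discover_key(entity_name, property_names):
    """Return the single conventional key property, or raise if ambiguous.

    Mirrors the report: for Parent {ParentId, Id} both names match the
    convention, so instead of silently preferring "Id" this version
    raises, leaving the choice to explicit configuration or validation.
    """
    candidates = [p for p in property_names
                  if p.lower() in ("id", (entity_name + "id").lower())]
    if len(candidates) > 1:
        raise ValueError(
            "Ambiguous key candidates on {0}: {1}; configure the key explicitly"
            .format(entity_name, ", ".join(candidates)))
    return candidates[0] if candidates else None
```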
82,706 | 16,017,241,595 | IssuesEvent | 2021-04-20 17:34:09 | microsoft/vscode-jupyter | https://api.github.com/repos/microsoft/vscode-jupyter | closed | Version 2021.6.755784270 does not install on latest VSC release 1.55.1 | bug info-needed upstream-vscode | Version 2021.6.755784270 does not install on latest VSC release 1.55.1. Currently version 2021.6.755784270 is pushed to "https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter" as the latest stable release. This renders other extensions like Pylance unusable for VSC installations that are offline (without access to preview versions). The latest working version for the stable release of VSC 1.55.1 is 2021.5.745244803. Why is 2021.6.755784270 pushed as the latest supported version when it breaks on the latest stable version of VSC?
I would have expected 2021.5.745244803 to have been the latest on "https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter" and the 2021.6.755784270 (which requires a non-stable insider VSC) to remain in a dev channel until ready for stable/production release of VSC to match it.
56,429 | 15,086,740,381 | IssuesEvent | 2021-02-05 20:51:17 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | closed | El complemento de pago, marca error al momento de intentar timbrarlo | bug defect | For the client IUAA871108GM8, an error was raised when issuing the payment complement.
Finkok returned the error:
738 : The schema http://www.sat.gob.mx/Pagos is not defined
I sent you an email with the XML that was generated and shown in catalina.out.
I also sent an XML that I did stamp successfully for TIA. Both are payments, but the TIA one seems to have a different structure.
Take a look.
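Error 738 reads like the document references the Pagos complement without its namespace actually being bound anywhere (my interpretation; the PAC response doesn't say more). A small Python triage check for that condition, useful before resubmitting, with invented XML fragments far smaller than a real CFDI:

```python
import xml.etree.ElementTree as ET

PAGOS_NS = "http://www.sat.gob.mx/Pagos"

def declares_pagos(xml_text):
    """True if any element in the document lives in the Pagos namespace."""
    root = ET.fromstring(xml_text)
    return any(el.tag.startswith("{" + PAGOS_NS + "}") for el in root.iter())

good = """<cfdi:Comprobante xmlns:cfdi="http://www.sat.gob.mx/cfd/3"
                   xmlns:pago10="http://www.sat.gob.mx/Pagos">
  <cfdi:Complemento><pago10:Pagos Version="1.0"/></cfdi:Complemento>
</cfdi:Comprobante>"""

bad = """<cfdi:Comprobante xmlns:cfdi="http://www.sat.gob.mx/cfd/3">
  <cfdi:Complemento/>
</cfdi:Comprobante>"""
```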
251,619 | 27,191,024,655 | IssuesEvent | 2023-02-19 20:00:22 | WFS-Mend/vtrade-serverless | https://api.github.com/repos/WFS-Mend/vtrade-serverless | opened | validator-10.2.0.tgz: 1 vulnerabilities (highest severity is: 7.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>validator-10.2.0.tgz</b></p></summary>
<p>String validation and sanitization</p>
<p>Library home page: <a href="https://registry.npmjs.org/validator/-/validator-10.2.0.tgz">https://registry.npmjs.org/validator/-/validator-10.2.0.tgz</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/WFS-Mend/vtrade-serverless/commit/c52b98a9d17fb727f467104ad99db0dca5b6e0f9">c52b98a9d17fb727f467104ad99db0dca5b6e0f9</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (validator version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-3765](https://www.mend.io/vulnerability-database/CVE-2021-3765) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | validator-10.2.0.tgz | Direct | 13.7.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3765</summary>
### Vulnerable Library - <b>validator-10.2.0.tgz</b></p>
<p>String validation and sanitization</p>
<p>Library home page: <a href="https://registry.npmjs.org/validator/-/validator-10.2.0.tgz">https://registry.npmjs.org/validator/-/validator-10.2.0.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **validator-10.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/WFS-Mend/vtrade-serverless/commit/c52b98a9d17fb727f467104ad99db0dca5b6e0f9">c52b98a9d17fb727f467104ad99db0dca5b6e0f9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
validator.js is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-11-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3765>CVE-2021-3765</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-qgmg-gppg-76g5">https://github.com/advisories/GHSA-qgmg-gppg-76g5</a></p>
<p>Release Date: 2021-11-02</p>
<p>Fix Resolution: 13.7.0</p>
</p>
<p></p>
</details>
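The actionable row above is "Fixed in (validator version): 13.7.0", which in CI reduces to a version comparison. A dependency-free sketch (where the installed version string comes from is up to the pipeline; pre-release tags are deliberately ignored):

```python
def parse_semver(version):
    """'13.7.0' -> (13, 7, 0); any pre-release suffix is dropped for brevity."""
    return tuple(int(part.split("-")[0]) for part in version.split(".")[:3])

def is_vulnerable(installed, fixed_in="13.7.0"):
    """True when the installed validator predates the CVE-2021-3765 fix."""
    return parse_semver(installed) < parse_semver(fixed_in)
```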
441,763 | 30,798,495,423 | IssuesEvent | 2023-07-31 22:08:52 | earthcube/geocodes_documentation | https://api.github.com/repos/earthcube/geocodes_documentation | closed | update docs https://earthcube.github.io/ landing page, where geocodes is running | documentation good first issue | This needs to point at the
https://earthcube.github.io/geocodes_documentation/
earthcube.org
Geocodes stack information in the geocodes docs needs to be updated.
111,932 | 17,049,554,928 | IssuesEvent | 2021-07-06 07:12:57 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution] The isolation is logged as failed even though it is successful | Team: SecuritySolution Team:Onboarding and Lifecycle Mgt bug v7.14.0 | **Description:**
The isolation is logged as failed even though it is successful
**Build Details:**
```
VERSION: 7.14.0 BC1
BUILD: 42292
COMMIT: 071a74e02f82b79a4a10026b5c9e02d593112fd4
ARTIFACT: https://staging.elastic.co/7.14.0-8eba2f5f/summary-7.14.0.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in.
2. Endpoint should be deployed
**Steps to Reproduce:**
1. Navigate to the Endpoint Tab
2. Isolate the host
3. After some time, release the host
4. Observe that the activity logs show the isolation status as failure
**Impacted Test case:**
N/A
**Actual Result:**
The isolation is logged as failed even though it is successful
**Expected Result:**
The isolation is logged as successful if it is successful
**What's working:**
N/A
**What's not working:**
N/A
**Screen Recording:**



**Logs:**
N/A | True | [Security Solution] The isolation is logged as failed even though it is successful - **Description:**
The isolation is logged as failed even though it is successful
**Build Details:**
```
VERSION: 7.14.0 BC1
BUILD: 42292
COMMIT: 071a74e02f82b79a4a10026b5c9e02d593112fd4
ARTIFACT: https://staging.elastic.co/7.14.0-8eba2f5f/summary-7.14.0.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in.
2. Endpoint should be deployed
**Steps to Reproduce:**
1. Navigate to the Endpoint Tab
2. Isolate the host
3. After some time, release the host
4. Observe that the activity logs show the isolation status as failure
**Impacted Test case:**
N/A
**Actual Result:**
The isolation is logged as failed even though it is successful
**Expected Result:**
The isolation is logged as successful if it is successful
**What's working:**
N/A
**What's not working:**
N/A
**Screen Recording:**



**Logs:**
N/A | non_priority | the isolation is logged as failed even though it is successful description the isolation is logged as failed even though it is successful build details version build commit artifact browser details all preconditions kibana user should be logged in endpoint should be deployed steps to reproduce navigate to the endpoint tab isolate the host after some time release the host observe that the activity logs show the isolation status as failure impacted test case n a actual result the isolation is logged as failed even though it is successful expected result the isolation is logged as successful if it is successful what s working n a what s not working n a screen recording logs n a | 0 |
450,684 | 31,986,628,492 | IssuesEvent | 2023-09-21 00:14:36 | onflow/docs | https://api.github.com/repos/onflow/docs | closed | [BUG] Staking Requirements Broken Links | documentation | ### Need to redirect these which are all not found currently
Following are not found and being linked to from
- https://flow.com/flow-tokenomics/technical-overview#staking-requirements
- https://flow.com/flow-tokenomics/technical-overview#node-economics
- [x] https://flow.com/flow-tokenomics/technical-overview#staking-requirements
- [x] https://developers.flow.com/references/run-and-secure/staking/epoch-terminology
- [x] https://developers.flow.com/concepts/start-here/storage
- [x] https://developers.flow.com/references/run-and-secure/staking/schedule
- [x] https://developers.flow.com/references/run-and-secure/staking/stake-slashing
- [x] https://developers.flow.com/concepts/start-here/storage
- [x] https://developers.flow.com/references/run-and-secure/nodes/node-operation
| 1.0 | [BUG] Staking Requirements Broken Links - ### Need to redirect these which are all not found currently
Following are not found and being linked to from
- https://flow.com/flow-tokenomics/technical-overview#staking-requirements
- https://flow.com/flow-tokenomics/technical-overview#node-economics
- [x] https://flow.com/flow-tokenomics/technical-overview#staking-requirements
- [x] https://developers.flow.com/references/run-and-secure/staking/epoch-terminology
- [x] https://developers.flow.com/concepts/start-here/storage
- [x] https://developers.flow.com/references/run-and-secure/staking/schedule
- [x] https://developers.flow.com/references/run-and-secure/staking/stake-slashing
- [x] https://developers.flow.com/concepts/start-here/storage
- [x] https://developers.flow.com/references/run-and-secure/nodes/node-operation
| non_priority | staking requirements broken links need to redirect these which are all not found currently following are not found and being linked to from | 0 |
58,841 | 14,493,084,405 | IssuesEvent | 2020-12-11 08:02:31 | ARM-software/armnn | https://api.github.com/repos/ARM-software/armnn | closed | armnn make Error while building for x86_64 | Build issue | I am facing below issue when i try to make. Please help.
ARMNN : branches/armnn_20_11
https://github.com/ARM-software/armnn.git
>armnn/build$ make
[ 1%] Built target pipeCommon
[ 3%] Built target armnnUtils
[ 3%] Built target armnnAclCommon
[ 4%] Built target armnnBackendsCommon
[ 4%] Built target armnnNeonBackend
[ 5%] Built target armnnRefBackend
[ 14%] Built target armnnRefBackendWorkloads
[ 15%] Built target armnnClBackend
[ 21%] Built target armnnClBackendWorkloads
[ 21%] Built target fmt
[ 35%] Built target armnn
[ 36%] Built target timelineDecoderJson
[ 37%] Built target gatordMockService
[ 38%] Building CXX object CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o
armnn/src/armnnTfParser/TfParser.cpp: In member function ‘virtual armnn::INetworkPtr armnnTfParser::TfParser::CreateNetworkFromBinaryFile(const char*, const std::map<std::__cxx11::basic_string<char>, armnn::TensorShape>&, const std::vector<std::__cxx11::basic_string<char> >&)’:
armnn/src/armnnTfParser/TfParser.cpp:3567:43: error: no matching function for call to ‘google::protobuf::io::CodedInputStream::SetTotalBytesLimit(int)’
codedStream.SetTotalBytesLimit(INT_MAX);
^
In file included from tensorflow-protobuf/tensorflow/core/framework/graph.pb.h:22:0,
from armnn/src/armnnTfParser/TfParser.cpp:24:
google/x86_64_pb_install/include/google/protobuf/io/coded_stream.h:402:8: note: candidate: void google::protobuf::io::CodedInputStream::SetTotalBytesLimit(int, int)
void SetTotalBytesLimit(int total_bytes_limit, int warning_threshold);
^
google/x86_64_pb_install/include/google/protobuf/io/coded_stream.h:402:8: note: candidate expects 2 arguments, 1 provided
CMakeFiles/armnnTfParser.dir/build.make:62: recipe for target 'CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o' failed
make[2]: *** [CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o] Error 1
CMakeFiles/Makefile2:316: recipe for target 'CMakeFiles/armnnTfParser.dir/all' failed
make[1]: *** [CMakeFiles/armnnTfParser.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
| 1.0 | armnn make Error while building for x86_64 - I am facing below issue when i try to make. Please help.
ARMNN : branches/armnn_20_11
https://github.com/ARM-software/armnn.git
>armnn/build$ make
[ 1%] Built target pipeCommon
[ 3%] Built target armnnUtils
[ 3%] Built target armnnAclCommon
[ 4%] Built target armnnBackendsCommon
[ 4%] Built target armnnNeonBackend
[ 5%] Built target armnnRefBackend
[ 14%] Built target armnnRefBackendWorkloads
[ 15%] Built target armnnClBackend
[ 21%] Built target armnnClBackendWorkloads
[ 21%] Built target fmt
[ 35%] Built target armnn
[ 36%] Built target timelineDecoderJson
[ 37%] Built target gatordMockService
[ 38%] Building CXX object CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o
armnn/src/armnnTfParser/TfParser.cpp: In member function ‘virtual armnn::INetworkPtr armnnTfParser::TfParser::CreateNetworkFromBinaryFile(const char*, const std::map<std::__cxx11::basic_string<char>, armnn::TensorShape>&, const std::vector<std::__cxx11::basic_string<char> >&)’:
armnn/src/armnnTfParser/TfParser.cpp:3567:43: error: no matching function for call to ‘google::protobuf::io::CodedInputStream::SetTotalBytesLimit(int)’
codedStream.SetTotalBytesLimit(INT_MAX);
^
In file included from tensorflow-protobuf/tensorflow/core/framework/graph.pb.h:22:0,
from armnn/src/armnnTfParser/TfParser.cpp:24:
google/x86_64_pb_install/include/google/protobuf/io/coded_stream.h:402:8: note: candidate: void google::protobuf::io::CodedInputStream::SetTotalBytesLimit(int, int)
void SetTotalBytesLimit(int total_bytes_limit, int warning_threshold);
^
google/x86_64_pb_install/include/google/protobuf/io/coded_stream.h:402:8: note: candidate expects 2 arguments, 1 provided
CMakeFiles/armnnTfParser.dir/build.make:62: recipe for target 'CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o' failed
make[2]: *** [CMakeFiles/armnnTfParser.dir/src/armnnTfParser/TfParser.cpp.o] Error 1
CMakeFiles/Makefile2:316: recipe for target 'CMakeFiles/armnnTfParser.dir/all' failed
make[1]: *** [CMakeFiles/armnnTfParser.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
| non_priority | armnn make error while building for i am facing below issue when i try to make please help armnn branches armnn armnn build make built target pipecommon built target armnnutils built target armnnaclcommon built target armnnbackendscommon built target armnnneonbackend built target armnnrefbackend built target armnnrefbackendworkloads built target armnnclbackend built target armnnclbackendworkloads built target fmt built target armnn built target timelinedecoderjson built target gatordmockservice building cxx object cmakefiles armnntfparser dir src armnntfparser tfparser cpp o armnn src armnntfparser tfparser cpp in member function ‘virtual armnn inetworkptr armnntfparser tfparser createnetworkfrombinaryfile const char const std map armnn tensorshape const std vector ’ armnn src armnntfparser tfparser cpp error no matching function for call to ‘google protobuf io codedinputstream settotalbyteslimit int ’ codedstream settotalbyteslimit int max in file included from tensorflow protobuf tensorflow core framework graph pb h from armnn src armnntfparser tfparser cpp google pb install include google protobuf io coded stream h note candidate void google protobuf io codedinputstream settotalbyteslimit int int void settotalbyteslimit int total bytes limit int warning threshold google pb install include google protobuf io coded stream h note candidate expects arguments provided cmakefiles armnntfparser dir build make recipe for target cmakefiles armnntfparser dir src armnntfparser tfparser cpp o failed make error cmakefiles recipe for target cmakefiles armnntfparser dir all failed make error makefile recipe for target all failed make error | 0 |
330,979 | 28,499,157,195 | IssuesEvent | 2023-04-18 16:02:42 | claviska/doxicity | https://api.github.com/repos/claviska/doxicity | opened | Testing | help wanted testing | I'd love some help here. I haven't setup any testing whatsoever yet, but I really like [Web Test Runner](https://modern-web.dev/docs/test-runner/overview/) and will probably go with that when the time comes. I'm happy to help get things setup!
If you're interested in helping to get this setup, please comment below before submitting a PR so we can get on the same page.
| 1.0 | Testing - I'd love some help here. I haven't setup any testing whatsoever yet, but I really like [Web Test Runner](https://modern-web.dev/docs/test-runner/overview/) and will probably go with that when the time comes. I'm happy to help get things setup!
If you're interested in helping to get this setup, please comment below before submitting a PR so we can get on the same page.
| non_priority | testing i d love some help here i haven t setup any testing whatsoever yet but i really like and will probably go with that when the time comes i m happy to help get things setup if you re interested in helping to get this setup please comment below before submitting a pr so we can get on the same page | 0 |
192,239 | 14,612,154,211 | IssuesEvent | 2020-12-22 05:26:24 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | openebs/node-disk-manager: cmd/ndm_daemonset/probe/addhandler_test.go; 3 LoC | fresh test tiny |
Found a possible issue in [openebs/node-disk-manager](https://www.github.com/openebs/node-disk-manager) at [cmd/ndm_daemonset/probe/addhandler_test.go](https://github.com/openebs/node-disk-manager/blob/85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2/cmd/ndm_daemonset/probe/addhandler_test.go#L373-L375)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to bdAPI at line 374 may start a goroutine
[Click here to see the code in its original context.](https://github.com/openebs/node-disk-manager/blob/85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2/cmd/ndm_daemonset/probe/addhandler_test.go#L373-L375)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, bdAPI := range tt.bdAPIList.Items {
cl.Create(context.TODO(), &bdAPI)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2
| 1.0 | openebs/node-disk-manager: cmd/ndm_daemonset/probe/addhandler_test.go; 3 LoC -
Found a possible issue in [openebs/node-disk-manager](https://www.github.com/openebs/node-disk-manager) at [cmd/ndm_daemonset/probe/addhandler_test.go](https://github.com/openebs/node-disk-manager/blob/85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2/cmd/ndm_daemonset/probe/addhandler_test.go#L373-L375)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to bdAPI at line 374 may start a goroutine
[Click here to see the code in its original context.](https://github.com/openebs/node-disk-manager/blob/85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2/cmd/ndm_daemonset/probe/addhandler_test.go#L373-L375)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, bdAPI := range tt.bdAPIList.Items {
cl.Create(context.TODO(), &bdAPI)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 85c8faa7a27fef7a86d7053c4c7b0ea12c2110b2
| non_priority | openebs node disk manager cmd ndm daemonset probe addhandler test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to bdapi at line may start a goroutine click here to show the line s of go which triggered the analyzer go for bdapi range tt bdapilist items cl create context todo bdapi leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
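The analyzer warning in the record above concerns Go's pre-1.22 range semantics: `bdAPI` is a single variable reused on every iteration, so if `cl.Create` hands `&bdAPI` to a goroutine or otherwise retains it, every saved pointer can end up aliasing the final element. The classic mitigation is a per-iteration copy. A self-contained sketch — the `item` type and names are hypothetical stand-ins for `tt.bdAPIList.Items`:

```go
package main

import "fmt"

// item stands in for an element of tt.bdAPIList.Items (hypothetical name).
type item struct{ Name string }

// collect retains a pointer to each element, the way cl.Create could if it
// passed &bdAPI to a goroutine. The per-iteration copy (`it := it`) is the
// usual mitigation; without it, Go versions before 1.22 would leave every
// retained pointer aliasing the last element of the slice.
func collect(items []item) []*item {
	var out []*item
	for _, it := range items {
		it := it // copy the loop variable so each &it is a distinct allocation
		out = append(out, &it)
	}
	return out
}

func main() {
	ptrs := collect([]item{{"bd-0"}, {"bd-1"}, {"bd-2"}})
	for _, p := range ptrs {
		fmt.Println(p.Name) // prints bd-0, bd-1, bd-2 — three distinct pointers
	}
}
```

From Go 1.22 on, each iteration gets its own `bdAPI`, which is why findings like this are usually classified as mitigated on newer toolchains.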
180,504 | 30,511,337,101 | IssuesEvent | 2023-07-18 21:06:47 | department-of-veterans-affairs/vets-design-system-documentation | https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation | opened | Normalizing Header font with USWDS settings - Design | component-update vsp-design-system-team | ## Description
Compare the VA Bitter font to the USWDS Merriweather font to normalize the font using the USWDS normalization settings. Present results and provide recommendations for creating type settings for size and line height.
If this is a pattern or component already in existence, conduct a small-scale audit (3-5 examples) to ensure there aren't design issues that need to be addressed. Also, check the Design System Team backlog for outstanding design issues. If you find any, link to them in a comment on this ticket. Please address any outstanding issues with this design and link to this issue from the original issue. If not, indicate that in the original issue.
## Details
[Using UWDS font](https://designsystem.digital.gov/design-tokens/typesetting/overview/)
[Design check-in notes](https://vfs.atlassian.net/wiki/spaces/DST/pages/2699362869/2023-07-13+Meeting+notes)
## Tasks
- [ ] Conduct small audit if necessary (if component already exists and we are building a new version)
- [ ] Review DST backlog for outstanding design issues with this component, if necessary (if this component already exists and we are building a new version)
- [ ] Create designs for component
- [ ] Review designs with PO and/or DSC
- [ ] Review designs with an accessibility specialist
- [ ] Review designs with DST members (Carol can help schedule this)
- [ ] Address any comments from reviews, if necessary
- [ ] Comment on this ticket with any accessibility considerations engineers may need to know
- [ ] Comment on this ticket with content specifications (e.g. labels and error messages)
- [ ] Comment on this ticket with a link to the designs and post in DST Slack channel
## Acceptance Criteria
- [ ] Component design is complete and has been reviewed
- [ ] Accessibility considerations have been added to this ticket, if necessary
- [ ] Content specifications have been added to this ticket, if necessary
- [ ] Link to design has been added to this ticket and shared in Slack
| 1.0 | Normalizing Header font with USWDS settings - Design - ## Description
Compare the VA Bitter font to the USWDS Merriweather font to normalize the font using the USWDS normalization settings. Present results and provide recommendations for creating type settings for size and line height.
If this is a pattern or component already in existence, conduct a small-scale audit (3-5 examples) to ensure there aren't design issues that need to be addressed. Also, check the Design System Team backlog for outstanding design issues. If you find any, link to them in a comment on this ticket. Please address any outstanding issues with this design and link to this issue from the original issue. If not, indicate that in the original issue.
## Details
[Using UWDS font](https://designsystem.digital.gov/design-tokens/typesetting/overview/)
[Design check-in notes](https://vfs.atlassian.net/wiki/spaces/DST/pages/2699362869/2023-07-13+Meeting+notes)
## Tasks
- [ ] Conduct small audit if necessary (if component already exists and we are building a new version)
- [ ] Review DST backlog for outstanding design issues with this component, if necessary (if this component already exists and we are building a new version)
- [ ] Create designs for component
- [ ] Review designs with PO and/or DSC
- [ ] Review designs with an accessibility specialist
- [ ] Review designs with DST members (Carol can help schedule this)
- [ ] Address any comments from reviews, if necessary
- [ ] Comment on this ticket with any accessibility considerations engineers may need to know
- [ ] Comment on this ticket with content specifications (e.g. labels and error messages)
- [ ] Comment on this ticket with a link to the designs and post in DST Slack channel
## Acceptance Criteria
- [ ] Component design is complete and has been reviewed
- [ ] Accessibility considerations have been added to this ticket, if necessary
- [ ] Content specifications have been added to this ticket, if necessary
- [ ] Link to design has been added to this ticket and shared in Slack
| non_priority | normalizing header font with uswds settings design description compare the va bitter font to the uswds merriweather font to normalize the font using the uswds normalization settings present results and provide recommendations for creating type settings for size and line height if this is a pattern or component already in existence conduct a small scale audit examples to ensure there aren t design issues that need to be addressed also check the design system team backlog for outstanding design issues if you find any link to them in a comment on this ticket please address any outstanding issues with this design and link to this issue from the original issue if not indicate that in the original issue details tasks conduct small audit if necessary if component already exists and we are building a new version review dst backlog for outstanding design issues with this component if necessary if this component already exists and we are building a new version create designs for component review designs with po and or dsc review designs with an accessibility specialist review designs with dst members carol can help schedule this address any comments from reviews if necessary comment on this ticket with any accessibility considerations engineers may need to know comment on this ticket with content specifications e g labels and error messages comment on this ticket with a link to the designs and post in dst slack channel acceptance criteria component design is complete and has been reviewed accessibility considerations have been added to this ticket if necessary content specifications have been added to this ticket if necessary link to design has been added to this ticket and shared in slack | 0 |
56,675 | 14,078,479,959 | IssuesEvent | 2020-11-04 13:37:52 | themagicalmammal/android_kernel_samsung_s5neolte | https://api.github.com/repos/themagicalmammal/android_kernel_samsung_s5neolte | opened | CVE-2019-19965 (Medium) detected in linuxv4.5 | security vulnerability | ## CVE-2019-19965 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.5</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/kusumi/linux.git>https://github.com/kusumi/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_s5neolte/commit/f978d7dbb980bbe5267a625da958c4226e1a8ae0">f978d7dbb980bbe5267a625da958c4226e1a8ae0</a></p>
<p>Found in base branch: <b>cosmic-experimental-1.6</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel through 5.4.6, there is a NULL pointer dereference in drivers/scsi/libsas/sas_discover.c because of mishandling of port disconnection during discovery, related to a PHY down race condition, aka CID-f70267f379b5.
<p>Publish Date: 2019-12-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19965>CVE-2019-19965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19965">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19965</a></p>
<p>Release Date: 2019-12-25</p>
<p>Fix Resolution: v5.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19965 (Medium) detected in linuxv4.5 - ## CVE-2019-19965 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.5</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/kusumi/linux.git>https://github.com/kusumi/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_s5neolte/commit/f978d7dbb980bbe5267a625da958c4226e1a8ae0">f978d7dbb980bbe5267a625da958c4226e1a8ae0</a></p>
<p>Found in base branch: <b>cosmic-experimental-1.6</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>android_kernel_samsung_s5neolte/drivers/scsi/libsas/sas_discover.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel through 5.4.6, there is a NULL pointer dereference in drivers/scsi/libsas/sas_discover.c because of mishandling of port disconnection during discovery, related to a PHY down race condition, aka CID-f70267f379b5.
<p>Publish Date: 2019-12-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19965>CVE-2019-19965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19965">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19965</a></p>
<p>Release Date: 2019-12-25</p>
<p>Fix Resolution: v5.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch cosmic experimental vulnerable source files android kernel samsung drivers scsi libsas sas discover c android kernel samsung drivers scsi libsas sas discover c android kernel samsung drivers scsi libsas sas discover c vulnerability details in the linux kernel through there is a null pointer dereference in drivers scsi libsas sas discover c because of mishandling of port disconnection during discovery related to a phy down race condition aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
190,256 | 22,047,351,399 | IssuesEvent | 2022-05-30 04:20:15 | pazhanivel07/linux-4.19.72 | https://api.github.com/repos/pazhanivel07/linux-4.19.72 | closed | CVE-2019-19074 (High) detected in linuxlinux-4.19.83 - autoclosed | security vulnerability | ## CVE-2019-19074 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/ath/ath9k/wmi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the ath9k_wmi_cmd() function in drivers/net/wireless/ath/ath9k/wmi.c in the Linux kernel through 5.3.11 allows attackers to cause a denial of service (memory consumption), aka CID-728c1e2a05e4.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19074>CVE-2019-19074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19074">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19074</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19074 (High) detected in linuxlinux-4.19.83 - autoclosed - ## CVE-2019-19074 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/wireless/ath/ath9k/wmi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the ath9k_wmi_cmd() function in drivers/net/wireless/ath/ath9k/wmi.c in the Linux kernel through 5.3.11 allows attackers to cause a denial of service (memory consumption), aka CID-728c1e2a05e4.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19074>CVE-2019-19074</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19074">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19074</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.4-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files drivers net wireless ath wmi c vulnerability details a memory leak in the wmi cmd function in drivers net wireless ath wmi c in the linux kernel through allows attackers to cause a denial of service memory consumption aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
179,590 | 30,270,164,712 | IssuesEvent | 2023-07-07 14:45:59 | YU000jp/logseq-plugin-panel-coloring | https://api.github.com/repos/YU000jp/logseq-plugin-panel-coloring | closed | (Admonition Panel) redesign | type: enhancement type: design | https://github.com/henices/logseq-flow-nord#custom-bullets
```
 .ls-block[data-refs-self*='"INFO"'] > .flex.flex-row.pr-2 .bullet-container .bullet:before {
 content: "\edcd" !important;
 font-family: 'tabler-icons240';
 color: hsl(var(--cl-primary), 0.95);
 background-color: hsl(var(--cl-primary), 0.15);
 border-radius: 50%;
 }
``` | 1.0 | (Admonition Panel) redesign - https://github.com/henices/logseq-flow-nord#custom-bullets
```
.ls-block[data-refs-self*='"INFO"'] > .flex.flex-row.pr-2 .bullet-container .bullet:before {
content: "\edcd" !important;
font-family: 'tabler-icons240';
color: hsl(var(--cl-primary), 0.95);
background-color: hsl(var(--cl-primary), 0.15);
border-radius: 50%;
}
``` | non_priority | admonition panel redesign ls block flex flex row pr bullet container bullet before content edcd important font family tabler color hsl var cl primary background color hsl var cl primary border radius | 0 |
9,814 | 3,321,819,269 | IssuesEvent | 2015-11-09 11:06:30 | interactivethings/catalog | https://api.github.com/repos/interactivethings/catalog | opened | Specimen Naming | documentation | What we discussed:
- **UISpec**: Split in "Image" and "Video" specimens
- **Project**: "Iframe"? | 1.0 | Specimen Naming - What we discussed:
- **UISpec**: Split in "Image" and "Video" specimens
- **Project**: "Iframe"? | non_priority | specimen naming what we discussed uispec split in image and video specimens project iframe | 0 |
126,210 | 26,803,089,914 | IssuesEvent | 2023-02-01 16:21:08 | iree-org/iree | https://api.github.com/repos/iree-org/iree | closed | Add verifiers to NVGPU ops | help wanted codegen/nvvm | ### Request description
The ops [mma.sync](https://github.com/llvm/llvm-project/blob/40f35cef894a4f899d1a0a31dd9600b9ce5e769b/mlir/include/mlir/Dialect/NVGPU/NVGPU.td#L82) and [ldmatrix](https://github.com/llvm/llvm-project/blob/40f35cef894a4f899d1a0a31dd9600b9ce5e769b/mlir/include/mlir/Dialect/NVGPU/NVGPU.td#L54) don't have a verifier.
We should decide what makes sense to allow and add a verifier to make sure the type matches what the op should expect.
### What component(s) does this issue relate to?
MLIR
### Additional context
_No response_ | 1.0 | Add verifiers to NVGPU ops - ### Request description
The ops [mma.sync](https://github.com/llvm/llvm-project/blob/40f35cef894a4f899d1a0a31dd9600b9ce5e769b/mlir/include/mlir/Dialect/NVGPU/NVGPU.td#L82) and [ldmatrix](https://github.com/llvm/llvm-project/blob/40f35cef894a4f899d1a0a31dd9600b9ce5e769b/mlir/include/mlir/Dialect/NVGPU/NVGPU.td#L54) don't have a verifier.
We should decide what makes sense to allow and add a verifier to make sure the type matches what the op should expect.
### What component(s) does this issue relate to?
MLIR
### Additional context
_No response_ | non_priority | add verifiers to nvgpu ops request description the ops and don t have a verifier we should decide what makes sense to allow and add a verifier to make sure the type matches what the op should expect what component s does this issue relate to mlir additional context no response | 0 |
2,793 | 4,005,129,598 | IssuesEvent | 2016-05-12 10:09:14 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | Invalidate browser session token when maximum session lifetime is reached | 3 - To Review security | Once #24189 is merged, browser logins will create session tokens stored in the database. Currently users are [logged out if the maximum session lifetime is reached](https://github.com/owncloud/core/blob/14c34919774484d095d26ad2a7246fc897dc2d41/lib/base.php#L433-L439). At that point, we should also invalidate a session token if one exists. | True | Invalidate browser session token when maximum session lifetime is reached - Once #24189 is merged, browser logins will create session tokens stored in the database. Currently users are [logged out if the maximum session lifetime is reached](https://github.com/owncloud/core/blob/14c34919774484d095d26ad2a7246fc897dc2d41/lib/base.php#L433-L439). At that point, we should also invalidate a session token if one exists. | non_priority | invalidate browser session token when maximum session lifetime is reached once is merged browser logins will create session tokens stored in the database currently users are at that point we should also invalidate a session token if one exists | 0 |
15,927 | 20,144,845,078 | IssuesEvent | 2022-02-09 05:52:44 | CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT | https://api.github.com/repos/CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT | opened | Update VISA_Balance column with average values | preprocessing | Write suitable Python code in a Jupyter notebook to impute the 0 values in the VISA_Balance column with average values.

| 1.0 | Update VISA_Balance column with average values - Write suitable Python code in a Jupyter notebook to impute the 0 values in the VISA_Balance column with average values.

| non_priority | update visa balance column with average values write a suitable python code in a jupyter notebook to impute the visa balance column values and with avg values | 0 |
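The imputation requested in the record above can be sketched in a few lines of pandas. The data frame and its values here are invented for illustration; only the column name `VISA_Balance` comes from the issue:

```python
import pandas as pd

# Toy frame standing in for the project's data; the values are
# invented and only the column name comes from the issue.
df = pd.DataFrame({"VISA_Balance": [0.0, 120.0, 0.0, 300.0]})

# Average of the non-zero balances, so the 0 placeholders
# do not drag the mean down.
avg = df.loc[df["VISA_Balance"] != 0, "VISA_Balance"].mean()

# Impute: replace every 0 with that average.
df["VISA_Balance"] = df["VISA_Balance"].replace(0.0, avg)
print(df["VISA_Balance"].tolist())  # → [210.0, 120.0, 210.0, 300.0]
```

Computing the mean over only the non-zero rows before replacing is a deliberate choice; taking the mean of the full column would bias the imputed value toward zero.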
109,646 | 13,796,278,615 | IssuesEvent | 2020-10-09 19:32:43 | CohenLabPrinceton/pvp | https://api.github.com/repos/CohenLabPrinceton/pvp | closed | Optimizing PID control coefficients and performance | design | This is to track progress on the dinky, and find the optimal settings to control hardware. | 1.0 | Optimizing PID control coefficients and performance - This is to track progress on the dinky, and find the optimal settings to control hardware. | non_priority | optimizing pid control coefficients and performance this is to track progress on the dinky and find the optimal settings to control hardware | 0 |
86,824 | 10,518,792,237 | IssuesEvent | 2019-09-29 13:32:17 | Germanaz0/rinho-ski | https://api.github.com/repos/Germanaz0/rinho-ski | closed | [Docs] Update docs | documentation | * Update this README file with your comments about your work; what was done, what wasn't, features added & known bugs.
* Provide a way for us to view the completed code and run it, either locally or through a cloud provider
| 1.0 | [Docs] Update docs - * Update this README file with your comments about your work; what was done, what wasn't, features added & known bugs.
* Provide a way for us to view the completed code and run it, either locally or through a cloud provider
| non_priority | update docs update this readme file with your comments about your work what was done what wasn t features added known bugs provide a way for us to view the completed code and run it either locally or through a cloud provider | 0 |
4,955 | 3,898,573,947 | IssuesEvent | 2016-04-17 06:02:00 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 14413481: CAEmitterCell birthrate inaccurate in iOS 7 | classification:ui/usability reproducible:yes status:open | #### Description
Summary:
Far too many emitter cells are being emitted when the emitterMode is set to kCAEmitterLayerOutline in iOS 7.
Steps to Reproduce:
Create an CAEmitterLayer / CAEmitterCell with the emitterMode set to kCAEmitterLayerOutline and note the difference in the number of particles emitted on iOS 7 versus iOS 6.
Expected Results:
Expect the number of particles emitted to be the same or nearly the same on iOS 7 as on iOS 6.
Actual Results:
Lots more particles are emitted on iOS 7. Really a lot.
Regression:
Problem does not occur on iOS 6.
Notes:
Note that I am also using an emitter layer elsewhere in the same application with the emitterMode left at the default value and the birthrate in that case seems correct.
See also attached screenshots comparing explosions on iOS 6 versus iOS 7.
-
Product Version: 7.0 (11A4414e)
Created: 2013-07-11T12:32:39.742902
Originated: 2013-07-11T00:00:00
Open Radar Link: http://www.openradar.me/14413481 | True | 14413481: CAEmitterCell birthrate inaccurate in iOS 7 - #### Description
Summary:
Far too many emitter cells are being emitted when the emitterMode is set to kCAEmitterLayerOutline in iOS 7.
Steps to Reproduce:
Create an CAEmitterLayer / CAEmitterCell with the emitterMode set to kCAEmitterLayerOutline and note the difference in the number of particles emitted on iOS 7 versus iOS 6.
Expected Results:
Expect the number of particles emitted to be the same or nearly the same on iOS 7 as on iOS 6.
Actual Results:
Lots more particles are emitted on iOS 7. Really a lot.
Regression:
Problem does not occur on iOS 6.
Notes:
Note that I am also using an emitter layer elsewhere in the same application with the emitterMode left at the default value and the birthrate in that case seems correct.
See also attached screenshots comparing explosions on iOS 6 versus iOS 7.
-
Product Version: 7.0 (11A4414e)
Created: 2013-07-11T12:32:39.742902
Originated: 2013-07-11T00:00:00
Open Radar Link: http://www.openradar.me/14413481 | non_priority | caemittercell birthrate inaccurate in ios description summary far too many emitter cells are being emitted when the emittermode is set to kcaemitterlayeroutline in ios steps to reproduce create an caemitterlayer caemittercell with the emittermode set to kcaemitterlayeroutline and note the difference in the number of particles emitted on ios versus ios expected results expect the number of particles emitted to be the same or nearly the same on ios as on ios actual results lots more particles are emitted on ios really a lot regression problem does not occur on ios notes note that i am also using an emitter layer elsewhere in the same application with the emittermode left at the default value and the birthrate in that case seems correct see also attached screenshots comparing explosions on ios versus ios product version created originated open radar link | 0 |
180,042 | 14,737,324,819 | IssuesEvent | 2021-01-07 01:31:02 | microsoft/pxt-arcade | https://api.github.com/repos/microsoft/pxt-arcade | closed | The program can't run in "on created sprite of kind" s help | bug documentation next-release p2 sidedocs | **Describe the bug**
The program can't run in "on created sprite of kind" s help
**Steps to reproduce the behavior**
1. Navigate to https://arcade.makecode.com/beta#
2. Drag **on created sprite of kind** in **Sprites**
3. Right-Click -----> help
4. Run the program in the help doc
**Actual behavior**
The program can't run in "**on created sprite of kind**" s help

**Additional context**
1.OS: Windows(rs6)
2.arcade version: 1.3.16
3.Microsoft MakeCode version: 6.6.16 | 1.0 | The program can't run in "on created sprite of kind" s help - **Describe the bug**
The program can't run in "on created sprite of kind" s help
**Steps to reproduce the behavior**
1. Navigate to https://arcade.makecode.com/beta#
2. Drag **on created sprite of kind** in **Sprites**
3. Right-Click -----> help
4. Run the program in the help doc
**Actual behavior**
The program can't run in "**on created sprite of kind**" s help

**Additional context**
1.OS: Windows(rs6)
2.arcade version: 1.3.16
3.Microsoft MakeCode version: 6.6.16 | non_priority | the program can t run in on created sprite of kind s help describe the bug the program can t run in on created sprite of kind s help steps to reproduce the behavior navigate to drag on created sprite of kind in sprites right click help run the program in the help doc actual behavior the program can t run in on created sprite of kind s help additional context os windows arcade version microsoft makecode version | 0 |
166,394 | 12,952,783,372 | IssuesEvent | 2020-07-19 21:54:32 | ether/etherpad-lite | https://api.github.com/repos/ether/etherpad-lite | closed | Security regression on password (PASSWORD_HIDDEN) from commit 5c5b99fc9ad33054fb0291b92084e00ae1e634ef | Admin Serious Bug Waiting on Testing security | On Etherpad 1.8.4 with Docker + postgresql, admin password can be used only once and is then replaced by "PASSWORD_HIDDEN" (on login form, password that is indeed checked, not only in cache).
This seems to be a regression from 5c5b99fc9ad33054fb0291b92084e00ae1e634ef => https://github.com/anttiviljami/etherpad-lite/commit/5c5b99fc9ad33054fb0291b92084e00ae1e634ef
See : #3421 | 1.0 | Security regression on password (PASSWORD_HIDDEN) from commit 5c5b99fc9ad33054fb0291b92084e00ae1e634ef - On Etherpad 1.8.4 with Docker + postgresql, admin password can be used only once and is then replaced by "PASSWORD_HIDDEN" (on login form, password that is indeed checked, not only in cache).
This seems to be a regression from 5c5b99fc9ad33054fb0291b92084e00ae1e634ef => https://github.com/anttiviljami/etherpad-lite/commit/5c5b99fc9ad33054fb0291b92084e00ae1e634ef
See : #3421 | non_priority | security regression on password password hidden from commit on etherpad with docker postgresql admin password can be used only once and is then replaced by password hidden on login form password that is indeed checked not only in cache this seems to be a regression from see | 0 |
355,645 | 25,175,987,784 | IssuesEvent | 2022-11-11 09:18:34 | songivan00/pe | https://api.github.com/repos/songivan00/pe | opened | Target audience not stated in the UG | severity.VeryLow type.DocumentationBug | Target audience was not specified clearly in the UG.

<!--session: 1668152972894-739c19b4-2776-4769-8c08-e38b03e8c9d1-->
<!--Version: Web v3.4.4--> | 1.0 | Target audience not stated in the UG - Target audience was not specified clearly in the UG.

<!--session: 1668152972894-739c19b4-2776-4769-8c08-e38b03e8c9d1-->
<!--Version: Web v3.4.4--> | non_priority | target audience not stated in the ug target audience was not specified clearly in the ug | 0 |
16,232 | 5,231,749,446 | IssuesEvent | 2017-01-30 05:14:03 | wkretschmer/CirclePuzzles | https://api.github.com/repos/wkretschmer/CirclePuzzles | opened | Investigate numerical stability | code cleanup enhancement | Some things to consider:
- Geometric computations that lose stability near edge cases.
- Edge cases in `BigDecimalMath` computations. One thing I've seen fail before is `atan2` when `x` is very close to zero.
- Implementing a Taylor series for cosine.
- Taking sine and cosine modulo 2*pi for better Taylor convergence. | 1.0 | Investigate numerical stability - Some things to consider:
- Geometric computations that lose stability near edge cases.
- Edge cases in `BigDecimalMath` computations. One thing I've seen fail before is `atan2` when `x` is very close to zero.
- Implementing a Taylor series for cosine.
- Taking sine and cosine modulo 2*pi for better Taylor convergence. | non_priority | investigate numerical stability some things to consider geometric computations that lose stability near edge cases edge cases in bigdecimalmath computations one thing i ve seen fail before is when x is very close to zero implementing a taylor series for cosine taking sine and cosine modulo pi for better taylor convergence | 0 |
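The two Taylor-series bullets in the record above can be combined into a small sketch: a cosine built from its Taylor series, with the argument reduced modulo 2*pi first so the series converges quickly. This is illustrative only; the project's `BigDecimalMath` is not used here, and plain floats stand in for arbitrary precision:

```python
import math

def cos_taylor(x, terms=20):
    """Cosine via its Taylor series, after reducing the argument
    mod 2*pi so the series converges quickly."""
    x = math.fmod(x, 2 * math.pi)  # argument reduction for convergence
    total, term = 1.0, 1.0
    for n in range(1, terms):
        # Each Taylor term is the previous one times -x^2 / ((2n-1)(2n)).
        term *= -x * x / ((2 * n - 1) * (2 * n))
        total += term
    return total

print(abs(cos_taylor(100.0) - math.cos(100.0)) < 1e-12)  # → True
```

Without the `fmod` reduction, a large argument like 100.0 would need far more terms and would suffer badly from cancellation between huge intermediate terms.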
406,038 | 27,547,404,077 | IssuesEvent | 2023-03-07 12:48:04 | AhamSammich/lets-play-koikoi | https://api.github.com/repos/AhamSammich/lets-play-koikoi | closed | Create a How-To-Play page | documentation enhancement | Provide a brief overview of gameplay and controls. Incorporate some animated gifs to demonstrate features. | 1.0 | Create a How-To-Play page - Provide a brief overview of gameplay and controls. Incorporate some animated gifs to demonstrate features. | non_priority | create a how to play page provide a brief overview of gameplay and controls incorporate some animated gifs to demonstrate features | 0 |
427,221 | 29,803,482,322 | IssuesEvent | 2023-06-16 09:51:20 | SidorovAI-224/ControlWorks | https://api.github.com/repos/SidorovAI-224/ControlWorks | opened | Control Work 1 | documentation | 1. How does a cybernetic subject interact with an object?
The main form of interaction is the exchange of information between the subject and the object. A cybernetic subject can observe the object and receive data and information from it. The subject can also influence the object by transmitting signals, commands, or data, or by performing certain actions. The interaction of a cybernetic subject with an object can be automated, when the interaction follows predefined programmed algorithms, or controlled, when a human interacts with the object through an interface and control tools. In a digital environment, a cybernetic subject can interact with an object over the Internet using communication protocols, web services, software, and other technologies.
2. How did the first system monitor work?
Many manual operator actions led to time delays and to the computer standing idle.
To eliminate these delays, the first special control programs - monitors - were developed for automatic transitions between jobs. The system monitor is a forerunner of the OS; it worked in a batch mode of automated job processing, meaning that jobs were grouped into batches. The first system monitors were implemented on early mainframes and minicomputers in the 1960s and 1970s. They provided basic information about the state of the system, such as CPU load, memory usage, data input/output, and more. The information was displayed as numbers, tables, or text messages. | 1.0 | Control Work 1 - 1. How does a cybernetic subject interact with an object?
The main form of interaction is the exchange of information between the subject and the object. A cybernetic subject can observe the object and receive data and information from it. The subject can also influence the object by transmitting signals, commands, or data, or by performing certain actions. The interaction of a cybernetic subject with an object can be automated, when the interaction follows predefined programmed algorithms, or controlled, when a human interacts with the object through an interface and control tools. In a digital environment, a cybernetic subject can interact with an object over the Internet using communication protocols, web services, software, and other technologies.
2. How did the first system monitor work?
Many manual operator actions led to time delays and to the computer standing idle.
To eliminate these delays, the first special control programs - monitors - were developed for automatic transitions between jobs. The system monitor is a forerunner of the OS; it worked in a batch mode of automated job processing, meaning that jobs were grouped into batches. The first system monitors were implemented on early mainframes and minicomputers in the 1960s and 1970s. They provided basic information about the state of the system, such as CPU load, memory usage, data input/output, and more. The information was displayed as numbers, tables, or text messages. | non_priority | control work how does a cybernetic subject interact with an object the main form of interaction is the exchange of information between the subject and the object a cybernetic subject can observe the object and receive data and information from it the subject can also influence the object by transmitting signals commands or data or by performing certain actions the interaction of a cybernetic subject with an object can be automated when the interaction follows predefined programmed algorithms or controlled when a human interacts with the object through an interface and control tools in a digital environment a cybernetic subject can interact with an object over the internet using communication protocols web services software and other technologies how did the first system monitor work many manual operator actions led to time delays and to the computer standing idle to eliminate these delays the first special control programs monitors were developed for automatic transitions between jobs the system monitor is a forerunner of the os it worked in a batch mode of automated job processing meaning that jobs were grouped into batches the first system monitors were implemented on early mainframes and minicomputers in the s of the last century they provided basic information about the state of the system such as cpu load memory usage data input output and more the information was displayed as numbers tables or text messages | 0
420,890 | 28,302,920,876 | IssuesEvent | 2023-04-10 08:05:36 | risingwavelabs/risingwave-docs | https://api.github.com/repos/risingwavelabs/risingwave-docs | closed | Document the optional parameter `offset` in tumble and hop time window functions | documentation | ### Related code PR
https://github.com/risingwavelabs/risingwave/pull/8490
### Which part(s) of the docs might be affected or should be updated? And how?
SQL -> Functions -> Time window functions
### Reference
_No response_ | 1.0 | Document the optional parameter `offset` in tumble and hop time window functions - ### Related code PR
https://github.com/risingwavelabs/risingwave/pull/8490
### Which part(s) of the docs might be affected or should be updated? And how?
SQL -> Functions -> Time window functions
### Reference
_No response_ | non_priority | document the optional parameter offset in tumble and hop time window functions related code pr which part s of the docs might be affected or should be updated and how sql functions time window functions reference no response | 0 |
15,139 | 3,927,169,028 | IssuesEvent | 2016-04-23 11:28:21 | MarlinFirmware/Marlin | https://api.github.com/repos/MarlinFirmware/Marlin | closed | Synchronized vs unsynchronized commands | Documentation Issue Inactive | Hello!
Is there any documentation about what commands are synchronized/buffered and which ones are executed immediately?
I see a lot of confusion about that point. The [documentation for M400](http://www.marlinfirmware.org/index.php/M400) states:
This command should rarely be needed since non-movement commands should already wait,
but M400 can be useful as a workaround for badly-behaved commands.
But looking at the code most of the non-movement commands such as `M104`, `M106`, `M42`, `M280` don't call `st_synchronize()` thus appear to be executed immediately (well, as soon as the command buffer is processed but not waiting the motion queue to be finished).
In this GitHub issue tracker I found several conflicting statements about `M106` being synchronized and not.
What's the situation? Can this be documented clearly? Thank you! :) | 1.0 | Synchronized vs unsynchronized commands - Hello!
Is there any documentation about what commands are synchronized/buffered and which ones are executed immediately?
I see a lot of confusion about that point. The [documentation for M400](http://www.marlinfirmware.org/index.php/M400) states:
This command should rarely be needed since non-movement commands should already wait,
but M400 can be useful as a workaround for badly-behaved commands.
But looking at the code most of the non-movement commands such as `M104`, `M106`, `M42`, `M280` don't call `st_synchronize()` thus appear to be executed immediately (well, as soon as the command buffer is processed but not waiting the motion queue to be finished).
In this GitHub issue tracker I found several conflicting statements about `M106` being synchronized and not.
What's the situation? Can this be documented clearly? Thank you! :) | non_priority | synchronized vs unsynchronized commands hello is there any documentation about what commands are synchronized buffered and which ones are executed immediately i see a lot of confusion about that point the states this command should rarely be needed since non movement commands should already wait but can be useful as a workaround for badly behaved commands but looking at the code most of the non movement commands such as don t call st synchronize thus appear to be executed immediately well as soon as the command buffer is processed but not waiting the motion queue to be finished in this github issue tracker i found several conflicting statements about being synchronized and not what s the situation can this be documented clearly thank you | 0 |
98,579 | 30,011,688,862 | IssuesEvent | 2023-06-26 15:39:39 | project-chip/connectedhomeip | https://api.github.com/repos/project-chip/connectedhomeip | opened | [Build] Test suite on linux failed | build issue needs triage | ### Build issue(s)
https://github.com/project-chip/connectedhomeip/actions/runs/5376979026/jobs/9754861432?pr=27472
```
ERROR 12:08:59.759 - CHIP_REPL_YAML_TESTER OUT: [1687781339.759214][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Advertise commission parameter vendorID=65521 productID=32769 discriminator=3840/15 cm=1
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 387FE5E77EFE6E10._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 6EA6A637A5520000.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 6EA6A637A5520000.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _V65521._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _T65535._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _S15._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _L3840._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _CM._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 387FE5E77EFE6E10._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: CHIP minimal mDNS configured as 'Commissionable node device'; instance name: 387FE5E77EFE6E10.
ERROR 12:08:59.760 - CHIP_REPL_YAML_TESTER OUT: [1687781339.760566][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:08:59.761 - APP OUT : CHIP:DIS: mDNS service published: _matterc._udp
ERROR 12:08:59.761 - CHIP_REPL_YAML_TESTER OUT: [1687781339.761332][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:09:05.694 - CHIP_REPL_YAML_TESTER ERR: WARNING:root:Test step failure in Wait for the commissioned device to be retrieved
ERROR 12:09:05.694 - CHIP_REPL_YAML_TESTER ERR: WARNING:root:PostProcessCheckStatus.ERROR: The test expects no error but the "FAILURE" error occured.
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: Traceback (most recent call last):
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/__w/connectedhomeip/connectedhomeip/scripts/tests/chiptest/yamltest_with_chip_repl_tester.py", line 146, in main
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: asyncio.run(execute_test(yaml, runner))
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: return loop.run_until_complete(main)
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: return future.result()
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/__w/connectedhomeip/connectedhomeip/scripts/tests/chiptest/yamltest_with_chip_repl_tester.py", line 84, in execute_test
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: raise Exception(f'Test step failed {test_step.label}')
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: Exception: Test step failed Wait for the commissioned device to be retrieved
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT:
```
### Platform
_No response_
### Anything else?
_No response_ | 1.0 | [Build] Test suite on linux failed - ### Build issue(s)
https://github.com/project-chip/connectedhomeip/actions/runs/5376979026/jobs/9754861432?pr=27472
```
ERROR 12:08:59.759 - CHIP_REPL_YAML_TESTER OUT: [1687781339.759214][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Advertise commission parameter vendorID=65521 productID=32769 discriminator=3840/15 cm=1
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 387FE5E77EFE6E10._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 6EA6A637A5520000.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 6EA6A637A5520000.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _V65521._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _T65535._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _S15._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _L3840._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with _CM._sub._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: Responding with 387FE5E77EFE6E10._matterc._udp.local
ERROR 12:08:59.759 - APP OUT : CHIP:DIS: CHIP minimal mDNS configured as 'Commissionable node device'; instance name: 387FE5E77EFE6E10.
ERROR 12:08:59.760 - CHIP_REPL_YAML_TESTER OUT: [1687781339.760566][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:08:59.761 - APP OUT : CHIP:DIS: mDNS service published: _matterc._udp
ERROR 12:08:59.761 - CHIP_REPL_YAML_TESTER OUT: [1687781339.761332][41516:41526] CHIP:CTL: Unknown filter type; all matches will fail
ERROR 12:09:05.694 - CHIP_REPL_YAML_TESTER ERR: WARNING:root:Test step failure in Wait for the commissioned device to be retrieved
ERROR 12:09:05.694 - CHIP_REPL_YAML_TESTER ERR: WARNING:root:PostProcessCheckStatus.ERROR: The test expects no error but the "FAILURE" error occured.
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: Traceback (most recent call last):
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/__w/connectedhomeip/connectedhomeip/scripts/tests/chiptest/yamltest_with_chip_repl_tester.py", line 146, in main
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: asyncio.run(execute_test(yaml, runner))
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: return loop.run_until_complete(main)
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: return future.result()
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: File "/__w/connectedhomeip/connectedhomeip/scripts/tests/chiptest/yamltest_with_chip_repl_tester.py", line 84, in execute_test
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: raise Exception(f'Test step failed {test_step.label}')
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT: Exception: Test step failed Wait for the commissioned device to be retrieved
ERROR 12:09:05.697 - CHIP_REPL_YAML_TESTER OUT:
```
### Platform
_No response_
### Anything else?
_No response_ | non_priority | test suite on linux failed build issue s error chip repl yaml tester out chip ctl unknown filter type all matches will fail error app out chip dis advertise commission parameter vendorid productid discriminator cm error app out chip dis responding with matterc udp local error app out chip dis responding with matterc udp local error app out chip dis responding with local error app out chip dis responding with local error app out chip dis responding with sub matterc udp local error app out chip dis responding with sub matterc udp local error app out chip dis responding with sub matterc udp local error app out chip dis responding with sub matterc udp local error app out chip dis responding with cm sub matterc udp local error app out chip dis responding with matterc udp local error app out chip dis chip minimal mdns configured as commissionable node device instance name error chip repl yaml tester out chip ctl unknown filter type all matches will fail error app out chip dis mdns service published matterc udp error chip repl yaml tester out chip ctl unknown filter type all matches will fail error chip repl yaml tester err warning root test step failure in wait for the commissioned device to be retrieved error chip repl yaml tester err warning root postprocesscheckstatus error the test expects no error but the failure error occured error chip repl yaml tester out traceback most recent call last error chip repl yaml tester out file w connectedhomeip connectedhomeip scripts tests chiptest yamltest with chip repl tester py line in main error chip repl yaml tester out asyncio run execute test yaml runner error chip repl yaml tester out file usr lib asyncio runners py line in run error chip repl yaml tester out return loop run until complete main error chip repl yaml tester out file usr lib asyncio base events py line in run until complete error chip repl yaml tester out return future result error chip repl yaml tester out file w connectedhomeip 
connectedhomeip scripts tests chiptest yamltest with chip repl tester py line in execute test error chip repl yaml tester out raise exception f test step failed test step label error chip repl yaml tester out exception test step failed wait for the commissioned device to be retrieved error chip repl yaml tester out platform no response anything else no response | 0 |
245,515 | 18,786,006,503 | IssuesEvent | 2021-11-08 12:13:06 | vshn/signalilo | https://api.github.com/repos/vshn/signalilo | closed | Open source Icinga2 setup documentation | documentation | I'm currently working to get Signalilo working.
But I need to know how to configure Icinga 2 Hosts and Services. But there is only a link to your internal wiki.
https://wiki.vshn.net/display/VT/Icinga2+passive+checks
I will try now with this presentation: https://www.slideshare.net/icinga/signalilo-visualizing-prometheus-alerts-in-icinga2-icinga-camp-zurich-2019 | 1.0 | Open source Icinga2 setup documentation - I'm currently working to get Signalilo working.
But I need to know how to configure Icinga 2 Hosts and Services. But there is only a link to your internal wiki.
https://wiki.vshn.net/display/VT/Icinga2+passive+checks
I will try now with this presentation: https://www.slideshare.net/icinga/signalilo-visualizing-prometheus-alerts-in-icinga2-icinga-camp-zurich-2019 | non_priority | open source setup documentation i m currently working to get signalilo working but i need to know how to configure icinga hosts and services but there is only a link to your internal wiki i will try now with this presentation | 0 |
46,209 | 7,243,018,265 | IssuesEvent | 2018-02-14 10:10:11 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | kube-apiserver crashes when configured via insecure connection | kind/documentation kind/support lifecycle/rotten sig/api-machinery | I've set up kubernetes v1.2.4 (also tried v1.3.0-beta.2) on AWS according to http://kubernetes.io/docs/getting-started-guides/aws/.
Then I changed the kube-apiserver configuration on the master node to be able to access it via http, by adding the following to `/etc/kubernetes/manifests/kube-apiserver.manifest`:
```
--cors-allowed-origins=.*
--insecure-bind-address=0.0.0.0
--insecure-port=8888
```
After these changes I can make requests via http, but a few minutes later the server stops responding and I see in `kube-apiserver.log` exceptions like this:
```
I0628 19:38:18.459293 7 handlers.go:152] GET /api/v1/watch/services?resourceVersion=50104&timeoutSeconds=378: (611.892µs) 410
goroutine 349 [running]:
k8s.io/kubernetes/pkg/httplog.(*respLogger).recordStatus(0xc209296150, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/httplog/log.go:214 +0xa6
k8s.io/kubernetes/pkg/httplog.(*respLogger).WriteHeader(0xc209296150, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/httplog/log.go:193 +0x32
k8s.io/kubernetes/pkg/apiserver/metrics.(*responseWriterDelegator).WriteHeader(0xc208d5bbc0, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go:117 +0x53
k8s.io/kubernetes/pkg/apiserver.writeNegotiated(0x7f5b6e2cf368, 0xc2083e13e0, 0x0, 0x0, 0x1d4da80, 0x2, 0x7f5b6e1246d8, 0xc2085ebf28, 0xc2092420d0, 0x19a, ...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/apiserver.go:429 +0x174
k8s.io/kubernetes/pkg/apiserver.errorNegotiated(0x7f5b6e2c27b8, 0xc209298180, 0x7f5b6e2cf368, 0xc2083e13e0, 0x0, 0x0, 0x1d4da80, 0x2, 0x7f5b6e1246d8, 0xc2085ebf28, ...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/apiserver.go:441 +0xdd
k8s.io/kubernetes/pkg/apiserver.(*RequestScope).err(0xc2085244e0, 0x7f5b6e2c27b8, 0xc209298180, 0x7f5b6e1246d8, 0xc2085ebf28, 0xc2092420d0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/resthandler.go:84 +0x11f
k8s.io/kubernetes/pkg/apiserver.func·027(0xc208d5bb30, 0xc20923f6e0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/resthandler.go:295 +0x1055
k8s.io/kubernetes/pkg/apiserver/metrics.func·001(0xc208d5bb30, 0xc20923f6e0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go:101 +0x269
github.com/emicklei/go-restful.(*Container).dispatch(0xc2083dcc60, 0x7f5b6e124380, 0xc209296150, 0xc2092420d0)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/emicklei/go-restful/container.go:249 +0xf5e
github.com/emicklei/go-restful.*Container.(github.com/emicklei/go-re [[kubelet/v1.2.4 (linux/amd64) kubernetes/3eed1e3] 172.20.0.184:56673]
```
Logs are for v1.2.4.
But a few minutes later when kube-apiserver restarted, I can get responses again for a minute, and the same outage repeats again. The instance CPU load is less than 10%.
| 1.0 | kube-apiserver crashes when configured via insecure connection - I've set up kubernetes v1.2.4 (also tried v1.3.0-beta.2) on AWS according to http://kubernetes.io/docs/getting-started-guides/aws/.
Then I changed the kube-apiserver configuration on the master node to be able to access it via http, by adding the following to `/etc/kubernetes/manifests/kube-apiserver.manifest`:
```
--cors-allowed-origins=.*
--insecure-bind-address=0.0.0.0
--insecure-port=8888
```
After these changes I can make requests via http, but a few minutes later the server stops responding and I see in `kube-apiserver.log` exceptions like this:
```
I0628 19:38:18.459293 7 handlers.go:152] GET /api/v1/watch/services?resourceVersion=50104&timeoutSeconds=378: (611.892µs) 410
goroutine 349 [running]:
k8s.io/kubernetes/pkg/httplog.(*respLogger).recordStatus(0xc209296150, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/httplog/log.go:214 +0xa6
k8s.io/kubernetes/pkg/httplog.(*respLogger).WriteHeader(0xc209296150, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/httplog/log.go:193 +0x32
k8s.io/kubernetes/pkg/apiserver/metrics.(*responseWriterDelegator).WriteHeader(0xc208d5bbc0, 0x19a)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go:117 +0x53
k8s.io/kubernetes/pkg/apiserver.writeNegotiated(0x7f5b6e2cf368, 0xc2083e13e0, 0x0, 0x0, 0x1d4da80, 0x2, 0x7f5b6e1246d8, 0xc2085ebf28, 0xc2092420d0, 0x19a, ...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/apiserver.go:429 +0x174
k8s.io/kubernetes/pkg/apiserver.errorNegotiated(0x7f5b6e2c27b8, 0xc209298180, 0x7f5b6e2cf368, 0xc2083e13e0, 0x0, 0x0, 0x1d4da80, 0x2, 0x7f5b6e1246d8, 0xc2085ebf28, ...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/apiserver.go:441 +0xdd
k8s.io/kubernetes/pkg/apiserver.(*RequestScope).err(0xc2085244e0, 0x7f5b6e2c27b8, 0xc209298180, 0x7f5b6e1246d8, 0xc2085ebf28, 0xc2092420d0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/resthandler.go:84 +0x11f
k8s.io/kubernetes/pkg/apiserver.func·027(0xc208d5bb30, 0xc20923f6e0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/resthandler.go:295 +0x1055
k8s.io/kubernetes/pkg/apiserver/metrics.func·001(0xc208d5bb30, 0xc20923f6e0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go:101 +0x269
github.com/emicklei/go-restful.(*Container).dispatch(0xc2083dcc60, 0x7f5b6e124380, 0xc209296150, 0xc2092420d0)
/go/src/k8s.io/kubernetes/Godeps/_workspace/src/github.com/emicklei/go-restful/container.go:249 +0xf5e
github.com/emicklei/go-restful.*Container.(github.com/emicklei/go-re [[kubelet/v1.2.4 (linux/amd64) kubernetes/3eed1e3] 172.20.0.184:56673]
```
Logs are for v1.2.4.
But a few minutes later when kube-apiserver restarted, I can get responses again for a minute, and the same outage repeats again. The instance CPU load is less than 10%.
| non_priority | kube apiserver crashes when configured via insecure connection i ve set up kubernetes also tried beta on aws according to then i changed on master node configuration of kube apiserver to be able to access it via http by adding to etc kubernetes manifests kube apiserver manifest cors allowed origins insecure bind address insecure port after this changes i can make requests via http but a few minutes later server stops responding and i see in kube apiserver log exceptions like this handlers go get api watch services resourceversion timeoutseconds goroutine io kubernetes pkg httplog resplogger recordstatus go src io kubernetes output dockerized go src io kubernetes pkg httplog log go io kubernetes pkg httplog resplogger writeheader go src io kubernetes output dockerized go src io kubernetes pkg httplog log go io kubernetes pkg apiserver metrics responsewriterdelegator writeheader go src io kubernetes output dockerized go src io kubernetes pkg apiserver metrics metrics go io kubernetes pkg apiserver writenegotiated go src io kubernetes output dockerized go src io kubernetes pkg apiserver apiserver go io kubernetes pkg apiserver errornegotiated go src io kubernetes output dockerized go src io kubernetes pkg apiserver apiserver go io kubernetes pkg apiserver requestscope err go src io kubernetes output dockerized go src io kubernetes pkg apiserver resthandler go io kubernetes pkg apiserver func· go src io kubernetes output dockerized go src io kubernetes pkg apiserver resthandler go io kubernetes pkg apiserver metrics func· go src io kubernetes output dockerized go src io kubernetes pkg apiserver metrics metrics go github com emicklei go restful container dispatch go src io kubernetes godeps workspace src github com emicklei go restful container go github com emicklei go restful container github com emicklei go re logs are for but a few minutes later when kube apiserver restarted i can get responses again for a minute and the same outage repeats again 
the instance cpu load is less then | 0 |
57,193 | 11,724,888,678 | IssuesEvent | 2020-03-10 11:52:54 | sbrl/Pepperminty-Wiki | https://api.github.com/repos/sbrl/Pepperminty-Wiki | opened | New shell: We forgot to add to the readline history | Area: Code bug | We forgot to add to the readline history. We did this in the bktreetest.php test, but forgot in the actual shell itself.....! | 1.0 | New shell: We forgot to add to the readline history - We forgot to add to the readline history. We did this in the bktreetest.php test, but forgot in the actual shell itself.....! | non_priority | new shell we forgot to add to the readline history we forgot to add to the readline history we did this in the bktreetest php test but forgot in the actual shell itself | 0 |
47,510 | 19,663,346,786 | IssuesEvent | 2022-01-10 19:26:40 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | aws_fsx_openzfs_file_system runtime error: invalid memory address or nil pointer dereference | bug crash service/fsx | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
```
Terraform v1.0.11
on darwin_arm64
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/aws v3.71.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.1.2
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.7.1
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.1
Your version of Terraform is out of date! The latest version
is 1.1.3. You can update by downloading from https://www.terraform.io/downloads.html
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_fsx_openzfs_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
resource "aws_fsx_openzfs_file_system" "fs" {
storage_capacity = 1024
subnet_ids = [module.vpc.private_subnets[0]]
deployment_type = "SINGLE_AZ_1"
throughput_capacity = 64
security_group_ids = [aws_security_group.allow_nfs.id]
root_volume_configuration {
data_compression_type = "ZSTD"
}
automatic_backup_retention_days = 0
tags = local.default_tags
}
```
### Panic Output
https://gist.github.com/XciD/5fca7321430fdc6f75f390271d743753
### Expected Behavior
Create an OpenZFS Filesystem
### Actual Behavior
Creates the OpenZFS Filesystem but crashes at the end. FS is created; re-apply will create a new FS
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
Relates #22234. | 1.0 | aws_fsx_openzfs_file_system runtime error: invalid memory address or nil pointer dereference - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
```
Terraform v1.0.11
on darwin_arm64
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/aws v3.71.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.1.2
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.7.1
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.1
Your version of Terraform is out of date! The latest version
is 1.1.3. You can update by downloading from https://www.terraform.io/downloads.html
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_fsx_openzfs_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
resource "aws_fsx_openzfs_file_system" "fs" {
storage_capacity = 1024
subnet_ids = [module.vpc.private_subnets[0]]
deployment_type = "SINGLE_AZ_1"
throughput_capacity = 64
security_group_ids = [aws_security_group.allow_nfs.id]
root_volume_configuration {
data_compression_type = "ZSTD"
}
automatic_backup_retention_days = 0
tags = local.default_tags
}
```
### Panic Output
https://gist.github.com/XciD/5fca7321430fdc6f75f390271d743753
### Expected Behavior
Create an OpenZFS Filesystem
### Actual Behavior
Creates the OpenZFS Filesystem but crashes at the end. FS is created; re-apply will create a new FS
### Steps to Reproduce
1. `terraform apply`
### Important Factoids
Relates #22234. | non_priority | aws fsx openzfs file system runtime error invalid memory address or nil pointer dereference please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform on darwin provider registry terraform io gavinbunney kubectl provider registry terraform io hashicorp aws provider registry terraform io hashicorp cloudinit provider registry terraform io hashicorp helm provider registry terraform io hashicorp http provider registry terraform io hashicorp kubernetes provider registry terraform io hashicorp local provider registry terraform io hashicorp null provider registry terraform io hashicorp tls provider registry terraform io terraform aws modules http your version of terraform is out of date the latest version is you can update by downloading from affected resource s aws fsx openzfs file system terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation hcl resource aws fsx openzfs file system fs storage capacity subnet ids deployment type single az throughput capacity security group ids root volume configuration data compression type zstd automatic backup retention days tags local default tags panic output expected behavior create a openzfs 
filesystem actual behavior create the openzfs filesytem but crash at the end fs is created re apply will create a new fs steps to reproduce terraform apply important factoids relates | 0 |
45,653 | 11,712,159,985 | IssuesEvent | 2020-03-09 07:38:03 | ShaikASK/Testing | https://api.github.com/repos/ShaikASK/Testing | opened | Imitated Hire /Accepted Hire /Completed Hire deleted using postman is still displayed in the frontend | Defect New Hire P2 Release#7 Build#02 |
Steps To Replicate :
1. Launch the URL
2. Launch the Postman application
3. Try to delete initiated/Accepted/Completed hire
Experienced Behavior : Observed that the success message “hire deleted successfully” is displayed upon deleting an initiated/Accepted/Completed hire from the backend using Postman, but the deleted hire is still displayed in the frontend
Expected Behavior : Ensure that deleting an initiated/Accepted/Completed hire should not be allowed
| 1.0 | Imitated Hire /Accepted Hire /Completed Hire deleted using postman is still displayed in the frontend -
Steps To Replicate :
1. Launch the URL
2. Launch the Postman application
3. Try to delete initiated/Accepted/Completed hire
Experienced Behavior : Observed that the success message “hire deleted successfully” is displayed upon deleting an initiated/Accepted/Completed hire from the backend using Postman, but the deleted hire is still displayed in the frontend
Expected Behavior : Ensure that deleting an initiated/Accepted/Completed hire should not be allowed
| non_priority | imitated hire accepted hire completed hire deleted using postman is still displayed in the frontend steps to replicate launch the url launch the postman application try to delete initiated accepted completed hire experienced behavior observed that success message is displayed as “hire deleted successfully” upon deleting initiated accepted completed hire from backend using postman but deleted hire is still displayed in the frontend expected behavior ensure that deleting initiated accepted completed hire should not be allowed | 0 |
278,715 | 30,702,390,576 | IssuesEvent | 2023-07-27 01:26:03 | Nivaskumark/CVE-2020-0074-frameworks_base | https://api.github.com/repos/Nivaskumark/CVE-2020-0074-frameworks_base | reopened | CVE-2019-2232 (High) detected in baseandroid-11.0.0_r39 | Mend: dependency security vulnerability | ## CVE-2019-2232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-11.0.0_r39</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0074-frameworks_base/commit/f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027">f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/text/TextLine.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In handleRun of TextLine.java, there is a possible application crash due to improper input validation. This could lead to remote denial of service when processing Unicode with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10Android ID: A-140632678
<p>Publish Date: 2019-12-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-2232>CVE-2019-2232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232</a></p>
<p>Release Date: 2019-12-06</p>
<p>Fix Resolution: android-8.0.0_r41;android-8.1.0_r71;android-9.0.0_r51;android-10.0.0_r17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-2232 (High) detected in baseandroid-11.0.0_r39 - ## CVE-2019-2232 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-11.0.0_r39</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/CVE-2020-0074-frameworks_base/commit/f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027">f63c00c11df9fe4c62ee2ed7d5f72e3a7ebec027</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/core/java/android/text/TextLine.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In handleRun of TextLine.java, there is a possible application crash due to improper input validation. This could lead to remote denial of service when processing Unicode with no additional execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10Android ID: A-140632678
<p>Publish Date: 2019-12-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-2232>CVE-2019-2232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-2232</a></p>
<p>Release Date: 2019-12-06</p>
<p>Fix Resolution: android-8.0.0_r41;android-8.1.0_r71;android-9.0.0_r51;android-10.0.0_r17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in baseandroid cve high severity vulnerability vulnerable library baseandroid android framework classes and services library home page a href found in head commit a href found in base branch master vulnerable source files core java android text textline java vulnerability details in handlerun of textline java there is a possible application crash due to improper input validation this could lead to remote denial of service when processing unicode with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android android android android step up your open source security game with mend | 0 |
79,905 | 10,144,845,771 | IssuesEvent | 2019-08-05 00:52:37 | whereispolaris/liri-node-app | https://api.github.com/repos/whereispolaris/liri-node-app | closed | Update README | documentation | Need to update README with details on how to install the app locally.
- Download instructions
- Environment Setup Commands (npm modules)
- Available commands in app (movie-this, spotify-this, etc).
- Add Video Demo | 1.0 | Update README - Need to update README with details on how to install the app locally.
- Download instructions
- Environment Setup Commands (npm modules)
- Available commands in app (movie-this, spotify-this, etc).
- Add Video Demo | non_priority | update readme need to update readme with details on how to install the app locally download instructions environment setup commands npm modules available commands in app movie this spotify this etc add video demo | 0 |
259,515 | 27,632,614,686 | IssuesEvent | 2023-03-10 12:05:41 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | closed | [SecurityWeek] Watch Sessions: Ransomware Resilience & Recovery Summit | SecurityWeek Stale |
Watch sessions from SecurityWeek’s Ransomware Resilience & Recovery Summit, a virtual event designed to help businesses to plan, prepare, and recover from a ransomware incident.
The post [Watch Sessions: Ransomware Resilience & Recovery Summit](https://www.securityweek.com/virtual-event-tomorrow-ransomware-resilience-recovery-summit/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/virtual-event-tomorrow-ransomware-resilience-recovery-summit/>
| True | [SecurityWeek] Watch Sessions: Ransomware Resilience & Recovery Summit -
Watch sessions from SecurityWeek’s Ransomware Resilience & Recovery Summit, a virtual event designed to help businesses to plan, prepare, and recover from a ransomware incident.
The post [Watch Sessions: Ransomware Resilience & Recovery Summit](https://www.securityweek.com/virtual-event-tomorrow-ransomware-resilience-recovery-summit/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/virtual-event-tomorrow-ransomware-resilience-recovery-summit/>
| non_priority | watch sessions ransomware resilience recovery summit watch sessions from securityweek’s ransomware resilience recovery summit a virtual event designed to help businesses to plan prepare and recover from a ransomware incident the post appeared first on | 0 |
219,806 | 17,112,962,980 | IssuesEvent | 2021-07-10 18:20:53 | x0b/rcx | https://api.github.com/repos/x0b/rcx | closed | Unable to sync with nextcloud (The Good Host provider) | Needs Retest 🐞 Bug 🔧 Configuration | <!--
If you just have a question around RCX usage, you might find something in the documentation:
https://x0b.github.io/docs/
If you have a question regarding rclone functionality (e.g. config files), the forum is a good place to ask:
https://forum.rclone.org/
If you still think you have found a bug, please fill out the following questions before submitting your issue. Thank you :)
-->
#### What version of RCX are you using (About -> App version)?
1.11.4, Rclone v1.51.0
#### What is your Android version, phone model and manufacturer?
LineageOS 17.1 using Android 10 (QQ3A.200805.001), ONE A3008 (OnePlus 2), OnePlus
#### Which steps are required to reproduce this issue?
<!--
Example:
1. Open a remote
1. Select "Delete" on a folder
2. RCX crashes
Please also enable rclone logging (Settings > Logging > Log Rclone errors). You're going to need the log for the last question.
-->
0. Get free account from https://thegood.cloud/en/individuals and verify your e-mail
1. Create the new remote with following:
Type: WebDAV
Remote name: The Good Host
url of the http host to connect to: (either of)
- use02.thegood.cloud
- https://use02.thegood.cloud
- https://use02.thegood.cloud/remote.php/dav/files/kreyren@fsfe.org/
User name: <YOUR_USERNAME> e.g. kreyren@fsfe.org
Password: <YOUR_PASSWORD> e.g. `6GAj#%zn9~hCuemAcfsx`VshAvcZ#8LCf~rTNg~5PYd@9PfeGSop8FkUDPLismbkfEw9JK2JHa#zvdYFL8%ZbQg@TYax`~%WVjAcv8#GLF`x8wRWfk`%dcvZsQ7j2wGK` (generated from KeepAssXC)
- Name of the webdav service: Nextcloud
2. Expect failure to auth
#### What are the contents of ```Android/data/io.github.x0b.rcx/files/logs/log.txt```?
<details><summary><code>log.txt</code> (click to expand) </summary><br><pre>
<!-- Paste the log below this line. Remove anything that contains private/personal information -->
blank
<!-- Keep everything after this line -->
</pre></details>
#### Additional informations
App installed from f-droid | 1.0 | Unable to sync with nextcloud (The Good Host provider) - <!--
If you just have a question around RCX usage, you might find something in the documentation:
https://x0b.github.io/docs/
If you have a question regarding rclone functionality (e.g. config files), the forum is a good place to ask:
https://forum.rclone.org/
If you still think you have found a bug, please fill out the following questions before submitting your issue. Thank you :)
-->
#### What version of RCX are you using (About -> App version)?
1.11.4, Rclone v1.51.0
#### What is your Android version, phone model and manufacturer?
LineageOS 17.1 using Android 10 (QQ3A.200805.001), ONE A3008 (OnePlus 2), OnePlus
#### Which steps are required to reproduce this issue?
<!--
Example:
1. Open a remote
1. Select "Delete" on a folder
2. RCX crashes
Please also enable rclone logging (Settings > Logging > Log Rclone errors). You're going to need the log for the last question.
-->
0. Get free account from https://thegood.cloud/en/individuals and verify your e-mail
1. Create the new remote with following:
Type: WebDAV
Remote name: The Good Host
url of the http host to connect to: (either of)
- use02.thegood.cloud
- https://use02.thegood.cloud
- https://use02.thegood.cloud/remote.php/dav/files/kreyren@fsfe.org/
User name: <YOUR_USERNAME> e.g. kreyren@fsfe.org
Password: <YOUR_PASSWORD> e.g. `6GAj#%zn9~hCuemAcfsx`VshAvcZ#8LCf~rTNg~5PYd@9PfeGSop8FkUDPLismbkfEw9JK2JHa#zvdYFL8%ZbQg@TYax`~%WVjAcv8#GLF`x8wRWfk`%dcvZsQ7j2wGK` (generated from KeepAssXC)
- Name of the webdav service: Nextcloud
2. Expect failure to auth
#### What are the contents of ```Android/data/io.github.x0b.rcx/files/logs/log.txt```?
<details><summary><code>log.txt</code> (click to expand) </summary><br><pre>
<!-- Paste the log below this line. Remove anything that contains private/personal information -->
blank
<!-- Keep everything after this line -->
</pre></details>
#### Additional informations
App installed from f-droid | non_priority | unable to sync with nextcloud the good host provider if you just have a question around rcx usage you might find something in the documentation if you have a question regarding rclone functionality e g config files the forum is a good place to ask if you still think you have found a bug please fill out the following questions before submitting your issue thank you what version of rcx are you using about app version rclone what is your android version phone model and manufacturer lineageos using android one oneplus oneplus which steps are required to reproduce this issue example open a remote select delete on a folder rcx crashes please also enable rclone logging settings logging log rclone errors you re going to need the log for the last question get free account from and verify your e mail create the new remote with following type webdav remote name the good host url of the http host to connect to either of thegood cloud user name e g kreyren fsfe org password e g hcuemacfsx vshavcz rtng zbqg tyax glf generated from keepassxc name of the webdav service nextcloud expect failure to auth what are the contents of android data io github rcx files logs log txt log txt click to expand blank additional informations app installed from f droid | 0 |
92,487 | 8,365,420,700 | IssuesEvent | 2018-10-04 05:01:45 | jemaineosia/rfoverdose | https://api.github.com/repos/jemaineosia/rfoverdose | closed | Fix tower at the NPC | Fixed. Need to test. | remove current towers on the NPC and replace with
gtbb015 C0070010 Advanced Dark Hedge Hawk(Bellato14)
gtcc007 C007001D Advanced Plasma Lance(Cora7)
gtaa007 C0070032 Advanced Linear Canon(Accretia7) | 1.0 | Fix tower at the NPC - remove current towers on the NPC and replace with
gtbb015 C0070010 Advanced Dark Hedge Hawk(Bellato14)
gtcc007 C007001D Advanced Plasma Lance(Cora7)
gtaa007 C0070032 Advanced Linear Canon(Accretia7) | non_priority | fix tower at the npc remove current towers on the npc and replace with advanced dark hedge hawk advanced plasma lance advanced linear canon | 0 |
108,078 | 11,580,008,146 | IssuesEvent | 2020-02-21 19:11:32 | plotly/plotly.py | https://api.github.com/repos/plotly/plotly.py | opened | external Orca server: pio.orca.status should have a more helpful output | documentation | As reported from user @hiramf (https://github.com/plotly/orca/issues/279#issuecomment-5897900690), when using an external orca server, the output of `pio.orca.status` is not very user-friendly:
```ipython
>>> pio.orca.status
orca status
-----------
state: unvalidated
executable: None
version: None
port: None
pid: None
command: None
```
It should at least display the URL of the server and ideally ping it to tell the user it's working/accessible. | 1.0 | external Orca server: pio.orca.status should have a more helpful output - As reported from user @hiramf (https://github.com/plotly/orca/issues/279#issuecomment-5897900690), when using an external orca server, the output of `pio.orca.status` is not very user-friendly:
```ipython
>>> pio.orca.status
orca status
-----------
state: unvalidated
executable: None
version: None
port: None
pid: None
command: None
```
It should at least display the URL of the server and ideally ping it to tell the user it's working/accessible. | non_priority | external orca server pio orca status should have a more helpful output as reported from user hiramf when using an external orca server the output of pio orca status is not very user friendly ipython pio orca status orca status state unvalidated executable none version none port none pid none command none it should at least display the url of the server and ideally ping it to tell the user it s working accessible | 0 |
244,327 | 18,754,697,215 | IssuesEvent | 2021-11-05 09:13:54 | SAP/ui5-webcomponents | https://api.github.com/repos/SAP/ui5-webcomponents | closed | docs: create Accessibility page within Documentation | documentation ACC | ### **Issue Description**
The Documentation lacks information regarding the Accessibility support the UI5 Web Components provide.
See https://github.com/SAP/ui5-webcomponents/issues/3647
### Points to describe
Create a Accessibility page, under Documentation, to provide information about:
- the standard we follow (Web Content Accessibility Guidelines 2.1 Level AA)
- Screen Readers we test on (Jaws + Chrome)
- Keyboard Handling we provide
- the High Contrast Themes we have (Black and White)
- the contrast ratio on all themes for better experience.
- other?
### **Issue Type**
- [ ] Documentation is unclear
- [ ] Documentation is incorrect
- [x] Documentation is missing
- [ ] Other
| 1.0 | docs: create Accessibility page within Documentation - ### **Issue Description**
The Documentation lacks information regarding the Accessibility support the UI5 Web Components provide.
See https://github.com/SAP/ui5-webcomponents/issues/3647
### Points to describe
Create a Accessibility page, under Documentation, to provide information about:
- the standard we follow (Web Content Accessibility Guidelines 2.1 Level AA)
- Screen Readers we test on (Jaws + Chrome)
- Keyboard Handling we provide
- the High Contrast Themes we have (Black and White)
- the contrast ratio on all themes for better experience.
- other?
### **Issue Type**
- [ ] Documentation is unclear
- [ ] Documentation is incorrect
- [x] Documentation is missing
- [ ] Other
| non_priority | docs create accessibility page within documentation issue description the documentation lacks information regarding the accessibility support the web components provide see points to describe create a accessibility page under documentation to provide information about the standard we follow web content accessibility guidelines level aa screen readers we test on jaws chrome keyboard handling we provide the high contrast themes we have black and white the contrast ratio on all themes for better experience other issue type documentation is unclear documentation is incorrect documentation is missing other | 0 |
23,737 | 6,478,434,329 | IssuesEvent | 2017-08-18 07:53:00 | Microsoft/pxt | https://api.github.com/repos/Microsoft/pxt | closed | [Screen Reader-Header Control-Projects]: Proper role is not defined for the 'New Project', 'Import File' and 'Untitled' elements showing under Project dialog box. | A11yBlocking A11yMAS accessibility Closed Fixed HCL HCL-MakeCode MAS4.1.2 Win10-Edge | **Pre-Requisite:** Turn on Narrator(Win + Ctrl + Enter)
**User Experience:**
Users who depends on Screen Reader will get confuse if Name /Role/Control type property is not defined for any control of the web page.
**Test Environment:**
OS: RS2 Version 1703(OS Build 15063.483)
Platform: Edge.
Tool used:-IE Developer (F12)
**Repro Steps:-**
1. Navigate to https://makecode.microbit.org/acc
2. Navigate to the Microbit section element and select Code control given on it.
3. Navigate to the Projects control lying in the header section on the page and select it.
4. Navigate to various controls lying on the page opened.
5. Verify that proper role is defined for all the elements showing on Project dialog box using F12.
**Actual Result:-**
Proper role is not defined for the 'New Project' ,'Import File' and 'Untitled' elements showing under Project dialog box.
**Expected Result:-**
Proper role like button or link should be defined for the 'New Project' ,'Import File' and 'Untitled' elements showing under Project dialog box.
**MAS Reference -**
https://microsoft.sharepoint.com/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={248054a6-5e68-4771-9e1e-242fb5025730}
**Suggested fix:-**
1- Using aria-label to give an invisible name to element, where name/title property is not used
e.g.
'Close' button can have only ‘X’, to make it more meaningful aria-label attribute is used with ‘Close’ text to help AT’s without showing the content on UI
<div id="box">
This is a pop-up box.
< button aria-label="Close" onclick="document.getElementById('box').style.display='none';" class="closebutton"> X < /button>
</div>
2- Using ARIA role to expose the control type of user interface component
e.g.
< div role="toolbar" tabindex="0" id="customToolbar" >
< img src="img/btn1.gif" role="button" tabindex="-1" alt="Home" id="b1" title="Home" >
3- Using ARIA attributes to expose the element’s state
e.g.
< div role="button" id="customToolbar" aria-checked=”” aria-disabled=”” aria-haspopup=”” aria-expanded=”” aria-selected=”” >
Reference:
https://www.w3.org/TR/WCAG20-TECHS/aria.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA16.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA4.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA5.html
**Please refer the attachment**

[MAS4.1.2_Projects.zip](https://github.com/Microsoft/pxt/files/1176066/MAS4.1.2_Projects.zip)
| 1.0 | [Screen Reader-Header Control-Projects]: Proper role is not defined for the 'New Project', 'Import File' and 'Untitled' elements showing under Project dialog box. - **Pre-Requisite:** Turn on Narrator(Win + Ctrl + Enter)
**User Experience:**
Users who depends on Screen Reader will get confuse if Name /Role/Control type property is not defined for any control of the web page.
**Test Environment:**
OS: RS2 Version 1703(OS Build 15063.483)
Platform: Edge.
Tool used:-IE Developer (F12)
**Repro Steps:-**
1. Navigate to https://makecode.microbit.org/acc
2. Navigate to the Microbit section element and select Code control given on it.
3. Navigate to the Projects control lying in the header section on the page and select it.
4. Navigate to various controls lying on the page opened.
5. Verify that proper role is defined for all the elements showing on Project dialog box using F12.
**Actual Result:-**
Proper role is not defined for the 'New Project' ,'Import File' and 'Untitled' elements showing under Project dialog box.
**Expected Result:-**
Proper role like button or link should be defined for the 'New Project' ,'Import File' and 'Untitled' elements showing under Project dialog box.
**MAS Reference -**
https://microsoft.sharepoint.com/teams/msenable/_layouts/15/WopiFrame.aspx?sourcedoc={248054a6-5e68-4771-9e1e-242fb5025730}
**Suggested fix:-**
1- Using aria-label to give an invisible name to element, where name/title property is not used
e.g.
'Close' button can have only ‘X’, to make it more meaningful aria-label attribute is used with ‘Close’ text to help AT’s without showing the content on UI
<div id="box">
This is a pop-up box.
< button aria-label="Close" onclick="document.getElementById('box').style.display='none';" class="closebutton"> X < /button>
</div>
2- Using ARIA role to expose the control type of user interface component
e.g.
< div role="toolbar" tabindex="0" id="customToolbar" >
< img src="img/btn1.gif" role="button" tabindex="-1" alt="Home" id="b1" title="Home" >
3- Using ARIA attributes to expose the element’s state
e.g.
< div role="button" id="customToolbar" aria-checked=”” aria-disabled=”” aria-haspopup=”” aria-expanded=”” aria-selected=”” >
Reference:
https://www.w3.org/TR/WCAG20-TECHS/aria.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA16.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA4.html
https://www.w3.org/TR/WCAG20-TECHS/ARIA5.html
**Please refer the attachment**

[MAS4.1.2_Projects.zip](https://github.com/Microsoft/pxt/files/1176066/MAS4.1.2_Projects.zip)
| non_priority | proper role is not defined for the new project import file and untitled elements showing under project dialog box pre requisite turn on narrator win ctrl enter user experience users who depends on screen reader will get confuse if name role control type property is not defined for any control of the web page test environment os version os build platform edge tool used ie developer repro steps navigate to navigate to the microbit section element and select code control given on it navigate to the projects control lying in the header section on the page and select it navigate to various controls lying on the page opened verify that proper role is defined for all the elements showing on project dialog box using actual result proper role is not defined for the new project import file and untitled elements showing under project dialog box expected result proper role like button or link should be defined for the new project import file and untitled elements showing under project dialog box mas reference suggested fix using aria label to give an invisible name to element where name title property is not used e g close button can have only ‘x’ to make it more meaningful aria label attribute is used with ‘close’ text to help at’s without showing the content on ui this is a pop up box x using aria role to expose the control type of user interface component e g using aria attributes to expose the element’s state e g reference please refer the attachment | 0 |
20,349 | 3,808,510,721 | IssuesEvent | 2016-03-25 15:24:36 | Gapminder/dollar-street-pages | https://api.github.com/repos/Gapminder/dollar-street-pages | closed | Place page: Family portrait icon shouldn't be clickable | bug Iteration 2 Tested | **STR:**
1. Go to {instance}/place?thing=5477537786deda0b00d43be5&place=55d1ff793efe9e00273b0dbb&image=55d20228ff69295c271979ca .
2. Hover by mouse family portrait by mouse.
**Actual result:**
Mouse pointer looks like the icon is clickable.

**Expected result:**
Mouse pointer shouldn't change. | 1.0 | Place page: Family portrait icon shouldn't be clickable - **STR:**
1. Go to {instance}/place?thing=5477537786deda0b00d43be5&place=55d1ff793efe9e00273b0dbb&image=55d20228ff69295c271979ca .
2. Hover by mouse family portrait by mouse.
**Actual result:**
Mouse pointer looks like the icon is clickable.

**Expected result:**
Mouse pointer shouldn't change. | non_priority | place page family portrait icon shouldn t be clickable str go to instance place thing place image hover by mouse family portrait by mouse actual result mouse pointer looks like the icon is clickable expected result mouse pointer shouldn t change | 0 |
144,902 | 13,130,163,638 | IssuesEvent | 2020-08-06 14:57:45 | aces/Loris | https://api.github.com/repos/aces/Loris | closed | ReadTheDocs build failure on LORIS API symlinks | Bug Documentation | **Describe the bug**
ReadTheDocs is failing because it's unable to resolve the following symlinks:
docs/wiki/99\ -\ Developers/LORIS-REST-API-0.0.2.md
docs/wiki/99\ -\ Developers/LORIS-REST-API-0.0.3-dev.md
These were added in https://github.com/aces/Loris/pull/6151
**What did you expect to happen?**
ReadTheDocs and mkdocs should resolve the symlinks and render the documentation.
We may need to modify these to include relative path characters as RTD is served from `docs/` rather than the LORIS root.
**Additional context**
ReadTheDocs build error:
```
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/_build/html
INFO - The following pages exist in the docs directory, but are not included in the "nav" configuration:
- React.README.md
- SQLModelingStandard.md
- deprecated_wiki/About-superuser.md
- deprecated_wiki/CentOS-Imaging-installation-transcript.md
- deprecated_wiki/Code-Customization.md
- deprecated_wiki/Developer's-Instrument-Guide.md
- deprecated_wiki/Getting-the-Release.md
- deprecated_wiki/Guide-to-Loris-React-components.md
- deprecated_wiki/How-to-Code-an-Instrument.md
- deprecated_wiki/How-to-make-a-LORIS-module.md
- deprecated_wiki/Installing-Loris-in-Brief.md
- deprecated_wiki/Instrument-Groups.md
- deprecated_wiki/Instrument-Insertion.md
- deprecated_wiki/Instrument-Scoring.md
- deprecated_wiki/Instrument-Scripts.md
- deprecated_wiki/Instrument-Testing-and-Troubleshooting.md
- deprecated_wiki/LORIS-Dictionary.md
- deprecated_wiki/LORIS-Form.md
- deprecated_wiki/LORIS-Module-Testing.md
- deprecated_wiki/LORIS-Modules.md
- deprecated_wiki/LORIS-scripts-in-the-tools--directory.md
- deprecated_wiki/Notification-system.md
- deprecated_wiki/Open-LORIS.md
- deprecated_wiki/Other-Imaging-Scripts.md
- deprecated_wiki/Reloading-MRI-data-for-mislabelled-session.md
- deprecated_wiki/Updating-your-LORIS.md
- deprecated_wiki/Upgrading-Loris.md
- deprecated_wiki/Using-Google-reCAPTCHA.md
- deprecated_wiki/Working-with-React.md
- deprecated_wiki/XIN-Rules.md
- deprecated_wiki/About/About.md
- deprecated_wiki/Setup/Setup.md
- deprecated_wiki/Setup/Initial Setup/Backups/Backups.md
- deprecated_wiki/Setup/Initial Setup/Behavioural Database/Behavioural-Database.md
- deprecated_wiki/Setup/Initial Setup/Data Querying Tool/Data-Querying-Tool.md
- deprecated_wiki/Setup/Initial Setup/Enable Mail Server/Enable-mail-server.md
- deprecated_wiki/Setup/Initial Setup/Imaging Database/Imaging-Database.md
- deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script.md
- deprecated_wiki/Setup/Initial Setup/LORIS Modules/Candidate Information Page/Candidate-Information-Page.md
- deprecated_wiki/Setup/Initial Setup/LORIS Modules/Candidate Parameters/Candidate-Parameters.md
- deprecated_wiki/Technical/Code-Review-Checklist.md
- deprecated_wiki/Technical/Developer-Workshop-2015-02-13.md
- deprecated_wiki/Technical/Developer-Workshops.md
- deprecated_wiki/Technical/Technical.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/README.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/README.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/Macintosh/README.md
- wiki/01 - STUDY PARAMETERS SETUP/README.md
- wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/README.md
- wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/01 - Identifiers.md
- wiki/_ARCHIVE/README.md
- wiki/_DELETED/README.md
- wiki/_DELETED/API.md
- wiki/_DELETED/Install-Script-for-16.X.md
- wiki/_DELETED/Installing-Loris-(After-Installing-Prerequisites).md
- wiki/_DELETED/LORIS database schema.md
- wiki/_DELETED/LORIS-Setup-Schematic.md
- wiki/_DELETED/Request-Accounts-module.md
- wiki/_DELETED/_Footer.md
- wiki/_DELETED/_Sidebar.md
- wiki/_DELETED/Community/Community.md
- wiki/_DELETED/Community/Development.md
- wiki/_DELETED/Community/Documentation.md
- wiki/_DELETED/Community/Get-in-Touch.md
- wiki/_DELETED/Resources/Instrument Coding Guide/Instrument-Coding-Guide.md
- wiki/_DELETED/Resources/Ubuntu Upgrading/Upgrading-from-Ubuntu-12.04-to-14.04.md
WARNING - Documentation file 'CodingStandards.md' contains a link to 'wiki/99%20-%20Developers/Automated%20Testing.md' which is not found in the documentation files.
WARNING - Documentation file 'deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script.md' contains a link to 'deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script-for-16.X' which is not found in the documentation files.
WARNING - Documentation file 'wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/CentOS/README.md' contains a link to '../README.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/CentOS/README.md' contains a link to '../README.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/03 - Sites.md' contains a link to 'wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/SQL Dictionary.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/99 - Developers/Automated Testing.md' contains a link to '../test/UnitTestGuide.md' which is not found in the documentation files.
ERROR - File not found: wiki/99 - Developers/LORIS-REST-API-0.0.2.md
ERROR - Error reading page 'wiki/99 - Developers/LORIS-REST-API-0.0.2.md': [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/docs/wiki/99 - Developers/LORIS-REST-API-0.0.2.md'
Traceback (most recent call last):
File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/__main__.py", line 202, in <module>
cli()
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/__main__.py", line 163, in build_command
), dirty=not clean)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/commands/build.py", line 274, in build
_populate_page(file.page, config, files, dirty)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/commands/build.py", line 170, in _populate_page
page.read_source(config)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/structure/pages.py", line 129, in read_source
with io.open(self.file.abs_src_path, 'r', encoding='utf-8-sig', errors='strict') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/docs/wiki/99 - Developers/LORIS-REST-API-0.0.2.md'
``` | 1.0 | ReadTheDocs build failure on LORIS API symlinks - **Describe the bug**
ReadTheDocs is failing because it's unable to resolve the following symlinks:
docs/wiki/99\ -\ Developers/LORIS-REST-API-0.0.2.md
docs/wiki/99\ -\ Developers/LORIS-REST-API-0.0.3-dev.md
These were added in https://github.com/aces/Loris/pull/6151
**What did you expect to happen?**
ReadTheDocs and mkdocs should resolve the symlinks and render the documentation.
We may need to modify these to include relative path characters as RTD is served from `docs/` rather than the LORIS root.
**Additional context**
ReadTheDocs build error:
```
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/_build/html
INFO - The following pages exist in the docs directory, but are not included in the "nav" configuration:
- React.README.md
- SQLModelingStandard.md
- deprecated_wiki/About-superuser.md
- deprecated_wiki/CentOS-Imaging-installation-transcript.md
- deprecated_wiki/Code-Customization.md
- deprecated_wiki/Developer's-Instrument-Guide.md
- deprecated_wiki/Getting-the-Release.md
- deprecated_wiki/Guide-to-Loris-React-components.md
- deprecated_wiki/How-to-Code-an-Instrument.md
- deprecated_wiki/How-to-make-a-LORIS-module.md
- deprecated_wiki/Installing-Loris-in-Brief.md
- deprecated_wiki/Instrument-Groups.md
- deprecated_wiki/Instrument-Insertion.md
- deprecated_wiki/Instrument-Scoring.md
- deprecated_wiki/Instrument-Scripts.md
- deprecated_wiki/Instrument-Testing-and-Troubleshooting.md
- deprecated_wiki/LORIS-Dictionary.md
- deprecated_wiki/LORIS-Form.md
- deprecated_wiki/LORIS-Module-Testing.md
- deprecated_wiki/LORIS-Modules.md
- deprecated_wiki/LORIS-scripts-in-the-tools--directory.md
- deprecated_wiki/Notification-system.md
- deprecated_wiki/Open-LORIS.md
- deprecated_wiki/Other-Imaging-Scripts.md
- deprecated_wiki/Reloading-MRI-data-for-mislabelled-session.md
- deprecated_wiki/Updating-your-LORIS.md
- deprecated_wiki/Upgrading-Loris.md
- deprecated_wiki/Using-Google-reCAPTCHA.md
- deprecated_wiki/Working-with-React.md
- deprecated_wiki/XIN-Rules.md
- deprecated_wiki/About/About.md
- deprecated_wiki/Setup/Setup.md
- deprecated_wiki/Setup/Initial Setup/Backups/Backups.md
- deprecated_wiki/Setup/Initial Setup/Behavioural Database/Behavioural-Database.md
- deprecated_wiki/Setup/Initial Setup/Data Querying Tool/Data-Querying-Tool.md
- deprecated_wiki/Setup/Initial Setup/Enable Mail Server/Enable-mail-server.md
- deprecated_wiki/Setup/Initial Setup/Imaging Database/Imaging-Database.md
- deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script.md
- deprecated_wiki/Setup/Initial Setup/LORIS Modules/Candidate Information Page/Candidate-Information-Page.md
- deprecated_wiki/Setup/Initial Setup/LORIS Modules/Candidate Parameters/Candidate-Parameters.md
- deprecated_wiki/Technical/Code-Review-Checklist.md
- deprecated_wiki/Technical/Developer-Workshop-2015-02-13.md
- deprecated_wiki/Technical/Developer-Workshops.md
- deprecated_wiki/Technical/Technical.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/README.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/README.md
- wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/Macintosh/README.md
- wiki/01 - STUDY PARAMETERS SETUP/README.md
- wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/README.md
- wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/01 - Identifiers.md
- wiki/_ARCHIVE/README.md
- wiki/_DELETED/README.md
- wiki/_DELETED/API.md
- wiki/_DELETED/Install-Script-for-16.X.md
- wiki/_DELETED/Installing-Loris-(After-Installing-Prerequisites).md
- wiki/_DELETED/LORIS database schema.md
- wiki/_DELETED/LORIS-Setup-Schematic.md
- wiki/_DELETED/Request-Accounts-module.md
- wiki/_DELETED/_Footer.md
- wiki/_DELETED/_Sidebar.md
- wiki/_DELETED/Community/Community.md
- wiki/_DELETED/Community/Development.md
- wiki/_DELETED/Community/Documentation.md
- wiki/_DELETED/Community/Get-in-Touch.md
- wiki/_DELETED/Resources/Instrument Coding Guide/Instrument-Coding-Guide.md
- wiki/_DELETED/Resources/Ubuntu Upgrading/Upgrading-from-Ubuntu-12.04-to-14.04.md
WARNING - Documentation file 'CodingStandards.md' contains a link to 'wiki/99%20-%20Developers/Automated%20Testing.md' which is not found in the documentation files.
WARNING - Documentation file 'deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script.md' contains a link to 'deprecated_wiki/Setup/Initial Setup/Install Script/Install-Script-for-16.X' which is not found in the documentation files.
WARNING - Documentation file 'wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/CentOS/README.md' contains a link to '../README.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/00 - SERVER INSTALL AND CONFIGURATION/01 - LORIS Install/CentOS/README.md' contains a link to '../README.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/03 - Sites.md' contains a link to 'wiki/01 - STUDY PARAMETERS SETUP/01 - Study Variables/SQL Dictionary.md' which is not found in the documentation files.
WARNING - Documentation file 'wiki/99 - Developers/Automated Testing.md' contains a link to '../test/UnitTestGuide.md' which is not found in the documentation files.
ERROR - File not found: wiki/99 - Developers/LORIS-REST-API-0.0.2.md
ERROR - Error reading page 'wiki/99 - Developers/LORIS-REST-API-0.0.2.md': [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/docs/wiki/99 - Developers/LORIS-REST-API-0.0.2.md'
Traceback (most recent call last):
File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/docs/.pyenv/versions/3.7.3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/__main__.py", line 202, in <module>
cli()
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/__main__.py", line 163, in build_command
), dirty=not clean)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/commands/build.py", line 274, in build
_populate_page(file.page, config, files, dirty)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/commands/build.py", line 170, in _populate_page
page.read_source(config)
File "/home/docs/checkouts/readthedocs.org/user_builds/acesloris/envs/latest/lib/python3.7/site-packages/mkdocs/structure/pages.py", line 129, in read_source
with io.open(self.file.abs_src_path, 'r', encoding='utf-8-sig', errors='strict') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/acesloris/checkouts/latest/docs/wiki/99 - Developers/LORIS-REST-API-0.0.2.md'
``` | non_priority | readthedocs build failure on loris api symlinks describe the bug readthedocs is failing because it s unable to resolve the following symlinks docs wiki developers loris rest api md docs wiki developers loris rest api dev md these were added in what did you expect to happen readthedocs and mkdocs should resolve the symlinks and render the documentation we may need to modify these to include relative path characters as rtd is served from docs rather than the loris root additional context readthedocs build error info cleaning site directory info building documentation to directory home docs checkouts readthedocs org user builds acesloris checkouts latest build html info the following pages exist in the docs directory but are not included in the nav configuration react readme md sqlmodelingstandard md deprecated wiki about superuser md deprecated wiki centos imaging installation transcript md deprecated wiki code customization md deprecated wiki developer s instrument guide md deprecated wiki getting the release md deprecated wiki guide to loris react components md deprecated wiki how to code an instrument md deprecated wiki how to make a loris module md deprecated wiki installing loris in brief md deprecated wiki instrument groups md deprecated wiki instrument insertion md deprecated wiki instrument scoring md deprecated wiki instrument scripts md deprecated wiki instrument testing and troubleshooting md deprecated wiki loris dictionary md deprecated wiki loris form md deprecated wiki loris module testing md deprecated wiki loris modules md deprecated wiki loris scripts in the tools directory md deprecated wiki notification system md deprecated wiki open loris md deprecated wiki other imaging scripts md deprecated wiki reloading mri data for mislabelled session md deprecated wiki updating your loris md deprecated wiki upgrading loris md deprecated wiki using google recaptcha md deprecated wiki working with react md deprecated wiki xin rules md 
deprecated wiki about about md deprecated wiki setup setup md deprecated wiki setup initial setup backups backups md deprecated wiki setup initial setup behavioural database behavioural database md deprecated wiki setup initial setup data querying tool data querying tool md deprecated wiki setup initial setup enable mail server enable mail server md deprecated wiki setup initial setup imaging database imaging database md deprecated wiki setup initial setup install script install script md deprecated wiki setup initial setup loris modules candidate information page candidate information page md deprecated wiki setup initial setup loris modules candidate parameters candidate parameters md deprecated wiki technical code review checklist md deprecated wiki technical developer workshop md deprecated wiki technical developer workshops md deprecated wiki technical technical md wiki server install and configuration readme md wiki server install and configuration loris install readme md wiki server install and configuration loris install macintosh readme md wiki study parameters setup readme md wiki study parameters setup study variables readme md wiki study parameters setup study variables identifiers md wiki archive readme md wiki deleted readme md wiki deleted api md wiki deleted install script for x md wiki deleted installing loris after installing prerequisites md wiki deleted loris database schema md wiki deleted loris setup schematic md wiki deleted request accounts module md wiki deleted footer md wiki deleted sidebar md wiki deleted community community md wiki deleted community development md wiki deleted community documentation md wiki deleted community get in touch md wiki deleted resources instrument coding guide instrument coding guide md wiki deleted resources ubuntu upgrading upgrading from ubuntu to md warning documentation file codingstandards md contains a link to wiki automated md which is not found in the documentation files warning documentation file 
deprecated wiki setup initial setup install script install script md contains a link to deprecated wiki setup initial setup install script install script for x which is not found in the documentation files warning documentation file wiki server install and configuration loris install centos readme md contains a link to readme md which is not found in the documentation files warning documentation file wiki server install and configuration loris install centos readme md contains a link to readme md which is not found in the documentation files warning documentation file wiki study parameters setup study variables sites md contains a link to wiki study parameters setup study variables sql dictionary md which is not found in the documentation files warning documentation file wiki developers automated testing md contains a link to test unittestguide md which is not found in the documentation files error file not found wiki developers loris rest api md error error reading page wiki developers loris rest api md no such file or directory home docs checkouts readthedocs org user builds acesloris checkouts latest docs wiki developers loris rest api md traceback most recent call last file home docs pyenv versions lib runpy py line in run module as main main mod spec file home docs pyenv versions lib runpy py line in run code exec code run globals file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages mkdocs main py line in cli file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages click core py line in call return self main args kwargs file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages click core py line in main rv self invoke ctx file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file home docs checkouts readthedocs org user builds 
acesloris envs latest lib site packages click core py line in invoke return ctx invoke self callback ctx params file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages click core py line in invoke return callback args kwargs file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages mkdocs main py line in build command dirty not clean file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages mkdocs commands build py line in build populate page file page config files dirty file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages mkdocs commands build py line in populate page page read source config file home docs checkouts readthedocs org user builds acesloris envs latest lib site packages mkdocs structure pages py line in read source with io open self file abs src path r encoding utf sig errors strict as f filenotfounderror no such file or directory home docs checkouts readthedocs org user builds acesloris checkouts latest docs wiki developers loris rest api md | 0 |
349,861 | 31,836,949,464 | IssuesEvent | 2023-09-14 14:03:24 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Serverless Observability API Integration Tests.x-pack/test_serverless/api_integration/test_suites/common/alerting/rules·ts - serverless common API Alerting APIs Alerting rules should throttle alerts when appropriate | failed-test Team:Observability Team:ResponseOps | A test failed on a tracked branch
```
Error: expected undefined to not equal undefined
at Assertion.assert (expect.js:100:11)
at Assertion.apply (expect.js:227:8)
at Assertion.be (expect.js:69:22)
at Context.<anonymous> (rules.ts:389:29)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-serverless/builds/1944#0189fb35-5c9f-491a-b25d-8cdf1353a406)
<!-- kibanaCiData = {"failed-test":{"test.class":"Serverless Observability API Integration Tests.x-pack/test_serverless/api_integration/test_suites/common/alerting/rules·ts","test.name":"serverless common API Alerting APIs Alerting rules should throttle alerts when appropriate","test.failCount":21}} --> | 1.0 | Failing test: Serverless Observability API Integration Tests.x-pack/test_serverless/api_integration/test_suites/common/alerting/rules·ts - serverless common API Alerting APIs Alerting rules should throttle alerts when appropriate - A test failed on a tracked branch
```
Error: expected undefined to not equal undefined
at Assertion.assert (expect.js:100:11)
at Assertion.apply (expect.js:227:8)
at Assertion.be (expect.js:69:22)
at Context.<anonymous> (rules.ts:389:29)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-serverless/builds/1944#0189fb35-5c9f-491a-b25d-8cdf1353a406)
<!-- kibanaCiData = {"failed-test":{"test.class":"Serverless Observability API Integration Tests.x-pack/test_serverless/api_integration/test_suites/common/alerting/rules·ts","test.name":"serverless common API Alerting APIs Alerting rules should throttle alerts when appropriate","test.failCount":21}} --> | non_priority | failing test serverless observability api integration tests x pack test serverless api integration test suites common alerting rules·ts serverless common api alerting apis alerting rules should throttle alerts when appropriate a test failed on a tracked branch error expected undefined to not equal undefined at assertion assert expect js at assertion apply expect js at assertion be expect js at context rules ts at processticksandrejections node internal process task queues at object apply wrap function js first failure | 0 |
219,054 | 16,815,732,906 | IssuesEvent | 2021-06-17 07:07:46 | gap-packages/recog | https://api.github.com/repos/gap-packages/recog | opened | Document hints | documentation | We have several hints that need to be documented:
- `FindHomMethodsMatrix.Scalar`
- `FindHomMethodsMatrix.BlockScalar`
Raised by @fingolfin in #263. | 1.0 | Document hints - We have several hints that need to be documented:
- `FindHomMethodsMatrix.Scalar`
- `FindHomMethodsMatrix.BlockScalar`
Raised by @fingolfin in #263. | non_priority | document hints we have several hints that need to be documented findhommethodsmatrix scalar findhommethodsmatrix blockscalar raised by fingolfin in | 0 |
13,173 | 2,735,128,646 | IssuesEvent | 2015-04-18 03:31:02 | gizmoboard/gizmoboard | https://api.github.com/repos/gizmoboard/gizmoboard | closed | Fix formatting for images on README | defect documentation | The README doesn't format correctly on NPM with spaces in the <div> blocks. | 1.0 | Fix formatting for images on README - The README doesn't format correctly on NPM with spaces in the <div> blocks. | non_priority | fix formatting for images on readme the readme doesn t format correctly on npm with spaces in the blocks | 0 |
360,155 | 25,276,786,334 | IssuesEvent | 2022-11-16 13:12:23 | matplotlib/matplotlib | https://api.github.com/repos/matplotlib/matplotlib | opened | [Doc]: Oscilloscope demo x-axis offset | Documentation topic: animation Good first issue | ### Documentation Link
https://matplotlib.org/devdocs/gallery/animation/strip_chart.html
### Problem
The demo starts off by displaying the range 0 to 2, then 2 to 4 and so on, but after a while an offset is introduced, which I assume is unwanted. For example, the 20th updates goes to 40.25 (probably a bit more) rather than 40.
No big deal and probably just a one-off error (or a floating-point effect), but I was just about to point a colleague to this and realized that something was not as one could hope for.
### Suggested improvement
Fix the code so that it always increment the range by 2 (or `t` rather). One may think of skipping the `dt` and instead use number of points per display (`t/dt`). | 1.0 | [Doc]: Oscilloscope demo x-axis offset - ### Documentation Link
https://matplotlib.org/devdocs/gallery/animation/strip_chart.html
### Problem
The demo starts off by displaying the range 0 to 2, then 2 to 4 and so on, but after a while an offset is introduced, which I assume is unwanted. For example, the 20th updates goes to 40.25 (probably a bit more) rather than 40.
No big deal and probably just a one-off error (or a floating-point effect), but I was just about to point a colleague to this and realized that something was not as one could hope for.
### Suggested improvement
Fix the code so that it always increment the range by 2 (or `t` rather). One may think of skipping the `dt` and instead use number of points per display (`t/dt`). | non_priority | oscilloscope demo x axis offset documentation link problem the demo starts off by displaying the range to then to and so on but after a while an offset is introduced which i assume is unwanted for example the updates goes to probably a bit more rather than no big deal and probably just a one off error or a floating point effect but i was just about to point a colleague to this and realized that something was not as one could hope for suggested improvement fix the code so that it always increment the range by or t rather one may think of skipping the dt and instead use number of points per display t dt | 0 |
309,038 | 26,648,631,422 | IssuesEvent | 2023-01-25 12:00:07 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | reopened | Fix non_linear_activation_functions.test_torch_threshold | PyTorch Frontend Sub Task Failing Test | | | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3787482025/jobs/6439326637" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3780436677/jobs/6426522537" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3788736379/jobs/6441820526" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
| 1.0 | Fix non_linear_activation_functions.test_torch_threshold - | | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3787482025/jobs/6439326637" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3780436677/jobs/6426522537" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3788736379/jobs/6441820526" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
| non_priority | fix non linear activation functions test torch threshold torch img src numpy img src tensorflow img src | 0 |
295,213 | 22,201,720,443 | IssuesEvent | 2022-06-07 11:49:24 | ices-tools-dev/esas | https://api.github.com/repos/ices-tools-dev/esas | closed | Review observations: euring_species_code | 1.Documentation 3.Database vocab: update ESAS | Updated todo:
- [x] Rename field to `species_code` because one should be able to submit both EURING and APHIA ids
- [x] Decide how to differentiate between EURING and APHIA ids (both are integers). Maybe with a prefix `euring:` With extra field `SpeciesCodeType`
- [x] Remove non biological codes: 40fecb4
- [x] Understand use of `~`: it means **and** not **or** and can thus not be used for species_code
- [x] ~~Use `~` for combinations: 2645b2e~~
- [ ] Have reference for marine mammals (not in EURING?)
- [x] ~~Add field `vernacular_name` (see #46)~~
- [ ] Request necessary missing codes to be added to EURING
- [ ] Request necessary missing APHIA codes to be added to WORMS
---
`euring_species_code` is where one provides **scientific name** information. ESAS has relied on EURING codes for these, which - in contrast with many other taxonomic codes (like `aphia_id`) - allow to record observations where it was hard to identify the species. These uncertain identifications can be more precise that a shared taxonomic level:
- `unidentified wader`
- `unidentified small gull`
- `Uria aalge / Alca torda`: more precise than "Alcidae"
It might not be possible to translate all of these to `aphia_id`.
Through its history, ESAS has also invented some codes that do not exist in EURING (see both columns on [this page](https://ices-tools-dev.github.io/esas/species/)), especially non-birds and objects.
My suggestion would be that data providers submit the `scientific_name` (so we don't loose information) in addition to an `aphia_id` (where possible) | 1.0 | Review observations: euring_species_code - Updated todo:
- [x] Rename field to `species_code` because one should be able to submit both EURING and APHIA ids
- [x] Decide how to differentiate between EURING and APHIA ids (both are integers). Maybe with a prefix `euring:` With extra field `SpeciesCodeType`
- [x] Remove non biological codes: 40fecb4
- [x] Understand use of `~`: it means **and** not **or** and can thus not be used for species_code
- [x] ~~Use `~` for combinations: 2645b2e~~
- [ ] Have reference for marine mammals (not in EURING?)
- [x] ~~Add field `vernacular_name` (see #46)~~
- [ ] Request necessary missing codes to be added to EURING
- [ ] Request necessary missing APHIA codes to be added to WORMS
---
`euring_species_code` is where one provides **scientific name** information. ESAS has relied on EURING codes for these, which - in contrast with many other taxonomic codes (like `aphia_id`) - allow to record observations where it was hard to identify the species. These uncertain identifications can be more precise that a shared taxonomic level:
- `unidentified wader`
- `unidentified small gull`
- `Uria aalge / Alca torda`: more precise than "Alcidae"
It might not be possible to translate all of these to `aphia_id`.
Through its history, ESAS has also invented some codes that do not exist in EURING (see both columns on [this page](https://ices-tools-dev.github.io/esas/species/)), especially non-birds and objects.
My suggestion would be that data providers submit the `scientific_name` (so we don't loose information) in addition to an `aphia_id` (where possible) | non_priority | review observations euring species code updated todo rename field to species code because one should be able to submit both euring and aphia ids decide how to differentiate between euring and aphia ids both are integers maybe with a prefix euring with extra field speciescodetype remove non biological codes understand use of it means and not or and can thus not be used for species code use for combinations have reference for marine mammals not in euring add field vernacular name see request necessary missing codes to be added to euring request necessary missing aphia codes to be added to worms euring species code is where one provides scientific name information esas has relied on euring codes for these which in contrast with many other taxonomic codes like aphia id allow to record observations where it was hard to identify the species these uncertain identifications can be more precise that a shared taxonomic level unidentified wader unidentified small gull uria aalge alca torda more precise than alcidae it might not be possible to translate all of these to aphia id through its history esas has also invented some codes that do not exist in euring see both columns on especially non birds and objects my suggestion would be that data providers submit the scientific name so we don t loose information in addition to an aphia id where possible | 0 |
19,631 | 13,338,525,291 | IssuesEvent | 2020-08-28 11:12:26 | blockframes/blockframes | https://api.github.com/repos/blockframes/blockframes | opened | update notion article to explain prepareForTesting process | Infrastructure | https://www.notion.so/cascade8/Preparing-your-Environment-e3c184db24d447c496c3241d4f16bc94
I will update that to explain the entire prepareForTesting process for devs | 1.0 | update notion article to explain prepareForTesting process - https://www.notion.so/cascade8/Preparing-your-Environment-e3c184db24d447c496c3241d4f16bc94
I will update that to explain the entire prepareForTesting process for devs | non_priority | update notion article to explain preparefortesting process i will update that to explain the entire preparefortesting process for devs | 0 |
254,375 | 19,211,832,144 | IssuesEvent | 2021-12-07 03:26:14 | theautomelon/text-based-game | https://api.github.com/repos/theautomelon/text-based-game | opened | add wiki documentation | documentation | add documentation to the wiki tab to document the features of the project and how we have organized it | 1.0 | add wiki documentation - add documentation to the wiki tab to document the features of the project and how we have organized it | non_priority | add wiki documentation add documentation to the wiki tab to document the features of the project and how we have organized it | 0 |
272,817 | 23,705,606,590 | IssuesEvent | 2022-08-30 00:33:25 | microsoft/win32metadata | https://api.github.com/repos/microsoft/win32metadata | closed | Unable to parse certain attributes in v28 | bug blocking rust needs test | My parser is struggling with some attributes in v28. Seems ILSpy is also struggling.
```C#
[Guid(/*Could not decode attribute arguments.*/)]
public static Guid MEDIASUBTYPE_P208;
``` | 1.0 | Unable to parse certain attributes in v28 - My parser is struggling with some attributes in v28. Seems ILSpy is also struggling.
```C#
[Guid(/*Could not decode attribute arguments.*/)]
public static Guid MEDIASUBTYPE_P208;
``` | non_priority | unable to parse certain attributes in my parser is struggling with some attributes in seems ilspy is also struggling c public static guid mediasubtype | 0 |
23,022 | 3,988,022,859 | IssuesEvent | 2016-05-09 08:00:31 | bigchaindb/bigchaindb | https://api.github.com/repos/bigchaindb/bigchaindb | closed | Write tool to bulk-upload a set of realistic transactions into the backlog table of a test cluster | testing | We, or some other BigchainDB user, might have files of realistic-looking test transactions and may want to dump them, in bulk, into the backlog table of a test cluster.
Those files of might come from a transaction-generator tool (see Issue #111 and Issue #112) or from another source.
This may not be as easy as it sounds: one might want to dump transactions _as fast as possible_ into the backlog table, which will mean using a RethinkDB proxy or some other mechanism to help balance the incoming traffic to each node in the cluster (or at least the nodes with the primary of each shard). | 1.0 | Write tool to bulk-upload a set of realistic transactions into the backlog table of a test cluster - We, or some other BigchainDB user, might have files of realistic-looking test transactions and may want to dump them, in bulk, into the backlog table of a test cluster.
Those files of might come from a transaction-generator tool (see Issue #111 and Issue #112) or from another source.
This may not be as easy as it sounds: one might want to dump transactions _as fast as possible_ into the backlog table, which will mean using a RethinkDB proxy or some other mechanism to help balance the incoming traffic to each node in the cluster (or at least the nodes with the primary of each shard). | non_priority | write tool to bulk upload a set of realistic transactions into the backlog table of a test cluster we or some other bigchaindb user might have files of realistic looking test transactions and may want to dump them in bulk into the backlog table of a test cluster those files of might come from a transaction generator tool see issue and issue or from another source this may not be as easy as it sounds one might want to dump transactions as fast as possible into the backlog table which will mean using a rethinkdb proxy or some other mechanism to help balance the incoming traffic to each node in the cluster or at least the nodes with the primary of each shard | 0 |
106,403 | 13,283,482,486 | IssuesEvent | 2020-08-24 03:22:15 | oppia/oppia | https://api.github.com/repos/oppia/oppia | closed | Send automatic email reminders to creators who have started but not finished creating explorations | full-stack needs design doc talk-to: @prasanna08 | As a creator who abandoned the exploration creation process before publishing my exploration, I want to receive an email notification encouraging me to keep going. This email should include a link that takes me to the exploration creator page for that exploration.
We should be able to configure how much time passes before such an email is sent, as well as having the option to send more than one email for the same exploration.
The exact text of the email will be written later, so please feel free to use placeholder text for now. | 1.0 | Send automatic email reminders to creators who have started but not finished creating explorations - As a creator who abandoned the exploration creation process before publishing my exploration, I want to receive an email notification encouraging me to keep going. This email should include a link that takes me to the exploration creator page for that exploration.
We should be able to configure how much time passes before such an email is sent, as well as having the option to send more than one email for the same exploration.
The exact text of the email will be written later, so please feel free to use placeholder text for now. | non_priority | send automatic email reminders to creators who have started but not finished creating explorations as a creator who abandoned the exploration creation process before publishing my exploration i want to receive an email notification encouraging me to keep going this email should include a link that takes me to the exploration creator page for that exploration we should be able to configure how much time passes before such an email is sent as well as having the option to send more than one email for the same exploration the exact text of the email will be written later so please feel free to use placeholder text for now | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.