| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19-19) | repo (stringlengths, 4-112) | repo_url (stringlengths, 33-141) | action (stringclasses, 3 values) | title (stringlengths, 1-1.02k) | labels (stringlengths, 4-1.54k) | body (stringlengths, 1-262k) | index (stringclasses, 17 values) | text_combine (stringlengths, 95-262k) | label (stringclasses, 2 values) | text (stringlengths, 96-252k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
76,552
| 26,485,877,213
|
IssuesEvent
|
2023-01-17 17:58:34
|
idaholab/HERON
|
https://api.github.com/repos/idaholab/HERON
|
opened
|
[DEFECT] Static Histories Does Not Work Fully in Debug Dispatch Mode.
|
defect
|
--------
Defect Description
--------
**Describe the defect**
##### What did you expect to see happen?
To be able to use static histories and debug mode
##### What did you see instead?
There are some errors concerning _ROM_Cluster not being found
We must also handle reducing the project time when the CSV contains more years of data; for example, setting macro_steps = 1 while keeping 20 years of data in the CSV causes an error.
##### Do you have a suggested fix for the development team?
**Describe how to Reproduce**
Steps to reproduce the behavior:
1.
2.
3.
4.
**Screenshots and Input Files**
Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
**Platform (please complete the following information):**
- OS: [e.g. iOS]
- Version: [e.g. 22]
- Dependencies Installation: [CONDA or PIP]
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [ ] 1. Is it tagged with a type: defect or task?
- [ ] 2. Is it tagged with a priority: critical, normal or minor?
- [ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
1.0
|
[DEFECT] Static Histories Does Not Work Fully in Debug Dispatch Mode. - --------
Defect Description
--------
**Describe the defect**
##### What did you expect to see happen?
To be able to use static histories and debug mode
##### What did you see instead?
There are some errors concerning _ROM_Cluster not being found
We must also handle reducing the project time when the CSV contains more years of data; for example, setting macro_steps = 1 while keeping 20 years of data in the CSV causes an error.
##### Do you have a suggested fix for the development team?
**Describe how to Reproduce**
Steps to reproduce the behavior:
1.
2.
3.
4.
**Screenshots and Input Files**
Please attach the input file(s) that generate this error. The simpler the input, the faster we can find the issue.
**Platform (please complete the following information):**
- OS: [e.g. iOS]
- Version: [e.g. 22]
- Dependencies Installation: [CONDA or PIP]
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [ ] 1. Is it tagged with a type: defect or task?
- [ ] 2. Is it tagged with a priority: critical, normal or minor?
- [ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
- [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
|
non_test
|
static histories does not work fully in debug dispatch mode defect description describe the defect what did you expect to see happen to be able to use static histories and debug mode what did you see instead there are some errors concerning rom cluster not being found also we must deal with reducing the project time despite the csv containing more years of data for example setting macro steps and keeping years of data in your csv causes an error do you have a suggested fix for the development team describe how to reproduce steps to reproduce the behavior screenshots and input files please attach the input file s that generate this error the simpler the input the faster we can find the issue platform please complete the following information os version dependencies installation for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
| 0
|
33,212
| 4,818,575,574
|
IssuesEvent
|
2016-11-04 16:44:07
|
mapbox/mapbox-gl-native
|
https://api.github.com/repos/mapbox/mapbox-gl-native
|
closed
|
icon alignment differences in tests
|
tests
|
Moved from https://github.com/mapbox/mapbox-gl-js/issues/1569 now that this happens only in native.
icon-offset/literal:
http://mapbox.s3.amazonaws.com/mapbox-gl-native/render-tests/8219.6/index.html

|
1.0
|
icon alignment differences in tests - Moved from https://github.com/mapbox/mapbox-gl-js/issues/1569 now that this happens only in native.
icon-offset/literal:
http://mapbox.s3.amazonaws.com/mapbox-gl-native/render-tests/8219.6/index.html

|
test
|
icon alignment differences in tests moved from now that this happens only in native icon offset literal
| 1
|
28,451
| 2,702,711,624
|
IssuesEvent
|
2015-04-06 11:31:03
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
Custom Org Admin Page: Change default value for custom styling to our standard main nav style
|
Custom org page Priority-Medium
|
Use this less code as default:
@hdxLogoUrl: "@{imagePath}/homepage-new/logo-beta.svg";
@headerUserBackgroundColor: @darkGrayColor;
@headerNavBackgroundColor: @whiteColor;
@headerNavBorderColor: @lightGrayColor;
@headerNavSearchBorderColor: @grayColor;
@toolbarBackgroundColor: @extraLightGrayColor;
Alternatively, we may want to hide this field since the decision was made to not allow customization of the main nav.
(do not remove the functionality :) )
|
1.0
|
Custom Org Admin Page: Change default value for custom styling to our standard main nav style - Use this less code as default:
@hdxLogoUrl: "@{imagePath}/homepage-new/logo-beta.svg";
@headerUserBackgroundColor: @darkGrayColor;
@headerNavBackgroundColor: @whiteColor;
@headerNavBorderColor: @lightGrayColor;
@headerNavSearchBorderColor: @grayColor;
@toolbarBackgroundColor: @extraLightGrayColor;
Alternatively, we may want to hide this field since the decision was made to not allow customization of the main nav.
(do not remove the functionality :) )
|
non_test
|
custom org admin page change default value for custom styling to our standard main nav style use this less code as default hdxlogourl imagepath homepage new logo beta svg headeruserbackgroundcolor darkgraycolor headernavbackgroundcolor whitecolor headernavbordercolor lightgraycolor headernavsearchbordercolor graycolor toolbarbackgroundcolor extralightgraycolor alternatively we may want to hide this field since the decision was made to not allow customization of the main nav do not remove the functionality
| 0
|
344,597
| 30,751,816,649
|
IssuesEvent
|
2023-07-28 20:03:09
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
[Backport v2.7.6] Node gets kicked out of Cluster after snapshots are restored.
|
kind/bug internal [zube]: To Test QA/S area/provisioning-v2 team/area2 regression JIRA
|
This is a backport issue for https://github.com/rancher/rancher/issues/42201, automatically created via rancherbot by @Sahota1225
Original issue description:
<!--------- For bugs and general issues --------->
**Setup**
Rancher version: 2.7.5
Downstream cluster: Custom cluster
Nodes: 3
CNI: Cilium
Kubernetes version: v1.25.7+rke2r1
**Describe the bug**
When restoring a snapshot on a custom cluster a node gets deleted from the cluster.
**To Reproduce**
1. Deploy a fresh RKE2 custom cluster
2. Take a snapshot
3. Restore snapshot
4. Repeat steps 2-3 until the bug is hit (usually 2 tries).
**Result**
A worker node gets deleted from the cluster.
**Expected Result**
All nodes remain in the cluster and the restore occurs properly.
**Screenshots**
<img width="1100" alt="image" src="https://github.com/rancher/dashboard/assets/136753565/3f0a0484-0733-4582-b47f-ede6da8053a7">
**Additional context**
Tried to reproduce this on 2.7.4 but was unable to do so after 5-10 restores
SURE-6669
|
1.0
|
[Backport v2.7.6] Node gets kicked out of Cluster after snapshots are restored. - This is a backport issue for https://github.com/rancher/rancher/issues/42201, automatically created via rancherbot by @Sahota1225
Original issue description:
<!--------- For bugs and general issues --------->
**Setup**
Rancher version: 2.7.5
Downstream cluster: Custom cluster
Nodes: 3
CNI: Cilium
Kubernetes version: v1.25.7+rke2r1
**Describe the bug**
When restoring a snapshot on a custom cluster a node gets deleted from the cluster.
**To Reproduce**
1. Deploy a fresh RKE2 custom cluster
2. Take a snapshot
3. Restore snapshot
4. Repeat steps 2-3 until the bug is hit (usually 2 tries).
**Result**
A worker node gets deleted from the cluster.
**Expected Result**
All nodes remain in the cluster and the restore occurs properly.
**Screenshots**
<img width="1100" alt="image" src="https://github.com/rancher/dashboard/assets/136753565/3f0a0484-0733-4582-b47f-ede6da8053a7">
**Additional context**
Tried to reproduce this on 2.7.4 but was unable to do so after 5-10 restores
SURE-6669
|
test
|
node gets kicked out of cluster after snapshots are restored this is a backport issue for automatically created via rancherbot by original issue description setup rancher version downstream cluster custom cluster nodes cni cilium kubernetes version describe the bug when restoring a snapshot on a custom cluster a node gets deleted from the cluster to reproduce deploy a fresh custom cluster take a snapshot restore snapshot repeat steps until the bug is hit usually tries result a worker node gets deleted from the cluster expected result all nodes remain in the cluster and the restore occurs properly screenshots img width alt image src additional context tried to reproduce this on but was unable to do so after restores sure
| 1
|
208,172
| 7,136,419,281
|
IssuesEvent
|
2018-01-23 07:00:49
|
wso2/testgrid
|
https://api.github.com/repos/wso2/testgrid
|
closed
|
Fix undefined error when navigating to test log view in web app
|
Priority/High Severity/Major Type/Bug
|
**Description:**
When navigating to the web app the following error is displayed.
```
react-dom.production.min.js:164 TypeError: Cannot read property 'infraParameters' of undefined
at t.value (TestRunView.js:116)
at l (react-dom.production.min.js:130)
at beginWork (react-dom.production.min.js:133)
at o (react-dom.production.min.js:161)
at s (react-dom.production.min.js:161)
at a (react-dom.production.min.js:162)
at C (react-dom.production.min.js:169)
at w (react-dom.production.min.js:168)
at p (react-dom.production.min.js:167)
at f (react-dom.production.min.js:165)
l @ react-dom.production.min.js:164
TestRunView.js:116 Uncaught (in promise) TypeError: Cannot read property 'infraParameters' of undefined
at t.value (TestRunView.js:116)
at l (react-dom.production.min.js:130)
at beginWork (react-dom.production.min.js:133)
at o (react-dom.production.min.js:161)
at s (react-dom.production.min.js:161)
at a (react-dom.production.min.js:162)
at C (react-dom.production.min.js:169)
at w (react-dom.production.min.js:168)
at p (react-dom.production.min.js:167)
at f (react-dom.production.min.js:165)
Failed to load resource: the server responded with a status of 404 ()
```
|
1.0
|
Fix undefined error when navigating to test log view in web app - **Description:**
When navigating to the web app the following error is displayed.
```
react-dom.production.min.js:164 TypeError: Cannot read property 'infraParameters' of undefined
at t.value (TestRunView.js:116)
at l (react-dom.production.min.js:130)
at beginWork (react-dom.production.min.js:133)
at o (react-dom.production.min.js:161)
at s (react-dom.production.min.js:161)
at a (react-dom.production.min.js:162)
at C (react-dom.production.min.js:169)
at w (react-dom.production.min.js:168)
at p (react-dom.production.min.js:167)
at f (react-dom.production.min.js:165)
l @ react-dom.production.min.js:164
TestRunView.js:116 Uncaught (in promise) TypeError: Cannot read property 'infraParameters' of undefined
at t.value (TestRunView.js:116)
at l (react-dom.production.min.js:130)
at beginWork (react-dom.production.min.js:133)
at o (react-dom.production.min.js:161)
at s (react-dom.production.min.js:161)
at a (react-dom.production.min.js:162)
at C (react-dom.production.min.js:169)
at w (react-dom.production.min.js:168)
at p (react-dom.production.min.js:167)
at f (react-dom.production.min.js:165)
Failed to load resource: the server responded with a status of 404 ()
```
|
non_test
|
fix undefined error when navigating to test log view in web app description when navigating to the web app the following error is displayed react dom production min js typeerror cannot read property infraparameters of undefined at t value testrunview js at l react dom production min js at beginwork react dom production min js at o react dom production min js at s react dom production min js at a react dom production min js at c react dom production min js at w react dom production min js at p react dom production min js at f react dom production min js l react dom production min js testrunview js uncaught in promise typeerror cannot read property infraparameters of undefined at t value testrunview js at l react dom production min js at beginwork react dom production min js at o react dom production min js at s react dom production min js at a react dom production min js at c react dom production min js at w react dom production min js at p react dom production min js at f react dom production min js failed to load resource the server responded with a status of
| 0
|
62,665
| 3,192,939,217
|
IssuesEvent
|
2015-09-30 00:18:43
|
fusioninventory/fusioninventory-for-glpi
|
https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi
|
closed
|
Use device template when create device
|
Component: For junior contributor Priority: Normal Status: Closed Tracker: Feature
|
---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 985, http://forge.fusioninventory.org/issues/985
Original Date: 2011-06-27
---
Example for computer, create a computer with a template.
Manage it in rules import equipment:
Add action : "Template for new item" "is" "template xxxx"
When only device is created, we can use the template
|
1.0
|
Use device template when create device - ---
Author Name: **David Durieux** (@ddurieux)
Original Redmine Issue: 985, http://forge.fusioninventory.org/issues/985
Original Date: 2011-06-27
---
Example for computer, create a computer with a template.
Manage it in rules import equipment:
Add action : "Template for new item" "is" "template xxxx"
When only device is created, we can use the template
|
non_test
|
use device template when create device author name david durieux ddurieux original redmine issue original date example for computer create a computer with a template manage it in rules import equipment add action template for new item is template xxxx when only device is created we can use the template
| 0
|
98,108
| 8,674,304,094
|
IssuesEvent
|
2018-11-30 07:00:56
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
reopened
|
FXLabs Testing 30 : ApiV1RunsIdTestSuiteResponsesGetPathParamIdMysqlSqlInjectionTimebound
|
FXLabs Testing 30
|
Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YzgwMDExZjItZjU1NS00ZmFmLWE5ZmUtOTk1NWY2YjJlYTc4; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:46:17 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs//test-suite-responses
Request :
Response :
{
"timestamp" : "2018-11-30T06:46:18.335+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/test-suite-responses"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [546 < 7000 OR 546 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
|
1.0
|
FXLabs Testing 30 : ApiV1RunsIdTestSuiteResponsesGetPathParamIdMysqlSqlInjectionTimebound - Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YzgwMDExZjItZjU1NS00ZmFmLWE5ZmUtOTk1NWY2YjJlYTc4; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:46:17 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs//test-suite-responses
Request :
Response :
{
"timestamp" : "2018-11-30T06:46:18.335+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/test-suite-responses"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [546 < 7000 OR 546 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot ---
|
test
|
fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api runs test suite responses logs assertion resolved to result assertion resolved to result fx bot
| 1
|
6,577
| 2,610,257,149
|
IssuesEvent
|
2015-02-26 19:22:05
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
How many sessions of laser acne removal in Shenzhen does it take
|
auto-migrated Priority-Medium Type-Defect
|
```
How many sessions of laser acne removal in Shenzhen does it take [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built on a Korean secret formula: Hanfang Keyan, a state-certified treatment-grade authority and top choice for acne removal. The chain combines Korean secret formulas with professional "no-rebound" healthy acne-removal techniques and an advanced "deluxe color-light" device, pioneering contract-guaranteed treatment of pimples and acne and successfully clearing the acne of many customers.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:39
|
1.0
|
How many sessions of laser acne removal in Shenzhen does it take - ```
How many sessions of laser acne removal in Shenzhen does it take [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built on a Korean secret formula: Hanfang Keyan, a state-certified treatment-grade authority and top choice for acne removal. The chain combines Korean secret formulas with professional "no-rebound" healthy acne-removal techniques and an advanced "deluxe color-light" device, pioneering contract-guaranteed treatment of pimples and acne and successfully clearing the acne of many customers.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:39
|
non_test
|
how many sessions of laser acne removal in shenzhen does it take shenzhen hanfang keyan national hotline hour qq shenzhen hanfang keyan professional acne removal chain korean secret formula state certified treatment grade authority no rebound healthy acne removal techniques deluxe color light device contract guaranteed treatment of pimples and acne successfully cleared the acne of many customers original issue reported on code google com by szft com on may at
| 0
|
673,365
| 22,959,840,374
|
IssuesEvent
|
2022-07-19 14:33:07
|
kubernetes/ingress-nginx
|
https://api.github.com/repos/kubernetes/ingress-nginx
|
closed
|
Affinity setting affinity-canary-behavior: "legacy" doesn't fully restore old behavior
|
kind/bug lifecycle/rotten needs-triage needs-priority
|
**NGINX Ingress controller version**
v1.1.0
**Kubernetes version** (use `kubectl version`):
v1.21.2
**Environment**:
- **Cloud provider or hardware configuration**: AKS
- **OS** (e.g. from /etc/os-release): ubuntu
- **Kernel** (e.g. `uname -a`): 5.4.0-1062-azure
- **How was the ingress-nginx-controller installed**:
`kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml`
**What happened**:
We recently upgraded nginx, which changed how the canary feature behaves together with cookie session affinity. In the past the session cookie was completely ignored by the canary, which means even if the user was already affinitized to a primary pod, using the header defined by canary-by-header or the cookie defined by canary-by-cookie, the user would go to the canary side.
As we use the canary feature mainly for internal testing, and additionally for selected users (we set the cookie for them), we depended on this behavior.
With the new release, even using affinity-canary-behavior: "legacy", if the user is already affinitized to a primary pod the canary header/cookie is completely ignored until the session cookie is deleted or expires. This is a breaking change for us, as it now requires manually deleting the session cookie to direct the user to the canary side.
**What you expected to happen**:
Using the affinity-canary-behavior: "legacy", restores the old behavior when the session cookie was completely ignored by the canary feature, so with the correct canary header/cookie the user goes to the canary side even if already affinitized to a primary pod.
From the looks of it, the change was caused by https://github.com/kubernetes/ingress-nginx/pull/7371/files#diff-1057b4fc96d635cc08eabd1301c6a4ab3bf3272ceb86667020ecc56d94d6c195R189, which doesn't consider whether the legacy behavior is used.
**How to reproduce it**:
Create a canary deployment for the echoserver that uses session affinity with the legacy flag and canary.
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-svc
spec:
replicas: 1
selector:
matchLabels:
app: http-svc
template:
metadata:
labels:
app: http-svc
spec:
containers:
- name: http-svc
image: k8s.gcr.io/e2e-test-images/echoserver:2.3
ports:
- containerPort: 8080
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
name: http-svc
labels:
app: http-svc
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: http-svc
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/affinity-canary-behavior: "legacy"
spec:
rules:
- host: stickyingress.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-svc-canary
spec:
replicas: 1
selector:
matchLabels:
app: http-svc-canary
template:
metadata:
labels:
app: http-svc-canary
spec:
containers:
- name: http-svc-canary
image: k8s.gcr.io/e2e-test-images/echoserver:2.3
ports:
- containerPort: 8080
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
name: http-svc-canary
labels:
app: http-svc-canary
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: http-svc-canary
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test-canary
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "canary"
nginx.ingress.kubernetes.io/canary-by-cookie: "canary"
spec:
rules:
- host: stickyingress.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc-canary
port:
number: 80
```
Make a request to get the session cookie (using the ip of the nginx service here)
`curl -k -I https://51.136.77.203 -H "Host: stickyingress.example.com"`
`set-cookie: route=1645002633.401.440.174114|db2969d6c9db73733a7d888a6bd51c15; Expires=Fri, 18-Feb-22 09:10:32 GMT; Max-Age=172800; Path=/; Secure; HttpOnly`
Using the session cookie and the canary header, make a request; this should hit the canary, but the header is ignored:
`curl -k https://51.136.77.203 -H "Host: stickyingress.example.com" --cookie "route=1645000503.171.439.345197|db2969d6c9db73733a7d888a6bd51c15" -H "canary: always"`
`Hostname: http-svc-58dcbd68c4-lpkk7`
Make a request without the session cookie:
`curl -k https://51.136.77.203 -H "Host: stickyingress.example.com" -H "canary: always"`
`Hostname: http-svc-canary-5bbccbc7cd-c6s6p`
|
1.0
|
Affinity setting affinity-canary-behavior: "legacy" doesn't fully restore old behavior - **NGINX Ingress controller version**
v1.1.0
**Kubernetes version** (use `kubectl version`):
v1.21.2
**Environment**:
- **Cloud provider or hardware configuration**: AKS
- **OS** (e.g. from /etc/os-release): ubuntu
- **Kernel** (e.g. `uname -a`): 5.4.0-1062-azure
- **How was the ingress-nginx-controller installed**:
`kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml`
**What happened**:
We recently upgraded nginx, which changed how the canary feature behaves together with cookie session affinity. In the past the session cookie was completely ignored by the canary, which means even if the user was already affinitized to a primary pod, using the header defined by canary-by-header or the cookie defined by canary-by-cookie, the user would go to the canary side.
As we use the canary feature mainly for internal testing, and additionally for selected users (we set the cookie for them), we depended on this behavior.
With the new release, even using affinity-canary-behavior: "legacy", if the user is already affinitized to a primary pod the canary header/cookie is completely ignored until the session cookie is deleted or expires. This is a breaking change for us, as it now requires manually deleting the session cookie to direct the user to the canary side.
**What you expected to happen**:
Using the affinity-canary-behavior: "legacy", restores the old behavior when the session cookie was completely ignored by the canary feature, so with the correct canary header/cookie the user goes to the canary side even if already affinitized to a primary pod.
From the looks of it, the change was caused by https://github.com/kubernetes/ingress-nginx/pull/7371/files#diff-1057b4fc96d635cc08eabd1301c6a4ab3bf3272ceb86667020ecc56d94d6c195R189, which doesn't consider whether the legacy behavior is used.
**How to reproduce it**:
Create a canary deployment for the echoserver that uses session affinity with the legacy flag and canary.
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-svc
spec:
replicas: 1
selector:
matchLabels:
app: http-svc
template:
metadata:
labels:
app: http-svc
spec:
containers:
- name: http-svc
image: k8s.gcr.io/e2e-test-images/echoserver:2.3
ports:
- containerPort: 8080
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
name: http-svc
labels:
app: http-svc
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: http-svc
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/affinity-canary-behavior: "legacy"
spec:
rules:
- host: stickyingress.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-svc-canary
spec:
replicas: 1
selector:
matchLabels:
app: http-svc-canary
template:
metadata:
labels:
app: http-svc-canary
spec:
containers:
- name: http-svc-canary
image: k8s.gcr.io/e2e-test-images/echoserver:2.3
ports:
- containerPort: 8080
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
name: http-svc-canary
labels:
app: http-svc-canary
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: http-svc-canary
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-test-canary
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "canary"
nginx.ingress.kubernetes.io/canary-by-cookie: "canary"
spec:
rules:
- host: stickyingress.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: http-svc-canary
port:
number: 80
```
Make a request to get the session cookie (using the ip of the nginx service here)
`curl -k -I https://51.136.77.203 -H "Host: stickyingress.example.com"`
`set-cookie: route=1645002633.401.440.174114|db2969d6c9db73733a7d888a6bd51c15; Expires=Fri, 18-Feb-22 09:10:32 GMT; Max-Age=172800; Path=/; Secure; HttpOnly`
Using the session cookie and the canary header, make a request. This should hit the canary, but the header is ignored:
`curl -k https://51.136.77.203 -H "Host: stickyingress.example.com" --cookie "route=1645000503.171.439.345197|db2969d6c9db73733a7d888a6bd51c15" -H "canary: always"`
`Hostname: http-svc-58dcbd68c4-lpkk7`
Make a request without the session cookie:
`curl -k https://51.136.77.203 -H "Host: stickyingress.example.com" -H "canary: always"`
`Hostname: http-svc-canary-5bbccbc7cd-c6s6p`
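The precedence being reported can be modeled in a few lines. This is a hypothetical Python sketch of the decision logic, not the actual ingress-nginx code; the function and parameter names are illustrative:

```python
# Sketch of the two routing behaviors described above (illustrative only).

def route(headers, cookies, canary_header="canary", session_cookie="route",
          legacy=False):
    """Return "canary" or "primary" for a request.

    In the current behavior, an existing session cookie pins the request
    to the primary backend before the canary header is consulted.
    With the legacy behavior, the session cookie is ignored by the
    canary decision, so the canary header still wins.
    """
    wants_canary = headers.get(canary_header) == "always"
    has_affinity = session_cookie in cookies
    if legacy:
        return "canary" if wants_canary else "primary"
    # current behavior: affinity is checked before the canary header
    if has_affinity:
        return "primary"
    return "canary" if wants_canary else "primary"

# The reported problem: with a session cookie set, the canary header is
# ignored unless legacy mode really restores the old precedence.
print(route({"canary": "always"}, {"route": "abc"}, legacy=False))  # primary
print(route({"canary": "always"}, {"route": "abc"}, legacy=True))   # canary
```

The expectation behind `affinity-canary-behavior: "legacy"` is the second case: the canary header decides even when the session cookie is already present.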
|
non_test
|
affinity setting affinity canary behavior legacy doesn t fully restore old behavior nginx ingress controller version kubernetes version use kubectl version environment cloud provider or hardware configuration aks os e g from etc os release ubuntu kernel e g uname a azure how was the ingress nginx controller installed kubectl apply f what happened recently upgraded nginx which changed the behavior of how the canary feature behaves together with cookie sessions affinity in the past the session cookie was completely ignored by the canary which means even if the user was already affinitized to a primary pod using the header defined by canary by header or the cookie defined by canary by cookie the user would go to the canary side as we use the canary feature mainly for internal testing and additionally for selected users we set the cookie for them we depended on this behavior with the new release using affinity canary behavior legacy if the user is already affinitized to a primary pod it will completely ignore the canary header cookie until the session cookie is deleted expired which is a breaking change for us as it now requires manual deletion of the session cookie to direct the user to the canary side what you expected to happen using the affinity canary behavior legacy restores the old behavior when the session cookie was completely ignored by the canary feature so with the correct canary header cookie the user goes to the canary side even if already affinitized to a primary pod form the looks of it the change is caused by which doesn t consider if the legacy behavior is used how to reproduce it create a canary deployment for the echoserver that uses session affinity with the legacy flag and canary apiversion apps kind deployment metadata name http svc spec replicas selector matchlabels app http svc template metadata labels app http svc spec containers name http svc image gcr io test images echoserver ports containerport env name node name valuefrom fieldref 
fieldpath spec nodename name pod name valuefrom fieldref fieldpath metadata name name pod namespace valuefrom fieldref fieldpath metadata namespace name pod ip valuefrom fieldref fieldpath status podip apiversion kind service metadata name http svc labels app http svc spec ports port targetport protocol tcp name http selector app http svc apiversion networking io kind ingress metadata name nginx test annotations kubernetes io ingress class nginx nginx ingress kubernetes io affinity cookie nginx ingress kubernetes io session cookie name route nginx ingress kubernetes io session cookie expires nginx ingress kubernetes io session cookie max age nginx ingress kubernetes io affinity canary behavior legacy spec rules host stickyingress example com http paths path pathtype prefix backend service name http svc port number apiversion apps kind deployment metadata name http svc canary spec replicas selector matchlabels app http svc canary template metadata labels app http svc canary spec containers name http svc canary image gcr io test images echoserver ports containerport env name node name valuefrom fieldref fieldpath spec nodename name pod name valuefrom fieldref fieldpath metadata name name pod namespace valuefrom fieldref fieldpath metadata namespace name pod ip valuefrom fieldref fieldpath status podip apiversion kind service metadata name http svc canary labels app http svc canary spec ports port targetport protocol tcp name http selector app http svc canary apiversion networking io kind ingress metadata name nginx test canary annotations kubernetes io ingress class nginx nginx ingress kubernetes io canary true nginx ingress kubernetes io canary by header canary nginx ingress kubernetes io canary by cookie canary spec rules host stickyingress example com http paths path pathtype prefix backend service name http svc canary port number make a request to get the session cookie using the ip of the nginx service here curl k i h host stickyingress example com set cookie 
route expires fri feb gmt max age path secure httponly using the session cookie and canary header make a request this should hit the canary but the header is ignored curl k h host stickyingress example com cookie route h canary always hostname http svc make a request without the session cookie curl k h host stickyingress example com h canary always hostname http svc canary
| 0
|
7,215
| 4,820,744,018
|
IssuesEvent
|
2016-11-05 00:48:49
|
VirtualDisgrace/opencollar
|
https://api.github.com/repos/VirtualDisgrace/opencollar
|
closed
|
Restart stopped relays with reboot command
|
completed enhancement usability
|
We already have a command to restart all scripts (settings excluded) to get the collar refreshed without losing any settings.
`prefix reboot`
In a case where the RLV relay (oc_relay) has been crashed by a very bad spamming object, this sadly does not get the relay back to work.
So, let's add a `llSetScriptState("oc_relay",TRUE)` to the reboot command in case the relay was crashed before.
This way there is no need to reset manually, with all the hassle of possibly resetting all scripts (which resets the settings as well) and often also requiring a relog into non-RLV, etc.; a command can do it instead.
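The intended reboot behavior can be modeled in a few lines. This is a Python sketch of the logic only; the real fix is the LSL `llSetScriptState("oc_relay", TRUE)` call inside the reboot handler, and the script names here are illustrative:

```python
# Python sketch of the proposed fix (not actual LSL; script names are
# illustrative). Reboot should explicitly re-enable a crashed relay.

def reboot(script_states, relay="oc_relay"):
    """Restart all scripts, re-enabling the relay even if it was stopped."""
    script_states[relay] = True  # the new step: revive a crashed relay
    return script_states

states = {"oc_relay": False, "oc_settings": True, "oc_core": True}
print(reboot(states)["oc_relay"])  # True: the relay is running again
```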
|
True
|
Restart stopped relays with reboot command - We already have a command to restart all scripts (settings excluded) to get the collar refreshed without losing any settings.
`prefix reboot`
In a case where the RLV relay (oc_relay) has been crashed by a very bad spamming object, this sadly does not get the relay back to work.
So, let's add a `llSetScriptState("oc_relay",TRUE)` to the reboot command in case the relay was crashed before.
This way there is no need to reset manually, with all the hassle of possibly resetting all scripts (which resets the settings as well) and often also requiring a relog into non-RLV, etc.; a command can do it instead.
|
non_test
|
restart stopped relays with reboot command we have already a command to restart all scripts settings excluded to get the collar refreshed without losing any settings prefix reboot in a case where the rlv relay oc relay got crashed by a very bad spamming object somehow this sadly does not get the relay back to work so lets add a llsetscriptstate oc relay true to the reboot command in case the relay was crashed before this way there is no need to reset manually but a command can do it with hassle and possible resetting all scripts which ends in setting reset as well and often needs also a relog into non rlv etc etc
| 0
|
676,969
| 23,144,870,194
|
IssuesEvent
|
2022-07-28 22:52:46
|
apcountryman/picolibrary
|
https://api.github.com/repos/apcountryman/picolibrary
|
closed
|
Add not connected generic error
|
priority-normal status-awaiting_review type-enhancement
|
Add a not connected generic error (`::picolibrary::Generic_Error::NOT_CONNECTED`).
|
1.0
|
Add not connected generic error - Add a not connected generic error (`::picolibrary::Generic_Error::NOT_CONNECTED`).
|
non_test
|
add not connected generic error add operation timeout generic error picolibrary generic error not connected
| 0
|
340,337
| 24,650,288,408
|
IssuesEvent
|
2022-10-17 18:02:38
|
PyFPDF/fpdf2
|
https://api.github.com/repos/PyFPDF/fpdf2
|
closed
|
Doc: provide an example on how to combine usages of PyPDF2 & fpdf2
|
documentation good first issue up-for-grabs hacktoberfest
|
We already have a page about [editing existing PDFs using `pdfrw` & `fpdf2`](https://pyfpdf.github.io/fpdf2/ExistingPDFs.html) in our documentation, and another one about [combining `borb` & `fpdf2`](https://pyfpdf.github.io/fpdf2/borb.html).
Given that [PyPDF2](https://github.com/py-pdf/PyPDF2) is a lot more popular than `pdfrw`, we could provide another documentation page about combining `PyPDF2` & `fpdf2`.
Practically, we could:
* copy `docs/ExistingPDFs.md` into `docs/CombineWithPyPDF2.md`, a new documentation page describing how to
- open a PDF file with `PyPDF2` and edit it with `fpdf2`
- create a PDF document `fpdf2` and edit it with `PyPDF2`
* rename `docs/ExistingPDFs.md` into `docs/CombineWithPdfrw.md`
* rename `docs/borb.md` into `docs/CombineWithBorb.md`
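Under the assumption that the moves are done with plain file operations, the plan above amounts to the following sketch, run here against a scratch copy of the docs tree (only the paths from this issue are assumed):

```python
# Sketch of the file moves described above, against a temporary docs/ tree.
from pathlib import Path
import shutil
import tempfile

docs = Path(tempfile.mkdtemp()) / "docs"
docs.mkdir()
(docs / "ExistingPDFs.md").write_text("pdfrw + fpdf2\n")
(docs / "borb.md").write_text("borb + fpdf2\n")

# copy ExistingPDFs.md as the starting point for the new PyPDF2 page
shutil.copy(docs / "ExistingPDFs.md", docs / "CombineWithPyPDF2.md")
# rename the existing pages
(docs / "ExistingPDFs.md").rename(docs / "CombineWithPdfrw.md")
(docs / "borb.md").rename(docs / "CombineWithBorb.md")

print(sorted(p.name for p in docs.iterdir()))
# ['CombineWithBorb.md', 'CombineWithPdfrw.md', 'CombineWithPyPDF2.md']
```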
|
1.0
|
Doc: provide an example on how to combine usages of PyPDF2 & fpdf2 - We already have a page about [editing existing PDFs using `pdfrw` & `fpdf2`](https://pyfpdf.github.io/fpdf2/ExistingPDFs.html) in our documentation, and another one about [combining `borb` & `fpdf2`](https://pyfpdf.github.io/fpdf2/borb.html).
Given that [PyPDF2](https://github.com/py-pdf/PyPDF2) is a lot more popular than `pdfrw`, we could provide another documentation page about combining `PyPDF2` & `fpdf2`.
Practically, we could:
* copy `docs/ExistingPDFs.md` into `docs/CombineWithPyPDF2.md`, a new documentation page describing how to
- open a PDF file with `PyPDF2` and edit it with `fpdf2`
- create a PDF document `fpdf2` and edit it with `PyPDF2`
* rename `docs/ExistingPDFs.md` into `docs/CombineWithPdfrw.md`
* rename `docs/borb.md` into `docs/CombineWithBorb.md`
|
non_test
|
doc provide an example on how to combine usages of we already have a page about in our documentation and another one about given that is a lot more popular that pdfrw we could provide another documentation page about combining practically we could copy docs existingpdfs md into docs md a new documentation page describing how to open a pdf file with and edit it with create a pdf document and edit it with rename docs existingpdfs md into docs combinewithpdfrw md rename docs borb md into docs combinewithborb md
| 0
|
18,423
| 5,631,669,895
|
IssuesEvent
|
2017-04-05 15:00:54
|
TEAMMATES/teammates
|
https://api.github.com/repos/TEAMMATES/teammates
|
closed
|
Typos in CourseRoster, TeamEvalResult and FieldValidator
|
a-CodeQuality d.FirstTimers p.Low
|
Detailed CheckStyle report:
https://htmlpreview.github.io/?https://github.com/xpdavid/CS2103R-Report/blob/master/codingStandard/spelling/main.html
CourseRoster.java
Stuent is not a word according to the provided dictionary
``` java
public CourseRoster(List<StudentAttributes> students, List<InstructorAttributes> instructors) {
populateStuentListByEmail(students);
populateInstructorListByEmail(instructors);
}
```
TeamEvalResult.java
Should be camel case sumOfPerceived
``` java
double sumOfperceived = sum(filteredPerceived);
double sumOfActual = sum(filteredSanitizedActual);
```
FieldValidator.java
Invalidty is not a word according to the provided dictionary
``` java
public String getInvalidityInfoForTimeForVisibilityStartAndResultsPublish(Date visibilityStart,
Date resultsPublish) {
return getInvalidtyInfoForFirstTimeIsBeforeSecondTime(visibilityStart, resultsPublish,
SESSION_VISIBLE_TIME_FIELD_NAME, RESULTS_VISIBLE_TIME_FIELD_NAME);
}
```
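Slips like these can be caught mechanically. A minimal sketch follows; the dictionary holds only the words needed for this report's identifiers, so it is illustrative, not a general spell checker:

```python
# Minimal identifier spell-check sketch; the word list covers only the
# examples from this report.
import re

DICTIONARY = {"populate", "student", "list", "by", "email", "sum", "of",
              "perceived", "get", "invalidity", "info", "for", "first",
              "time", "is", "before", "second"}

def misspelled_parts(identifier):
    """Split a camelCase identifier and return parts not in the dictionary."""
    parts = re.findall(r"[A-Z]?[a-z]+", identifier)
    return [p for p in parts if p.lower() not in DICTIONARY]

print(misspelled_parts("populateStuentListByEmail"))   # ['Stuent']
print(misspelled_parts("getInvalidtyInfoForFirstTimeIsBeforeSecondTime"))  # ['Invalidty']
print(misspelled_parts("populateStudentListByEmail"))  # []
```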
|
1.0
|
Typos in CourseRoster, TeamEvalResult and FieldValidator - Detailed CheckStyle report:
https://htmlpreview.github.io/?https://github.com/xpdavid/CS2103R-Report/blob/master/codingStandard/spelling/main.html
CourseRoster.java
Stuent is not a word according to the provided dictionary
``` java
public CourseRoster(List<StudentAttributes> students, List<InstructorAttributes> instructors) {
populateStuentListByEmail(students);
populateInstructorListByEmail(instructors);
}
```
TeamEvalResult.java
Should be camel case sumOfPerceived
``` java
double sumOfperceived = sum(filteredPerceived);
double sumOfActual = sum(filteredSanitizedActual);
```
FieldValidator.java
Invalidty is not a word according to the provided dictionary
``` java
public String getInvalidityInfoForTimeForVisibilityStartAndResultsPublish(Date visibilityStart,
Date resultsPublish) {
return getInvalidtyInfoForFirstTimeIsBeforeSecondTime(visibilityStart, resultsPublish,
SESSION_VISIBLE_TIME_FIELD_NAME, RESULTS_VISIBLE_TIME_FIELD_NAME);
}
```
|
non_test
|
typos in courseroster teamevalresult and fieldvalidator detail checkstyle report courseroster java stuent is not a word according to provided dictionary java public courseroster list students list instructors populatestuentlistbyemail students populateinstructorlistbyemail instructors teamevalresult java should be camel case sumofperceived java double sumofperceived sum filteredperceived double sumofactual sum filteredsanitizedactual fieldvalidator java invalidty is not a word according to provided dictionary java public string getinvalidityinfofortimeforvisibilitystartandresultspublish date visibilitystart date resultspublish return getinvalidtyinfoforfirsttimeisbeforesecondtime visibilitystart resultspublish session visible time field name results visible time field name
| 0
|
160,914
| 13,803,338,829
|
IssuesEvent
|
2020-10-11 02:21:32
|
kevtan/CUDA
|
https://api.github.com/repos/kevtan/CUDA
|
opened
|
CUDA binaries
|
documentation
|
I know that `nvcc` is, to some extent, just a wrapper around a typical C/C++ compiler like `gcc`. This means that the structure of the object files that are generated must be similar. So, what exactly is _different_ about CUDA binaries? Are there special sections? What are their meanings?
|
1.0
|
CUDA binaries - I know that `nvcc` is, to some extent, just a wrapper around a typical C/C++ compiler like `gcc`. This means that the structure of the object files that are generated must be similar. So, what exactly is _different_ about CUDA binaries? Are there special sections? What are their meanings?
|
non_test
|
cuda binaries i know that the nvcc is to some extent just a wrapper around a typical c c compiler like gcc this means that the structure of the object files that are generated must be similar so what exactly is different about cuda binaries are there special sections what are their meanings
| 0
|
4,046
| 2,610,086,508
|
IssuesEvent
|
2015-02-26 18:26:17
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
Shenzhen acne removal cost
|
auto-migrated Priority-Medium Type-Defect
|
```
Shenzhen acne removal cost [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour
QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. Using a Korean
secret formula, Hanfang Keyan, a state-certified cosmetic treatment authority and premium
acne remedy, the chain combines a professional "no rebound" healthy acne-removal technique
with an advanced "deluxe color light" instrument, pioneering contract-guaranteed professional
treatment of pimples and acne in China and successfully clearing acne from many
customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:12
|
1.0
|
深圳除青春痘费用 - ```
深圳除青春痘费用【深圳韩方科颜全国热线400-869-1818,24小时
QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘��
�——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方�
��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健
康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业��
�疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘�
��。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:12
|
non_test
|
深圳除青春痘费用 深圳除青春痘费用【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 original issue reported on code google com by szft com on may at
| 0
|
11,694
| 3,218,645,572
|
IssuesEvent
|
2015-10-08 03:21:36
|
SpongePowered/Sponge
|
https://api.github.com/repos/SpongePowered/Sponge
|
closed
|
PositionOutOfBoundsException
|
bug needs testing
|
I had this crash now twice: http://pastebin.com/2ErVndLE
I know it's not the latest Sponge version, but as I am not exactly sure what caused it, I can't really reproduce it.
I have updated my Sponge version now and I'll tell you if the crash persists.
Maybe you can figure something out with that stack trace :)
As far as I can tell, none of my plugins are involved.
|
1.0
|
PositionOutOfBoundsException - I had this crash now twice: http://pastebin.com/2ErVndLE
I know it's not the latest Sponge version, but as I am not exactly sure what caused it, I can't really reproduce it.
I have updated my Sponge version now and I'll tell you if the crash persists.
Maybe you can figure something out with that stack trace :)
As far as I can tell, none of my plugins are involved.
|
test
|
positionoutofboundsexception i had this crash now twice i know its not the latest sponge version but as i am not exactly sure what it caused i cant reproduce it really i updated my sponge version now and i ll tell you if the crash persists maybe you can figure something with that stacktrace as far as i can tell none of my plugins are involved
| 1
|
391,028
| 11,567,635,847
|
IssuesEvent
|
2020-02-20 14:38:14
|
bryntum/support
|
https://api.github.com/repos/bryntum/support
|
closed
|
Gantt React: Trial runtime error with development server
|
bug forum high-priority react
|
Reported here
https://www.bryntum.com/forum/viewtopic.php?f=52&t=13384
Gantt react javascript demos fail to run in browser with
```
npm install
npm run start
```
Runtime error occurs
```
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type undefined
...
```
Build with `npm run start` works fine.
|
1.0
|
Gantt React: Trial runtime error with development server - Reported here
https://www.bryntum.com/forum/viewtopic.php?f=52&t=13384
Gantt react javascript demos fail to run in browser with
```
npm install
npm run start
```
Runtime error occurs
```
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type undefined
...
```
Build with `npm run start` works fine.
|
non_test
|
gantt react trial runtime error with development server reported here gantt react javascript demos fail to run in browser with npm install npm run start runtime error occurs typeerror the path argument must be of type string received type undefined build with npm run start works fine
| 0
|
24,487
| 23,827,960,544
|
IssuesEvent
|
2022-09-05 16:35:53
|
ultorg/public_issues
|
https://api.github.com/repos/ultorg/public_issues
|
closed
|
Improve the UI around creation and organization of perspectives
|
usability
|
Currently, one of the very first things a new Ultorg user tends to encounter is the following annoying dialog box:
<img width="277" alt="newpersp" src="https://user-images.githubusercontent.com/886243/166390247-1361106e-4461-4446-b518-e692b8116d7d.png">
The default behavior should probably be changed so that the default action when double-clicking a new database table is to create a new perspective based on that database table. On the other hand, we don't want to pollute the folder hierarchy with dozens of new perspectives. Some UI design work is needed here to work out the best behavior.
The current UI also does not prompt the user where to save new perspectives, or what to name them, leading to a large number of perspectives with names like "Courses (2)", "Courses (3)" etc. gathering in the root folder by default:
<img width="210" alt="Folders" src="https://user-images.githubusercontent.com/886243/166390507-a1d57d39-0cdd-49a7-9795-968aa7f252ff.png">
Furthermore, most users have not discovered that it is possible to create user-defined folders of data sources and perspectives in the Folders hierarchy. (Currently requires right-clicking the parent folder and clicking "New Folder".)
These related usability problems will be fixed in a future Ultorg version.
|
True
|
Improve the UI around creation and organization of perspectives - Currently, one of the very first things a new Ultorg user tends to encounter is the following annoying dialog box:
<img width="277" alt="newpersp" src="https://user-images.githubusercontent.com/886243/166390247-1361106e-4461-4446-b518-e692b8116d7d.png">
The default behavior should probably be changed so that the default action when double-clicking a new database table is to create a new perspective based on that database table. On the other hand, we don't want to pollute the folder hierarchy with dozens of new perspectives. Some UI design work is needed here to work out the best behavior.
The current UI also does not prompt the user where to save new perspectives, or what to name them, leading to a large number of perspectives with names like "Courses (2)", "Courses (3)" etc. gathering in the root folder by default:
<img width="210" alt="Folders" src="https://user-images.githubusercontent.com/886243/166390507-a1d57d39-0cdd-49a7-9795-968aa7f252ff.png">
Furthermore, most users have not discovered that it is possible to create user-defined folders of data sources and perspectives in the Folders hierarchy. (Currently requires right-clicking the parent folder and clicking "New Folder".)
These related usability problems will be fixed in a future Ultorg version.
|
non_test
|
improve the ui around creation and organization of perspectives currently one of the very first things a new ultorg user tends to encounter is the following annoying dialog box img width alt newpersp src the default behavior should probably be changed so that the default action when double clicking a new database table is to create a new perspective based on that database table on the other hand we don t want to pollute the folder hierarchy with dozens of new perspectives some ui design work is needed here to work out the best behavior the current ui also does not prompt the user where to save new perspectives or what to name them leading to a large number of perspectives with names like courses courses etc gathering in the root folder by default img width alt folders src furthermore most users have not discovered that it is possible to create user defined folders of data sources and perspectives in the folders hierarchy currently requires right clicking the parent folder and clicking new folder these related usability problems will be fixed in a future ultorg version
| 0
|
49,010
| 10,314,354,937
|
IssuesEvent
|
2019-08-30 03:07:10
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/code/file_tree·ts - Code File Tree Click file/directory on the file tree
|
Team:Code failed-test skipped-test
|
A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: expected false to be truthy
at Assertion.assert (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/packages/kbn-expect/expect.js:100:11)
at Assertion.ok (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/packages/kbn-expect/expect.js:119:8)
at ok (test/functional/apps/code/file_tree.ts:108:11)
at process._tickCallback (internal/process/next_tick.js:68:7)
at lastError (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/test/common/services/retry/retry_for_success.ts:28:9)
at onFailure (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/test/common/services/retry/retry_for_success.ts:68:13)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=x-pack-firefoxSmoke,node=linux-immutable/1611/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/code/file_tree·ts","test.name":"Code File Tree Click file/directory on the file tree","test.failCount":5}} -->
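The `retry.tryForTime` failure above means the asserted condition never became truthy before the timeout. The helper's shape is roughly as follows; this is a Python sketch of the pattern, not Kibana's actual implementation, and the names are illustrative:

```python
import time

def try_for_time(timeout_s, fn, interval_s=0.01):
    """Re-run fn until it returns without raising, or the timeout expires.

    Mirrors the behavior implied by the stack trace: the most recent
    error is kept and reported as the cause of the timeout failure.
    """
    deadline = time.monotonic() + timeout_s
    last_error = None
    while time.monotonic() < deadline:
        try:
            return fn()
        except Exception as err:  # keep the most recent failure
            last_error = err
            time.sleep(interval_s)
    raise TimeoutError(f"retry.tryForTime timeout: {last_error}")

# succeeds once the condition flips
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise AssertionError("expected false to be truthy")
    return True

print(try_for_time(1.0, flaky))  # True
```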
|
1.0
|
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/code/file_tree·ts - Code File Tree Click file/directory on the file tree - A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: expected false to be truthy
at Assertion.assert (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/packages/kbn-expect/expect.js:100:11)
at Assertion.ok (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/packages/kbn-expect/expect.js:119:8)
at ok (test/functional/apps/code/file_tree.ts:108:11)
at process._tickCallback (internal/process/next_tick.js:68:7)
at lastError (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/test/common/services/retry/retry_for_success.ts:28:9)
at onFailure (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-firefoxSmoke/node/linux-immutable/kibana/test/common/services/retry/retry_for_success.ts:68:13)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=x-pack-firefoxSmoke,node=linux-immutable/1611/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/code/file_tree·ts","test.name":"Code File Tree Click file/directory on the file tree","test.failCount":5}} -->
|
non_test
|
failing test firefox xpack ui functional tests x pack test functional apps code file tree·ts code file tree click file directory on the file tree a test failed on a tracked branch error retry tryfortime timeout error expected false to be truthy at assertion assert var lib jenkins workspace elastic kibana master job x pack firefoxsmoke node linux immutable kibana packages kbn expect expect js at assertion ok var lib jenkins workspace elastic kibana master job x pack firefoxsmoke node linux immutable kibana packages kbn expect expect js at ok test functional apps code file tree ts at process tickcallback internal process next tick js at lasterror var lib jenkins workspace elastic kibana master job x pack firefoxsmoke node linux immutable kibana test common services retry retry for success ts at onfailure var lib jenkins workspace elastic kibana master job x pack firefoxsmoke node linux immutable kibana test common services retry retry for success ts first failure
| 0
|
253,694
| 27,300,796,536
|
IssuesEvent
|
2023-02-24 01:38:48
|
panasalap/linux-4.19.72_1
|
https://api.github.com/repos/panasalap/linux-4.19.72_1
|
closed
|
CVE-2021-38300 (High) detected in kernelv4.19.76 - autoclosed
|
security vulnerability
|
## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kernelv4.19.76</b></p></summary>
<p>
<p>Our patched kernel sources. This repository is generated from https://github.com/openSUSE/kernel-source</p>
<p>Library home page: <a href=https://github.com/openSUSE/kernel.git>https://github.com/openSUSE/kernel.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 2021-09-20</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
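The 128 KB figure in the vulnerability details comes from the signed 16-bit word offset encoded in a MIPS conditional branch. A small sketch of the range check a JIT must perform on each branch offset (illustrative only, not the kernel's actual code):

```python
# Rough sketch of the constraint behind the CVE above: a MIPS conditional
# branch encodes a signed 16-bit offset counted in 4-byte instructions,
# i.e. roughly +/-128 KB of reach. Offsets outside this range must be
# rewritten (e.g. as an inverted branch over an absolute jump).

MIPS_BRANCH_BITS = 16  # signed 16-bit immediate
WORD_SIZE = 4          # offset is counted in 4-byte instructions

def branch_offset_fits(byte_offset):
    """True if a byte offset is reachable by one MIPS conditional branch."""
    words = byte_offset // WORD_SIZE
    lo = -(1 << (MIPS_BRANCH_BITS - 1))        # -32768 words
    hi = (1 << (MIPS_BRANCH_BITS - 1)) - 1     # +32767 words
    return lo <= words <= hi

print(branch_offset_fits(100_000))  # True: well within 128 KB
print(branch_offset_fits(200_000))  # False: exceeds the encodable range
```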
|
True
|
CVE-2021-38300 (High) detected in kernelv4.19.76 - autoclosed - ## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kernelv4.19.76</b></p></summary>
<p>
<p>Our patched kernel sources. This repository is generated from https://github.com/openSUSE/kernel-source</p>
<p>Library home page: <a href=https://github.com/openSUSE/kernel.git>https://github.com/openSUSE/kernel.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 2021-09-20</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in autoclosed cve high severity vulnerability vulnerable library our patched kernel sources this repository is generated from library home page a href found in head commit a href found in base branch master vulnerable source files arch mips net bpf jit c arch mips net bpf jit c vulnerability details arch mips net bpf jit c in the linux kernel before can generate undesirable machine code when transforming unprivileged cbpf programs allowing execution of arbitrary code within the kernel context this occurs because conditional branches can exceed the kb limit of the mips architecture publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
54,429
| 6,387,430,860
|
IssuesEvent
|
2017-08-03 13:39:55
|
QubesOS/updates-status
|
https://api.github.com/repos/QubesOS/updates-status
|
closed
|
artwork v4.0.0 (r4.0)
|
r4.0-dom0-testing
|
Update of artwork to v4.0.0 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-artwork/commit/93b71b6fd75db319aaa7452e6e089985601e40d5
[Changes since previous version](https://github.com/QubesOS/qubes-artwork/compare/v3.2.0...v4.0.0):
QubesOS/qubes-artwork@93b71b6 version 4.0.0
QubesOS/qubes-artwork@cfc2a53 travis: update for Qubes 4.0
QubesOS/qubes-artwork@50500b0 Switch mkpadlock script to python 3, adjust build deps
QubesOS/qubes-artwork@067f0ed travis: drop debootstrap workaround
QubesOS/qubes-artwork@9780a95 Renamed imgconverter module
Referenced issues:
If you're release manager, you can issue GPG-inline signed command:
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 current repo` (available 7 days from now)
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
|
1.0
|
artwork v4.0.0 (r4.0) - Update of artwork to v4.0.0 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-artwork/commit/93b71b6fd75db319aaa7452e6e089985601e40d5
[Changes since previous version](https://github.com/QubesOS/qubes-artwork/compare/v3.2.0...v4.0.0):
QubesOS/qubes-artwork@93b71b6 version 4.0.0
QubesOS/qubes-artwork@cfc2a53 travis: update for Qubes 4.0
QubesOS/qubes-artwork@50500b0 Switch mkpadlock script to python 3, adjust build deps
QubesOS/qubes-artwork@067f0ed travis: drop debootstrap workaround
QubesOS/qubes-artwork@9780a95 Renamed imgconverter module
Referenced issues:
If you're release manager, you can issue GPG-inline signed command:
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 current repo` (available 7 days from now)
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload artwork 93b71b6fd75db319aaa7452e6e089985601e40d5 r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
|
test
|
artwork update of artwork to for qubes see comments below for details built from qubesos qubes artwork version qubesos qubes artwork travis update for qubes qubesos qubes artwork switch mkpadlock script to python adjust build deps qubesos qubes artwork travis drop debootstrap workaround qubesos qubes artwork renamed imgconverter module referenced issues if you re release manager you can issue gpg inline signed command upload artwork current repo available days from now upload artwork current dists repo you can choose subset of distributions like vm vm available days from now upload artwork security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it
| 1
|
241,458
| 20,142,853,302
|
IssuesEvent
|
2022-02-09 02:18:28
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
reopened
|
Frequent test failures of `TestOffline`
|
priority/backlog kind/failing-test
|
This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_Linux_containerd](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_containerd&test=TestOffline)|27.59|
|
1.0
|
Frequent test failures of `TestOffline` - This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_Linux_containerd](https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux_containerd&test=TestOffline)|27.59|
|
test
|
frequent test failures of testoffline this test has high flake rates for the following environments environment flake rate
| 1
|
81,098
| 7,767,726,502
|
IssuesEvent
|
2018-06-03 10:06:30
|
NucleusPowered/Nucleus
|
https://api.github.com/repos/NucleusPowered/Nucleus
|
closed
|
Unable to delete nucleus-created world
|
needs testing pending information
|
Hello,
I recently created a world via /world create that I would now like to delete. When I attempt to unload it, which is necessary for deletion according to /world delete, using /world unload it gives the following output:
```
[18:01:21] [Server thread/INFO]: Attempting to unload world mining.
[18:01:21] [Server thread/INFO]: Unable to unload the world mining.
```
This happens even after a fresh server start with no players online. There should be no chunks force-loaded in the world. How can I unload and delete this world?
Thank you.
Versioning Information:
/nucleus info: http://paste.ubuntu.com/26030773/
Modpack: FTB Direwolf20 1.12 pack version 1.1.0
SpongeForge version: 1.12.2-2529-7.0.0-BETA-2730
OS: Ubuntu Server 16.04 LTS x64
Java: Oracle Java version 8 in ppa:webupd8team/java
|
1.0
|
Unable to delete nucleus-created world - Hello,
I recently created a world via /world create that I would now like to delete. When I attempt to unload it, which is necessary for deletion according to /world delete, using /world unload it gives the following output:
```
[18:01:21] [Server thread/INFO]: Attempting to unload world mining.
[18:01:21] [Server thread/INFO]: Unable to unload the world mining.
```
This happens even after a fresh server start with no players online. There should be no chunks force-loaded in the world. How can I unload and delete this world?
Thank you.
Versioning Information:
/nucleus info: http://paste.ubuntu.com/26030773/
Modpack: FTB Direwolf20 1.12 pack version 1.1.0
SpongeForge version: 1.12.2-2529-7.0.0-BETA-2730
OS: Ubuntu Server 16.04 LTS x64
Java: Oracle Java version 8 in ppa:webupd8team/java
|
test
|
unable to delete nucleus created world hello i recently created a world via world create that i would now like to delete when i attempt to unload it which is necessary for deletion according to world delete using world unload it gives the following output attempting to unload world mining unable to unload the world mining this happens even after a fresh server start with no players online there should be no chunks force loaded in the world how can i unload and delete this world thank you versioning information nucleus info modpack ftb pack version spongeforge version beta os ubuntu server lts java oracle java version in ppa java
| 1
|
296,181
| 9,105,230,865
|
IssuesEvent
|
2019-02-20 20:13:34
|
JustArchiNET/ASF-ui
|
https://api.github.com/repos/JustArchiNET/ASF-ui
|
closed
|
Local links in fetched wiki is broken
|
Bug Priority: Medium
|
## Description
ASF-ui fetched wiki for config params, but the local links in wiki text are broken like this:
```
http://127.0.0.1:1242/page/bot/botname/config#json-mapping
```
HTML may like this:
```html
<a href="#json-mapping">flags mapping</a>
```
This issue has nothing to do with the new fetching l10n wiki feature.
1 global config param and 3 bot config params are affected.
## Steps to reproduce
* go to any bot config window
* expand help text of `TradingPreferences`
* hover the `flags mapping` link and look the target
## Expected behavior
add missing configuration page link
## Current behavior
see description above
## Full snapshot
none
## Screenshots
none
## Additional information
none
|
1.0
|
Local links in fetched wiki is broken - ## Description
ASF-ui fetched wiki for config params, but the local links in wiki text are broken like this:
```
http://127.0.0.1:1242/page/bot/botname/config#json-mapping
```
HTML may like this:
```html
<a href="#json-mapping">flags mapping</a>
```
This issue has nothing to do with the new fetching l10n wiki feature.
1 global config param and 3 bot config params are affected.
## Steps to reproduce
* go to any bot config window
* expand help text of `TradingPreferences`
* hover the `flags mapping` link and look the target
## Expected behavior
add missing configuration page link
## Current behavior
see description above
## Full snapshot
none
## Screenshots
none
## Additional information
none
|
non_test
|
local links in fetched wiki is broken description asf ui fetched wiki for config params but the local links in wiki text are broken like this html may like this html flags mapping this issue has nothing to do with the new fetching wiki feature global config param and bot config params are affected steps to reproduce go to any bot config window expand help text of tradingpreferences hover the flags mapping link and look the target expected behavior add missing configuration page link current behavior see description above full snapshot none screenshots none additional information none
| 0
|
63,052
| 17,358,581,770
|
IssuesEvent
|
2021-07-29 17:16:04
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
opened
|
Pointer icon doesn't indicate a link to the Detail view, when hovering over the table entries
|
defect design
|
The previously run tests in the table are both summaries and links to their Detail panels.
The row correctly gets a hover highlight (silver background)
The mouse pointer remains the standard icon pointer arrow over the row, unless over text, at which point it becomes the text insertion caret - in both cases, the Mouse-Down event places a text caret at the end of the text of the nearest column to the left of the Mouse-Down.
With the mouse down, a Move/Drag event highlights all text to the right of the Mouse-Down caret, through to the text immediately left of the current mouse position. That is all fine except for the following:
If the selection is limited to some or all of the text on one line, the Mouse-Up event acts as a simple click anywhere on the line and links through to the Detail view. This would be fine if the mouse icon pointer changed to a pointy finger whenever over any part of a row and text could not be selected. So Mouse Up on the same line as a previous Mouse Down would make the Detail link. However, if text selection is important or useful (which I think it probably is for cutting and pasting in the filter view), then the behaviour should be: pointy finger icon over all the line. If the user does a Mouse down over any text and keeps it down for more than 0.3secs, switch the pointer to a text caret. Drag will highlight the text between the Mouse Down and current position. Mouse up WILL NOT follow the link to the Detail view; it will just leave the text highlighting in place, allowing the user to copy the text. This behaviour already exists (apart from the pointer changes) if the drag spans multiple rows.
If this explanation is too complex to understand, book 15 mins with @lubelg for a demo
|
1.0
|
Pointer icon doesn't indicate a link to the Detail view, when hovering over the table entries - The previously run tests in the table are both summaries and links to their Detail panels.
The row correctly gets a hover highlight (silver background)
The mouse pointer remains the standard icon pointer arrow over the row, unless over text, at which point it becomes the text insertion caret - in both cases, the Mouse-Down event places a text caret at the end of the text of the nearest column to the left of the Mouse-Down.
With the mouse down, a Move/Drag event highlights all text to the right of the Mouse-Down caret, through to the text immediately left of the current mouse position. That is all fine except for the following:
If the selection is limited to some or all of the text on one line, the Mouse-Up event acts as a simple click anywhere on the line and links through to the Detail view. This would be fine if the mouse icon pointer changed to a pointy finger whenever over any part of a row and text could not be selected. So Mouse Up on the same line as a previous Mouse Down would make the Detail link. However, if text selection is important or useful (which I think it probably is for cutting and pasting in the filter view), then the behaviour should be: pointy finger icon over all the line. If the user does a Mouse down over any text and keeps it down for more than 0.3secs, switch the pointer to a text caret. Drag will highlight the text between the Mouse Down and current position. Mouse up WILL NOT follow the link to the Detail view; it will just leave the text highlighting in place, allowing the user to copy the text. This behaviour already exists (apart from the pointer changes) if the drag spans multiple rows.
If this explanation is too complex to understand, book 15 mins with @lubelg for a demo
|
non_test
|
pointer icon doesn t indicate a link to the detail view when hovering over the table entries the previously run tests in the table are both summaries and links to their detail panels the row correctly gets a hover highlight silver background the mouse pointer remains the standard icon pointer arrow over the row unless over text at which point it becomes the text insertion caret in both cases the mouse down event places a text caret at the end of the text of the nearest column to the left of the mouse down with the mouse down a move drag event highlights all text to the right of the mouse down caret through to the text immediately left of the current mouse position that is all fine except for the following if the selection is limited to some or all of the text on one line the mouse up event acts as a simple click anywhere on the line and links through to the detail view this would be fine if the mouse icon pointer changed to a pointy finger whenever over any part of a row and text could not be selected so mouse up on the same line as a previous mouse down would make the detail link however if text selection is important or useful which i think it probably is for cutting and pasting in the filter view then the behaviour should be pointy finger icon over all the line if the user does a mouse down over any text and keeps it down for more than switch the pointer to a text caret drag will highlight the text between the mouse down and current position mouse up will not follow the link to detail view it will just leave the text highlighting in place allowing the user to copy the text this behaviour already exists apart from the pointer changes if the drag spans multiple rows if this explanation is too complex to understand book mins with lubelg for a demo
| 0
|
253,849
| 21,709,538,717
|
IssuesEvent
|
2022-05-10 12:47:33
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
Event Hubs Function Bindings: Enable tests ignored due to data binding failures
|
Event Hubs Client Functions test-reliability
|
# Summary
Due to limitations in how data binding is performed in the WebJobs SDK, changes in the class hierarchy for `EventData` have resulted in consistent test failures. These should be restored once a fix has been made to the WebJobs SDK and a new package released.
Tests Ignored:
- [EventHub_InitialOffsetFromEnqueuedTime](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L387)
- [EventHub_ProducerClient](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L139)
- [EventHub_PartitionKey](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L315)
# References and Related
- [BindingDataProvider.FromType Fails to Resolve Shadowed Properties (WebJobs SDK #2830)](https://github.com/Azure/azure-webjobs-sdk/issues/2830)
- [Example Test Run _(Microsoft internal)_](https://dev.azure.com/azure-sdk/internal/_build/results?buildId=1355409&view=ms.vss-test-web.build-test-results-tab)
|
1.0
|
Event Hubs Function Bindings: Enable tests ignored due to data binding failures - # Summary
Due to limitations in how data binding is performed in the WebJobs SDK, changes in the class hierarchy for `EventData` have resulted in consistent test failures. These should be restored once a fix has been made to the WebJobs SDK and a new package released.
Tests Ignored:
- [EventHub_InitialOffsetFromEnqueuedTime](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L387)
- [EventHub_ProducerClient](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L139)
- [EventHub_PartitionKey](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventhub/Microsoft.Azure.WebJobs.Extensions.EventHubs/tests/EventHubEndToEndTests.cs#L315)
# References and Related
- [BindingDataProvider.FromType Fails to Resolve Shadowed Properties (WebJobs SDK #2830)](https://github.com/Azure/azure-webjobs-sdk/issues/2830)
- [Example Test Run _(Microsoft internal)_](https://dev.azure.com/azure-sdk/internal/_build/results?buildId=1355409&view=ms.vss-test-web.build-test-results-tab)
|
test
|
event hubs function bindings enable tests ignored due to data binding failures summary due to limitations in how data binding is performed in the webjobs sdk changes in the class hierarchy for eventdata have resulted in consistent test failures these should be restored once a fix has been made to the webjobs sdk and a new package released tests ignored references and related
| 1
|
118,854
| 10,013,783,715
|
IssuesEvent
|
2019-07-15 15:55:21
|
CentOS-PaaS-SIG/linchpin
|
https://api.github.com/repos/CentOS-PaaS-SIG/linchpin
|
closed
|
RFE: Minimize the integration testing on downstream and upstream
|
medium priority rfe testing
|
Linchpin has an aggressive testing suite. However, it is observed that it has many duplicate tests in place which test the same feature multiple times.
For example:
if the templating feature is tested, it automatically tests the provisioning feature within AWS instances.
Similarly, all the tests can be minimized to achieve effective testing in less time.
|
1.0
|
RFE: Minimize the integration testing on downstream and upstream - Linchpin has an aggressive testing suite. However, it is observed that it has many duplicate tests in place which test the same feature multiple times.
For example:
if the templating feature is tested, it automatically tests the provisioning feature within AWS instances.
Similarly, all the tests can be minimized to achieve effective testing in less time.
|
test
|
rfe minimize the integration testing on downstream and upstream linchpin has an aggressive testing suite however it is observed that it has many duplicate tests in place which tests the same feature multiple time for example if the templating feature is tested it automatically tests the provisioning feature within aws instances similarly all the tests can be minimized to match effective testing in less time
| 1
|
252,748
| 19,062,108,509
|
IssuesEvent
|
2021-11-26 09:12:37
|
pace/cloud-sdk-capacitor-plugin
|
https://api.github.com/repos/pace/cloud-sdk-capacitor-plugin
|
closed
|
Update README
|
documentation
|
### Motivation
Currently it is quite hard to know how to setup the project. The README should include:
1. How to setup the capacitor plugin for iOS
2. How to setup the capacitor plugin for Android
3. How to update the PACE Cloud SDK for iOS
4. How to update the PACE Cloud SDK for Android
5. How to update the plugin (adjusting of interface files etc.)
4. How to release a new version of the plugin via npm
|
1.0
|
Update README - ### Motivation
Currently it is quite hard to know how to setup the project. The README should include:
1. How to setup the capacitor plugin for iOS
2. How to setup the capacitor plugin for Android
3. How to update the PACE Cloud SDK for iOS
4. How to update the PACE Cloud SDK for Android
5. How to update the plugin (adjusting of interface files etc.)
4. How to release a new version of the plugin via npm
|
non_test
|
update readme motivation currently it is quite hard to know how to setup the project the readme should include how to setup the capacitor plugin for ios how to setup the capacitor plugin for android how to update the pace cloud sdk for ios how to update the pace cloud sdk for android how to update the plugin adjusting of interface files etc how to release a new version of the plugin via npm
| 0
|
287,623
| 21,662,183,344
|
IssuesEvent
|
2022-05-06 20:38:38
|
microsoft/vscode-webview-ui-toolkit-samples
|
https://api.github.com/repos/microsoft/vscode-webview-ui-toolkit-samples
|
closed
|
Create sample extension using Svelte
|
documentation
|
### Sample Extension Description
Implement the default `hello-world` sample extension using Svelte to demonstrate how extension authors can scaffold a Svelte-based webview extension and use the toolkit with that extension.
|
1.0
|
Create sample extension using Svelte - ### Sample Extension Description
Implement the default `hello-world` sample extension using Svelte to demonstrate how extension authors can scaffold a Svelte-based webview extension and use the toolkit with that extension.
|
non_test
|
create sample extension using svelte sample extension description implement the default hello world sample extension using svelte to demonstrate how extension authors can scaffold a svelte based webview extension and use the toolkit with that extension
| 0
|
83,790
| 7,881,629,816
|
IssuesEvent
|
2018-06-26 19:43:45
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test: createQuickPick, createInputBox API
|
testplan-item
|
- [x] anyOS @dbaeumer
- [x] anyOS @misolori
Complexity: 5
Test the `createQuickPick` and `createInputBox` proposed APIs. Including:
- Documentation in `vscode.proposed.d.ts`.
- Samples in [quickinput-sample](https://github.com/Microsoft/vscode-extension-samples/tree/master/quickinput-sample)
- Think of different scenarios where you would use this API and make sure you can and it works as expected.
- Go through the various methods/properties and check they work.
|
1.0
|
Test: createQuickPick, createInputBox API - - [x] anyOS @dbaeumer
- [x] anyOS @misolori
Complexity: 5
Test the `createQuickPick` and `createInputBox` proposed APIs. Including:
- Documentation in `vscode.proposed.d.ts`.
- Samples in [quickinput-sample](https://github.com/Microsoft/vscode-extension-samples/tree/master/quickinput-sample)
- Think of different scenarios where you would use this API and make sure you can and it works as expected.
- Go through the various methods/properties and check they work.
|
test
|
test createquickpick createinputbox api anyos dbaeumer anyos misolori complexity test the createquickpick and createinputbox proposed apis including documentation in vscode proposed d ts samples in think of different scenarios where you would use this api and make sure you can and it works as expected go through the various methods properties and check they work
| 1
|
157,095
| 12,354,543,692
|
IssuesEvent
|
2020-05-16 08:17:03
|
saltstack/salt
|
https://api.github.com/repos/saltstack/salt
|
closed
|
[Test Failure] unit.setup.test_install.InstallTest
|
Test Failure
|
https://jenkinsci.saltstack.com/job/pr-amazon2-py3-slow/job/master/17/
```
unit.setup.test_install.InstallTest.test_egg
unit.setup.test_install.InstallTest.test_sdist
unit.setup.test_install.InstallTest.test_setup_install
unit.setup.test_install.InstallTest.test_wheel
```
|
1.0
|
[Test Failure] unit.setup.test_install.InstallTest - https://jenkinsci.saltstack.com/job/pr-amazon2-py3-slow/job/master/17/
```
unit.setup.test_install.InstallTest.test_egg
unit.setup.test_install.InstallTest.test_sdist
unit.setup.test_install.InstallTest.test_setup_install
unit.setup.test_install.InstallTest.test_wheel
```
|
test
|
unit setup test install installtest unit setup test install installtest test egg unit setup test install installtest test sdist unit setup test install installtest test setup install unit setup test install installtest test wheel
| 1
|
225,177
| 17,796,565,859
|
IssuesEvent
|
2021-08-31 23:21:38
|
openservicemesh/osm
|
https://api.github.com/repos/openservicemesh/osm
|
closed
|
test: cmd/cli/proxy_get.go
|
size/S area/tests
|
In `cmd/cli/proxy_get.go` the run function does not have good unit test coverage for the `osm proxy get QUERY POD` command. It would be great to write a small test for this function.
It will be helpful to test any/all of the following:
* port forwarding for a given pod in a mesh is able to start
* http.get request is correctly issued and the url is correctly resolved from the arg
* if the pod is not in the mesh, an error is returned
* response body is correctly rendered



|
1.0
|
test: cmd/cli/proxy_get.go - In `cmd/cli/proxy_get.go` the run function does not have good unit test coverage for the `osm proxy get QUERY POD` command. It would be great to write a small test for this function.
It will be helpful to test any/all of the following:
* port forwarding for a given pod in a mesh is able to start
* http.get request is correctly issued and the url is correctly resolved from the arg
* if the pod is not in the mesh, an error is returned
* response body is correctly rendered



|
test
|
test cmd cli proxy get go in cmd cli proxy get go the run function does not have good unit test coverage for the osm proxy get query pod command it would be great to write a small test for this function it will be helpful to test any all of the following port forwarding for a given pod in a mesh is able to start http get request is correctly issued and the url is correctly resolved from the arg if the pod is not in the mesh an error is returned response body is correctly rendered
| 1
|
135,686
| 30,344,705,473
|
IssuesEvent
|
2023-07-11 14:44:26
|
ita-social-projects/StreetCode
|
https://api.github.com/repos/ita-social-projects/StreetCode
|
opened
|
Admin [Text and Video block] When Admin click on the 'Дещо менше' button in previewing the text,it throws to the footer of the site
|
bug (Epic#2) Admin/New StreetCode
|
**Environment:** OS, Windows 11
**Browser**: Google Chrome Version 114.0.5735.135
**Reproducible:** always
**Build found:** https://github.com/ita-social-projects/StreetCode/commit/1b79c162699aafffaa1f620fb97e9eb04fd6e0fb
**Preconditions**
1.Go to http://185.230.138.173/admin-panel
2.Login as 'adminStreetcode ' , password - 'pH2603VkN4d'
3.Go to 'Стріткоди'
**Steps to reproduce**
1. Click on 'Новий стріткод' button
2. Go to the Text and Video block
3. Enter the text in the field 'Основний текст' which contains 15 000 symlols ( the max limit )
4. Click on 'Попередній перегляд тексту' button
5. Click on 'Трохи ще' button
6. Click on 'Дещо менше' button
Uploading Стріткод _ Історія на кожному кроці - Google Chrome 2023-07-11 16-43-50.mp4…
**Actual result**
The text is wrapped and Admin stays in the 'Text and Video' block
**Expected result**
Admin is redirected to the footer of the site
**User story and test case links**
E.g.: "User story #122
**Labels to be added**
"Bug", Priority ("pri: "), Severity ("severity:"), Type ("UI, "Functional"), "API" (for back-end bugs).
|
1.0
|
Admin [Text and Video block] When Admin click on the 'Дещо менше' button in previewing the text,it throws to the footer of the site - **Environment:** OS, Windows 11
**Browser**: Google Chrome Version 114.0.5735.135
**Reproducible:** always
**Build found:** https://github.com/ita-social-projects/StreetCode/commit/1b79c162699aafffaa1f620fb97e9eb04fd6e0fb
**Preconditions**
1.Go to http://185.230.138.173/admin-panel
2.Login as 'adminStreetcode ' , password - 'pH2603VkN4d'
3.Go to 'Стріткоди'
**Steps to reproduce**
1. Click on 'Новий стріткод' button
2. Go to the Text and Video block
3. Enter the text in the field 'Основний текст' which contains 15 000 symlols ( the max limit )
4. Click on 'Попередній перегляд тексту' button
5. Click on 'Трохи ще' button
6. Click on 'Дещо менше' button
Uploading Стріткод _ Історія на кожному кроці - Google Chrome 2023-07-11 16-43-50.mp4…
**Actual result**
The text is wrapped and Admin stays in the 'Text and Video' block
**Expected result**
Admin is redirected to the footer of the site
**User story and test case links**
E.g.: "User story #122
**Labels to be added**
"Bug", Priority ("pri: "), Severity ("severity:"), Type ("UI, "Functional"), "API" (for back-end bugs).
|
non_test
|
admin when admin click on the дещо менше button in previewing the text it throws to the footer of the site environment os windows browser google chrome version reproducible always build found preconditions go to login as adminstreetcode password go to стріткоди steps to reproduce click on новий стріткод button go to the text and video block enter the text in the field основний текст which contains symlols the max limit click on попередній перегляд тексту button click on трохи ще button click on дещо менше button uploading стріткод історія на кожному кроці google chrome … actual result the text is wrapped and admin stays in the text and video block expected result admin is redirected to the footer of the site user story and test case links e g user story labels to be added bug priority pri severity severity type ui functional api for back end bugs
| 0
|
97,890
| 16,255,447,016
|
IssuesEvent
|
2021-05-08 03:44:22
|
scriptex/weather-app
|
https://api.github.com/repos/scriptex/weather-app
|
closed
|
CVE-2021-23343 (Medium) detected in path-parse-1.0.6.tgz
|
security vulnerability
|
## CVE-2021-23343 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>Path to dependency file: weather-app/package.json</p>
<p>Path to vulnerable library: weather-app/node_modules/path-parse/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- resolve-1.18.1.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/weather-app/commit/a3e73003f5179309dd90075e7126dea6e53d3db2">a3e73003f5179309dd90075e7126dea6e53d3db2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
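The vulnerability above is a nested-quantifier style ReDoS. As a minimal sketch (an assumption, not how WhiteSource's scanner actually works), a naive heuristic can flag regex patterns with the quantified-group-inside-a-quantifier shape that typically causes catastrophic or polynomial backtracking:

```python
import re

# Naive heuristic (an assumption, not a real scanner): flag patterns that
# contain a quantified group which is itself quantified, e.g. "(a+)+",
# the shape most often behind ReDoS findings like CVE-2021-23343.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*][^)]*\)\s*[+*{]")

def looks_redos_prone(pattern: str) -> bool:
    """Return True if the pattern has a nested-quantifier shape."""
    return bool(NESTED_QUANTIFIER.search(pattern))

print(looks_redos_prone(r"(a+)+b"))         # nested quantifier: flagged
print(looks_redos_prone(r"[a-z]+@[a-z]+"))  # plain quantifiers: not flagged
```

A heuristic like this only narrows the search; confirming polynomial worst-case behavior still requires analyzing the specific pattern, as the advisory does for splitDeviceRe, splitTailRe, and splitPathRe.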
|
True
|
CVE-2021-23343 (Medium) detected in path-parse-1.0.6.tgz - ## CVE-2021-23343 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary>
<p>Node.js path.parse() ponyfill</p>
<p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p>
<p>Path to dependency file: weather-app/package.json</p>
<p>Path to vulnerable library: weather-app/node_modules/path-parse/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- resolve-1.18.1.tgz
- :x: **path-parse-1.0.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/weather-app/commit/a3e73003f5179309dd90075e7126dea6e53d3db2">a3e73003f5179309dd90075e7126dea6e53d3db2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in path parse tgz cve medium severity vulnerability vulnerable library path parse tgz node js path parse ponyfill library home page a href path to dependency file weather app package json path to vulnerable library weather app node modules path parse package json dependency hierarchy react scripts tgz root library resolve tgz x path parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href step up your open source security game with whitesource
| 0
|
291,501
| 25,152,293,033
|
IssuesEvent
|
2022-11-10 10:54:49
|
wazuh/wazuh-qa
|
https://api.github.com/repos/wazuh/wazuh-qa
|
opened
|
Check fix to Windows 11 agent installation error
|
team/qa type/dev-testing status/not-tracked
|
| Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.3.10 | https://github.com/wazuh/wazuh/issues/15216 | |
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
<!-- Description that puts into context and shows the QA tester the changes that have been made by the developer and need to be tested. -->
An error was reported while installing the Wazuh agent in Windows 11. The investigation found that, on hosts with an incorrect WMI status, the error can occur and the installation could fail.
## Proposed checks
<!-- Indicate through a list of checkboxes the suggested checks to be carried out by the QA tester -->
The following OS needs to be tested: Windows XP, 2003, Vista, 2008, 7, 2008R2, 8, 2012, 8.1, 2012R2, 10, 2016, 2019, 2022, 11
Two scenarios:
- OS with disabled WMI permissions.
- OS with enabled WMI permissions.
## Steps to reproduce
<!--
(DELETE SECTION IF NOT APPLICABLE) If the changes correspond to the fix of a bug or behavior, indicate the steps necessary to reproduce it before the fix
-->
Win+R, go to wmimgmt.msc, root/cimv2, Security, and mark Deny permissions in Authenticated users.
## Expected results
<!-- Indicate expected results such as behaviors, logs... -->
Installation and upgrades should be completed without problems. Further, in the disabled WMI permissions scenario, the installation using logs should have lines explaining that the WMI implementation couldn't be fetched and the installation continues.
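The proposed checks above form a test matrix: every listed Windows version crossed with both WMI-permission scenarios. A minimal sketch of enumerating that matrix (the structure is hypothetical, not part of the wazuh-qa tooling):

```python
from itertools import product

# The 15 Windows versions listed in the proposed checks.
WINDOWS_VERSIONS = [
    "XP", "2003", "Vista", "2008", "7", "2008R2", "8", "2012",
    "8.1", "2012R2", "10", "2016", "2019", "2022", "11",
]
# The two scenarios: WMI permissions disabled and enabled.
WMI_SCENARIOS = ["WMI disabled", "WMI enabled"]

# Cross the two lists to get one entry per required check.
test_matrix = [
    {"os": os_name, "scenario": scenario}
    for os_name, scenario in product(WINDOWS_VERSIONS, WMI_SCENARIOS)
]

print(len(test_matrix))  # 15 versions x 2 scenarios = 30 checks
```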
|
1.0
|
Check fix to Windows 11 agent installation error - | Target version | Related issue | Related PR |
|--------------------|--------------------|-----------------|
| 4.3.10 | https://github.com/wazuh/wazuh/issues/15216 | |
<!-- Important: No section may be left blank. If not, delete it directly (in principle only Steps to reproduce could be left blank in case of not proceeding, although there are always exceptions). -->
## Description
<!-- Description that puts into context and shows the QA tester the changes that have been made by the developer and need to be tested. -->
An error was reported while installing the Wazuh agent in Windows 11. The investigation found that, on hosts with an incorrect WMI status, the error can occur and the installation could fail.
## Proposed checks
<!-- Indicate through a list of checkboxes the suggested checks to be carried out by the QA tester -->
The following OS needs to be tested: Windows XP, 2003, Vista, 2008, 7, 2008R2, 8, 2012, 8.1, 2012R2, 10, 2016, 2019, 2022, 11
Two scenarios:
- OS with disabled WMI permissions.
- OS with enabled WMI permissions.
## Steps to reproduce
<!--
(DELETE SECTION IF NOT APPLICABLE) If the changes correspond to the fix of a bug or behavior, indicate the steps necessary to reproduce it before the fix
-->
Win+R, go to wmimgmt.msc, root/cimv2, Security, and mark Deny permissions in Authenticated users.
## Expected results
<!-- Indicate expected results such as behaviors, logs... -->
Installation and upgrades should be completed without problems. Further, in the disabled WMI permissions scenario, the installation using logs should have lines explaining that the WMI implementation couldn't be fetched and the installation continues.
|
test
|
check fix to windows agent installation error target version related issue related pr description an error was reported while installing the wazuh agent in windows the investigation founds that in those hosts with an incorrect vmi status the error can occur so the installation could fail proposed checks the following os needs to be tested windows xp vista two scenarios os with disabled wmi permissions os with enabled wmi permissions steps to reproduce delete section if not applicable if the changes correspond to the fix of a bug or behavior indicate the steps necessary to reproduce it before the fix win r go to wmimgmt msc root security and mark deny permissions in authenticated users expected results installation and upgrades should be completed without problems further in the disabled wmi permissions scenario the installation using logs should have lines explaining that the wmi implementation couldn t be fetched and the installation continues
| 1
|
124,951
| 10,330,945,718
|
IssuesEvent
|
2019-09-02 16:03:24
|
red/red
|
https://api.github.com/repos/red/red
|
closed
|
[Reactivity] access violation on evaluation of REPEAT expression bounded to reactor's context
|
status.built status.tested test.written type.bug
|
**Describe the bug**
Prerequisites:
1. Reactive object that contains a word.
1. `repeat` expression that **(a)** iterates more than once over the word with the same spelling as in **(1)** and **(b)** queries the said word.
Now, bind **(2)** to **(1)** and evaluate it, or simply embed expression **(2)** inside `make reactor! ...` body. In either case this results in access violation.
**To reproduce**
```red
make reactor! [i: repeat i 2 [i]]
```
Stack trace:
```red
*** Runtime Error 1: access violation
*** in file: .../runtime/datatypes/block.reds
*** at line: 50
***
*** stack: red/block/rs-head 026CF794h
*** stack: red/interpreter/eval 026CF794h true
*** stack: red/natives/repeat* false
*** stack: red/interpreter/eval-arguments 027D0394h 029386B0h 029386B0h 00000000h 00000000h
*** stack: red/interpreter/eval-code 027D0394h 02938680h 029386B0h false 00000000h 00000000h 027D0394h
*** stack: red/interpreter/eval-expression 02938680h 029386B0h false false false
*** stack: red/interpreter/eval 026CF734h false
*** stack: red/object/make 026CF724h 026CF734h 32
*** stack: red/actions/make 026CF724h 026CF734h
*** stack: red/actions/make*
*** stack: red/interpreter/eval-arguments 027CFDB4h 02938584h 02938584h 00000000h 00000000h
*** stack: red/interpreter/eval-code 027CFDB4h 02938564h 02938584h false 00000000h 00000000h 027CFDB4h
*** stack: red/interpreter/eval-expression 02938564h 02938584h false false false
*** stack: red/interpreter/eval 026CF6F4h true
*** stack: red/natives/catch* true 1
*** stack: ctx419~try-do 006EA918h
*** stack: ctx419~do-command 006EA918h
*** stack: ctx419~eval-command 006EA918h
*** stack: ctx419~run 006EA918h
*** stack: ctx419~launch 006EA918h
*** stack: ctx437~launch 006EA418h
```
**Expected behavior**
Loop runs as usual, reactions are triggered when necessary.
**Platform version**
```
Red 0.6.4 for Windows built 31-Aug-2019/17:47:43+05:00 commit #b28d8f5
```
|
2.0
|
[Reactivity] access violation on evaluation of REPEAT expression bounded to reactor's context - **Describe the bug**
Prerequisites:
1. Reactive object that contains a word.
1. `repeat` expression that **(a)** iterates more than once over the word with the same spelling as in **(1)** and **(b)** queries the said word.
Now, bind **(2)** to **(1)** and evaluate it, or simply embed expression **(2)** inside `make reactor! ...` body. In either case this results in access violation.
**To reproduce**
```red
make reactor! [i: repeat i 2 [i]]
```
Stack trace:
```red
*** Runtime Error 1: access violation
*** in file: .../runtime/datatypes/block.reds
*** at line: 50
***
*** stack: red/block/rs-head 026CF794h
*** stack: red/interpreter/eval 026CF794h true
*** stack: red/natives/repeat* false
*** stack: red/interpreter/eval-arguments 027D0394h 029386B0h 029386B0h 00000000h 00000000h
*** stack: red/interpreter/eval-code 027D0394h 02938680h 029386B0h false 00000000h 00000000h 027D0394h
*** stack: red/interpreter/eval-expression 02938680h 029386B0h false false false
*** stack: red/interpreter/eval 026CF734h false
*** stack: red/object/make 026CF724h 026CF734h 32
*** stack: red/actions/make 026CF724h 026CF734h
*** stack: red/actions/make*
*** stack: red/interpreter/eval-arguments 027CFDB4h 02938584h 02938584h 00000000h 00000000h
*** stack: red/interpreter/eval-code 027CFDB4h 02938564h 02938584h false 00000000h 00000000h 027CFDB4h
*** stack: red/interpreter/eval-expression 02938564h 02938584h false false false
*** stack: red/interpreter/eval 026CF6F4h true
*** stack: red/natives/catch* true 1
*** stack: ctx419~try-do 006EA918h
*** stack: ctx419~do-command 006EA918h
*** stack: ctx419~eval-command 006EA918h
*** stack: ctx419~run 006EA918h
*** stack: ctx419~launch 006EA918h
*** stack: ctx437~launch 006EA418h
```
**Expected behavior**
Loop runs as usual, reactions are triggered when necessary.
**Platform version**
```
Red 0.6.4 for Windows built 31-Aug-2019/17:47:43+05:00 commit #b28d8f5
```
|
test
|
access violation on evaluation of repeat expression bounded to reactor s context describe the bug prerequisites reactive object that contains a word repeat expression that a iterates more than once over the word with the same spelling as in and b queries the said word now bind to and evaluate it or simply embed expression inside make reactor body in either case this results in access violation to reproduce red make reactor stack trace red runtime error access violation in file runtime datatypes block reds at line stack red block rs head stack red interpreter eval true stack red natives repeat false stack red interpreter eval arguments stack red interpreter eval code false stack red interpreter eval expression false false false stack red interpreter eval false stack red object make stack red actions make stack red actions make stack red interpreter eval arguments stack red interpreter eval code false stack red interpreter eval expression false false false stack red interpreter eval true stack red natives catch true stack try do stack do command stack eval command stack run stack launch stack launch expected behavior loop runs as usual reactions are triggered when necessary platform version red for windows built aug commit
| 1
|
51,520
| 12,743,305,095
|
IssuesEvent
|
2020-06-26 10:08:03
|
cake-build/frosting
|
https://api.github.com/repos/cake-build/frosting
|
closed
|
Fix iconUrl and licenseUrl in nuget packages
|
build help wanted
|
- Cake.Frosting
- [ ] icon
- Cake.Frosting.Template
- [ ] licence
- [ ] icon
To address:
```diff
- WARNING: The <iconUrl> element is deprecated.
- Consider using the <icon> element instead. https://aka.ms/deprecateIconUrl
- WARNING: The <licenseUrl> element is deprecated.
- Consider using the <license> element instead. Learn more: https://aka.ms/deprecateLicenseUrl.
```
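The warnings above can be checked mechanically before publishing. A hedged sketch that scans nuspec metadata for the deprecated elements and names their replacements (the inline XML is a hypothetical minimal example, not the real Cake.Frosting nuspec):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal .nuspec metadata for illustration only.
NUSPEC = """<package><metadata>
  <id>Cake.Frosting</id>
  <iconUrl>https://example.invalid/icon.png</iconUrl>
  <licenseUrl>https://example.invalid/license</licenseUrl>
</metadata></package>"""

# Deprecated element -> replacement element, per the NuGet warnings above.
DEPRECATED = {"iconUrl": "icon", "licenseUrl": "license"}

def find_deprecated(nuspec_xml: str) -> list:
    """Return (deprecated, replacement) pairs found in the nuspec."""
    root = ET.fromstring(nuspec_xml)
    return [
        (el.tag, DEPRECATED[el.tag])
        for el in root.iter()
        if el.tag in DEPRECATED
    ]

print(find_deprecated(NUSPEC))  # [('iconUrl', 'icon'), ('licenseUrl', 'license')]
```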
|
1.0
|
Fix iconUrl and licenseUrl in nuget packages - - Cake.Frosting
- [ ] icon
- Cake.Frosting.Template
- [ ] licence
- [ ] icon
To address:
```diff
- WARNING: The <iconUrl> element is deprecated.
- Consider using the <icon> element instead. https://aka.ms/deprecateIconUrl
- WARNING: The <licenseUrl> element is deprecated.
- Consider using the <license> element instead. Learn more: https://aka.ms/deprecateLicenseUrl.
```
|
non_test
|
fix iconurl and licenseurl in nuget packages cake frosting icon cake frosting template licence icon to address diff warning the element is deprecated consider using the element instead warning the element is deprecated consider using the element instead learn more
| 0
|
597,943
| 18,216,695,817
|
IssuesEvent
|
2021-09-30 05:51:31
|
kubesphere/console
|
https://api.github.com/repos/kubesphere/console
|
closed
|
Project Quota or Container Quota should not be greater than Workspace Quota.
|
kind/bug kind/feature kind/need-to-verify priority/medium
|
**Describe**
Project Quota should not be greater than Workspace Quota.
Container Quota should not be greater than Project Quota or Workspace Quota.
**Environment**
`kubespheredev/ks-console:latest`
**Preset conditions**
1、There is project 'pro1' in workspace 'wx-ws'
2、Workspace Quota as follow:limits.cpu 10 core, requests.cpu 1 core; limits.memory 10Gi, requests.memory 1 Gi
**To Reproduce**
Steps to reproduce the behavior:
1. Go to project 'pro1'
2. Click on 'Project settings'
3. Click on 'Base info'
4. Click on 'manage project' --> click on 'edit quota'
5. enter 2 in 'cpu Resource Request'
**Expected behavior**
There is a prompt on the page that Project Quota should not be greater than Workspace Quota, and the change cannot be saved.
**Actual behavior**

/kind feature
/assign @leoendless
/milestone 3.1.0
/priority medium
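The invariant behind this report — a child quota must not exceed its parent's quota — can be sketched as a simple validation. The values below are hypothetical, mirroring the preset conditions (workspace requests.cpu of 1 core, project request of 2 cores entered in step 5); this is not the KubeSphere console's actual validation code:

```python
def validate_quota(requested: float, parent_limit: float) -> bool:
    """A child quota is valid only if it does not exceed its parent's quota."""
    return requested <= parent_limit

# Hypothetical values mirroring the report: the workspace allows
# requests.cpu of 1 core, but the project requests 2 cores.
workspace_requests_cpu = 1   # workspace-level quota
project_requests_cpu = 2     # value entered in step 5

print(validate_quota(project_requests_cpu, workspace_requests_cpu))  # False
```

The expected behavior in the report is exactly this check at the UI layer: reject the save and show a prompt when validation returns False.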
|
1.0
|
Project Quota or Container Quota should not be greater than Workspace Quota. - **Describe**
Project Quota should not be greater than Workspace Quota.
Container Quota should not be greater than Project Quota or Workspace Quota.
**Environment**
`kubespheredev/ks-console:latest`
**Preset conditions**
1、There is project 'pro1' in workspace 'wx-ws'
2、Workspace Quota as follow:limits.cpu 10 core, requests.cpu 1 core; limits.memory 10Gi, requests.memory 1 Gi
**To Reproduce**
Steps to reproduce the behavior:
1. Go to project 'pro1'
2. Click on 'Project settings'
3. Click on 'Base info'
4. Click on 'manage project' --> click on 'edit quota'
5. enter 2 in 'cpu Resource Request'
**Expected behavior**
There is a prompt on the page that Project Quota should not be greater than Workspace Quota, and the change cannot be saved.
**Actual behavior**

/kind feature
/assign @leoendless
/milestone 3.1.0
/priority medium
|
non_test
|
project quota or container quota should not greater than workspace quota describe project quota should not greater than workspace quota container quota should not greater than project quota、 workspace quota environment kubespheredev ks console latest preset conditions 、there is project in workspace wx ws 、workspace quota as follow:limits cpu core requests cpu core limits memory requests memory gi to reproduce steps to reproduce the behavior go to project click on project settings click on base info click on manage project click on edit quota enter in cpu resource request expected behavior there is prompt on the page:project quota should not greater than workspace quota and can not save actual behavior kind feature assign leoendless milestone priority medium
| 0
|
280,633
| 24,319,860,109
|
IssuesEvent
|
2022-09-30 09:44:51
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[test-triage] chip_sw_rstmgr_alert_info
|
Component:TestTriage
|
### Hierarchy of regression failure
Chip Level
### Failure Description
```
UVM_ERROR @ 3178.232878 us: (cip_base_scoreboard.sv:431) [uvm_test_top.env.scoreboard] Check failed item.d_error == exp_d_error (1 [0x1] vs 0 [0x0]) On interface chip_reg_block, TL item: req: (cip_tl_seq_item@107135) { a_addr: 'h200042b8 a_data: 'h0 a_mask: 'hf a_size: 'h2 a_param: 'h0 a_source: 'h0 a_opcode: 'h4 a_user: 'h2662a d_param: 'h0 d_source: 'h0 d_data: 'hffffffff d_size: 'h2 d_opcode: 'h1 d_error: 'h1 d_sink: 'h0 d_user: 'heaa a_source_is_overridden: 'h0 a_valid_delay: 'h0 d_valid_delay: 'h0 a_valid_len: 'h0 d_valid_len: 'h0 req_abort_after_a_valid_len: 'h0 rsp_abort_after_d_valid_len: 'h0 req_completed: 'h0 rsp_completed: 'h0 tl_intg_err_type: TlIntgErrNone max_ecc_errors: 'h3 }
, unmapped_err: 0, mem_access_err: 0, bus_intg_err: 0, byte_wr_err: 0, csr_size_err: 0, tl_item_err: 0, write_w_instr_type_err: 0, cfg.tl_mem_access_gated: 0 ecc_err: 0
UVM_INFO @ 3178.232878 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
```
### Steps to Reproduce
- Commit hash where failure was observed 0c214bdb3
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_rstmgr_alert_info --build-seed 739773536 --waves -v h`
### Tests with similar or related failures
- [ ] chip_sw_rstmgr_alert_info
- [ ] chip_sw_clkmgr_escalation_reset
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_sw_lc_ctrl_transition
- [ ] chip_sw_lc_walkthrough_dev
- [ ] chip_sw_lc_walkthrough_prod
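For triage across the similar failures listed above, the failing check can be pulled out of the UVM_ERROR line mechanically. A hedged sketch (the regex and field format are assumptions inferred from the line quoted in the failure description, not part of the dvsim tooling):

```python
import re

# The UVM_ERROR line quoted in the failure description, truncated to the
# fields this sketch parses.
LOG_LINE = ("UVM_ERROR @ 3178.232878 us: (cip_base_scoreboard.sv:431) "
            "[uvm_test_top.env.scoreboard] Check failed item.d_error == "
            "exp_d_error (1 [0x1] vs 0 [0x0])")

# Assumed format: "Check failed <lhs> == <rhs> (<actual> [...] vs <expected> [...])"
PATTERN = re.compile(
    r"Check failed (\S+) == (\S+) \((\d+) \[\S+\] vs (\d+) \[\S+\]\)"
)

match = PATTERN.search(LOG_LINE)
lhs, rhs, actual, expected = match.groups()
print(lhs, rhs, actual, expected)  # item.d_error exp_d_error 1 0
```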
|
1.0
|
[test-triage] chip_sw_rstmgr_alert_info - ### Hierarchy of regression failure
Chip Level
### Failure Description
```
UVM_ERROR @ 3178.232878 us: (cip_base_scoreboard.sv:431) [uvm_test_top.env.scoreboard] Check failed item.d_error == exp_d_error (1 [0x1] vs 0 [0x0]) On interface chip_reg_block, TL item: req: (cip_tl_seq_item@107135) { a_addr: 'h200042b8 a_data: 'h0 a_mask: 'hf a_size: 'h2 a_param: 'h0 a_source: 'h0 a_opcode: 'h4 a_user: 'h2662a d_param: 'h0 d_source: 'h0 d_data: 'hffffffff d_size: 'h2 d_opcode: 'h1 d_error: 'h1 d_sink: 'h0 d_user: 'heaa a_source_is_overridden: 'h0 a_valid_delay: 'h0 d_valid_delay: 'h0 a_valid_len: 'h0 d_valid_len: 'h0 req_abort_after_a_valid_len: 'h0 rsp_abort_after_d_valid_len: 'h0 req_completed: 'h0 rsp_completed: 'h0 tl_intg_err_type: TlIntgErrNone max_ecc_errors: 'h3 }
, unmapped_err: 0, mem_access_err: 0, bus_intg_err: 0, byte_wr_err: 0, csr_size_err: 0, tl_item_err: 0, write_w_instr_type_err: 0, cfg.tl_mem_access_gated: 0 ecc_err: 0
UVM_INFO @ 3178.232878 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
```
### Steps to Reproduce
- Commit hash where failure was observed 0c214bdb3
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i chip_sw_rstmgr_alert_info --build-seed 739773536 --waves -v h`
### Tests with similar or related failures
- [ ] chip_sw_rstmgr_alert_info
- [ ] chip_sw_clkmgr_escalation_reset
- [ ] chip_sw_flash_ctrl_lc_rw_en
- [ ] chip_sw_lc_ctrl_transition
- [ ] chip_sw_lc_walkthrough_dev
- [ ] chip_sw_lc_walkthrough_prod
|
test
|
chip sw rstmgr alert info hierarchy of regression failure chip level failure description uvm error us cip base scoreboard sv check failed item d error exp d error vs on interface chip reg block tl item req cip tl seq item a addr a data a mask hf a size a param a source a opcode a user d param d source d data hffffffff d size d opcode d error d sink d user heaa a source is overridden a valid delay d valid delay a valid len d valid len req abort after a valid len rsp abort after d valid len req completed rsp completed tl intg err type tlintgerrnone max ecc errors unmapped err mem access err bus intg err byte wr err csr size err tl item err write w instr type err cfg tl mem access gated ecc err uvm info us uvm report catcher svh uvm report catcher summary steps to reproduce commit hash where failure was observed dvsim invocation command to reproduce the failure inclusive of build and run seeds util dvsim dvsim py hw top earlgrey dv chip sim cfg hjson i chip sw rstmgr alert info build seed waves v h tests with similar or related failures chip sw rstmgr alert info chip sw clkmgr escalation reset chip sw flash ctrl lc rw en chip sw lc ctrl transition chip sw lc walkthrough dev chip sw lc walkthrough prod
| 1
|
243,320
| 18,682,768,616
|
IssuesEvent
|
2021-11-01 08:29:50
|
pnp/generator-teams
|
https://api.github.com/repos/pnp/generator-teams
|
closed
|
Feature request: MKDocs for documentation and tutorials
|
good first issue request: documentation work in progress
|
### 💡 Idea
I'd like to see tutorials and docs on a dedicated GH page using MKDocs similar to other PnP initiatives
### Is your feature related to a bug
no
### Alternatives
_No response_
### Additional Info
_No response_
|
1.0
|
Feature request: MKDocs for documentation and tutorials - ### 💡 Idea
I'd like to see tutorials and docs on a dedicated GH page using MKDocs similar to other PnP initiatives
### Is your feature related to a bug
no
### Alternatives
_No response_
### Additional Info
_No response_
|
non_test
|
feature request mkdocs for documentation and tutorials 💡 idea i d like to see tutorials and docs on a dedicated gh page using mkdocs similar to other pnp initiatives is your feature related to a bug no alternatives no response additional info no response
| 0
|
314,315
| 26,992,719,481
|
IssuesEvent
|
2023-02-09 21:21:51
|
ernst-fanfan/Hanman
|
https://api.github.com/repos/ernst-fanfan/Hanman
|
closed
|
Dictionary Functionality
|
enhancement functional testing
|
Store a list of words with four letters or more (JSON)
Store the list in a dictionary object
Randomly select a word from the list
Return selected word
Functional test
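The steps listed in this issue can be sketched in a few lines. The JSON content below is hypothetical (the real app would load its word list from a file), and the seeded RNG is only there to keep the example reproducible:

```python
import json
import random

# Hypothetical word list; the real app would load this from a JSON file.
WORDS_JSON = '["house", "plant", "river", "candle"]'

def load_dictionary(raw_json: str) -> dict:
    """Store the list in a dictionary object, keeping words of four letters or more."""
    words = [w for w in json.loads(raw_json) if len(w) >= 4]
    return {"words": words}

def pick_word(dictionary: dict, rng: random.Random) -> str:
    """Randomly select and return a word from the list."""
    return rng.choice(dictionary["words"])

dictionary = load_dictionary(WORDS_JSON)
word = pick_word(dictionary, random.Random(0))
print(word in dictionary["words"])  # True
```

The functional test mentioned in the issue would assert exactly these properties: every stored word has four or more letters, and the selected word always comes from the stored list.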
|
1.0
|
Dictionary Functionality - Store a list of words with four letters or more (JSON)
Store the list in a dictionary object
Randomly select a word from the list
Return selected word
Functional test
|
test
|
dictionary functionality store a list of words with four letters or more json store the list in a dictionary object randomly select a word from the list return selected word functional test
| 1
|
125,296
| 10,339,683,490
|
IssuesEvent
|
2019-09-03 19:57:36
|
phetsims/QA
|
https://api.github.com/repos/phetsims/QA
|
closed
|
Dev test: Energy Skate Park 1.0.0-dev.4
|
QA:dev-test
|
<!---
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ PhET Development Test Template ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Notes and Instructions for Developers:
1. Comments indicate whether something can be omitted or edited.
2. Please check the comments before trying to omit or edit something.
3. Please don't rearrange the sections.
-->
@arouinfar, @ariel-phet energy-skate-park/1.0.0-dev.4 is ready for dev testing. This is the first dev test of energy-skate-park, although the sim is quite similar to energy-skate-park-basics. The largest changes from the basics version are that you can record skater data (Measure screen and Graphs screens) play back skater states from graphs (Graphs screen), as well as change gravity and skater mass. Please test these new features thoroughly in particular. It would also be helpful to get a sense of sim performance, please test this sim on an array of slow platforms and report results. Finally, document any issues in https://github.com/phetsims/energy-skate-park/issues and link to this issue. Thanks!
Assigning @ariel-phet for prioritization.
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////// Section 1: General Dev Testing [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>General Dev Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Click every single button.
- If there is sound, make sure it works.
- Make sure you can't lose anything.
- Play with the sim normally.
- Try to break the sim.
- If there is a console available, check for errors and include them in the Problem Description.
- Run through the string tests on at least one platform, especially if it is about to go to rc.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Focus and Special Instructions</h3>
[Provide further instructions here. If you have any further tests you want done or specific platforms you want tested, list them here. Any behaviors you want QA to pay special attention to should be listed here.]
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>General Dev Test Platforms</h3>
- [x] Latest macOS, Chrome and Safari
- [x] Latest iOS, Safari
- [x] Windows 10, all browsers
- [x] Latest Chrome OS, Chrome
These issues should use either the label "status:ready-for-qa" or "status:ready-for-review." If it is ready for QA then close the issue if fixed. If ready for review then leave open and assign back to the developer.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Simulation](https://phet-dev.colorado.edu/html/energy-skate-park/1.0.0-dev.4/phet/energy-skate-park_all_phet.html)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////// Section 0: FAQs for QA Members [DO NOT OMIT, DO NOT EDIT]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>FAQs for QA Members</b></summary>
<br>
<!--- Subsection 0.1: There are multiple tests in this issue... What should I test first? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>There are multiple tests in this issue... Which test should I do first?</i></summary>
Test in order! Test the first thing first, the second thing second, and so on.
</details>
<br>
<!--- Subsection 0.2: How should I format my issue? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>How should I format my issue?</i></summary>
Here's a template for making issues:
<b>Test Device</b>
blah
<b>Operating System</b>
blah
<b>Browser</b>
blah
<b>Problem Description</b>
blah
<b>Steps to Reproduce</b>
blah
<b>Visuals</b>
blah
<details>
<summary><b>Troubleshooting Information</b></summary>
blah
</details>
</details>
<br>
<!--- Subsection 0.3: Who should I assign? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>Who should I assign?</i></summary>
We typically assign the developer who opened the issue in the QA repository.
</details>
<br>
<!--- Subsection 0.4: My question isn't in here... What should I do? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>My question isn't in here... What should I do?</i></summary>
You should:
1. Consult the [QA Book](link).
2. Google it.
3. Ask Katie.
4. Ask a developer.
5. Google it again.
6. Cry.
</details>
<br>
<hr>
</details>
|
1.0
|
Dev test: Energy Skate Park 1.0.0-dev.4 - <!---
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~ PhET Development Test Template ~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Notes and Instructions for Developers:
1. Comments indicate whether something can be omitted or edited.
2. Please check the comments before trying to omit or edit something.
3. Please don't rearrange the sections.
-->
@arouinfar, @ariel-phet energy-skate-park/1.0.0-dev.4 is ready for dev testing. This is the first dev test of energy-skate-park, although the sim is quite similar to energy-skate-park-basics. The largest changes from the basics version are that you can record skater data (Measure screen and Graphs screens) play back skater states from graphs (Graphs screen), as well as change gravity and skater mass. Please test these new features thoroughly in particular. It would also be helpful to get a sense of sim performance, please test this sim on an array of slow platforms and report results. Finally, document any issues in https://github.com/phetsims/energy-skate-park/issues and link to this issue. Thanks!
Assigning @ariel-phet for prioritization.
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////// Section 1: General Dev Testing [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>General Dev Test</b></summary>
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>What to Test</h3>
- Click every single button.
- If there is sound, make sure it works.
- Make sure you can't lose anything.
- Play with the sim normally.
- Try to break the sim.
- If there is a console available, check for errors and include them in the Problem Description.
- Run through the string tests on at least one platform, especially if it is about to go to rc.
<!--- [CAN BE OMITTED, SHOULD BE EDITED IF NOT OMITTED] -->
<h3>Focus and Special Instructions</h3>
[Provide further instructions here. If you have any further tests you want done or specific platforms you want tested, list them here. Any behaviors you want QA to pay special attention to should be listed here.]
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>General Dev Test Platforms</h3>
- [x] Latest macOS, Chrome and Safari
- [x] Latest iOS, Safari
- [x] Windows 10, all browsers
- [x] Latest Chrome OS, Chrome
These issues should use either the label "status:ready-for-qa" or "status:ready-for-review." If it is ready for QA then close the issue if fixed. If ready for review then leave open and assign back to the developer.
<!--- [DO NOT OMIT, CAN BE EDITED] -->
<h3>Link(s)</h3>
- **[Simulation](https://phet-dev.colorado.edu/html/energy-skate-park/1.0.0-dev.4/phet/energy-skate-park_all_phet.html)**
<hr>
</details>
<!---
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////// Section 0: FAQs for QA Members [DO NOT OMIT, DO NOT EDIT]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<details>
<summary><b>FAQs for QA Members</b></summary>
<br>
<!--- Subsection 0.1: There are multiple tests in this issue... What should I test first? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>There are multiple tests in this issue... Which test should I do first?</i></summary>
Test in order! Test the first thing first, the second thing second, and so on.
</details>
<br>
<!--- Subsection 0.2: How should I format my issue? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>How should I format my issue?</i></summary>
Here's a template for making issues:
<b>Test Device</b>
blah
<b>Operating System</b>
blah
<b>Browser</b>
blah
<b>Problem Description</b>
blah
<b>Steps to Reproduce</b>
blah
<b>Visuals</b>
blah
<details>
<summary><b>Troubleshooting Information</b></summary>
blah
</details>
</details>
<br>
<!--- Subsection 0.3: Who should I assign? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>Who should I assign?</i></summary>
We typically assign the developer who opened the issue in the QA repository.
</details>
<br>
<!--- Subsection 0.4: My question isn't in here... What should I do? [DO NOT OMIT, DO NOT EDIT] -->
<details>
<summary><i>My question isn't in here... What should I do?</i></summary>
You should:
1. Consult the [QA Book](link).
2. Google it.
3. Ask Katie.
4. Ask a developer.
5. Google it again.
6. Cry.
</details>
<br>
<hr>
</details>
|
test
|
dev test energy skate park dev phet development test template notes and instructions for developers comments indicate whether something can be omitted or edited please check the comments before trying to omit or edit something please don t rearrange the sections arouinfar ariel phet energy skate park dev is ready for dev testing this is the first dev test of energy skate park although the sim is quite similar to energy skate park basics the largest changes from the basics version are that you can record skater data measure screen and graphs screens play back skater states from graphs graphs screen as well as change gravity and skater mass please test these new features thoroughly in particular it would also be helpful to get a sense of sim performance please test this sim on an array of slow platforms and report results finally document any issues in and link to this issue thanks assigning ariel phet for prioritization section general dev testing general dev test what to test click every single button if there is sound make sure it works make sure you can t lose anything play with the sim normally try to break the sim if there is a console available check for errors and include them in the problem description rung through the string tests on at least one platform especially if it is about to go to rc focus and special instructions general dev test platforms latest macos chrome and safari latest ios safari windows all browsers latest chrome os chrome these issues should have either use the labels status ready for qa or status ready for review if it is ready for qa then close the issue if fixed if ready for review then leave open and assign back to the developer link s section faqs for qa members faqs for qa members there are multiple tests in this issue which test should i do first test in order test the first thing first the second thing second and so on how should i format my issue here s a template for making issues test device blah operating system blah browser 
blah problem description blah steps to reproduce blah visuals blah troubleshooting information blah who should i assign we typically assign the developer who opened the issue in the qa repository my question isn t in here what should i do you should consult the link google it ask katie ask a developer google it again cry
| 1
|
134,249
| 18,454,565,420
|
IssuesEvent
|
2021-10-15 14:52:21
|
bgoonz/searchAwesome
|
https://api.github.com/repos/bgoonz/searchAwesome
|
opened
|
CVE-2021-3795 (High) detected in semver-regex-2.0.0.tgz
|
security vulnerability
|
## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-stacks/package.json</p>
<p>Path to vulnerable library: searchAwesome/clones/awesome-stacks/node_modules/semver-regex/package.json,searchAwesome/clones/awesome-wpo/website/node_modules/semver-regex/package.json,searchAwesome/clones/Mind-Expanding-Books/app/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-sharp-2.0.23.tgz (Root Library)
- imagemin-mozjpeg-8.0.0.tgz
- mozjpeg-6.0.1.tgz
- bin-wrapper-4.1.0.tgz
- bin-version-check-4.0.0.tgz
- bin-version-3.0.0.tgz
- find-versions-3.0.0.tgz
- :x: **semver-regex-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/cb1b8421c464b43b24d4816929e575612a00cd49">cb1b8421c464b43b24d4816929e575612a00cd49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3795 (High) detected in semver-regex-2.0.0.tgz - ## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>semver-regex-2.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-2.0.0.tgz</a></p>
<p>Path to dependency file: searchAwesome/clones/awesome-stacks/package.json</p>
<p>Path to vulnerable library: searchAwesome/clones/awesome-stacks/node_modules/semver-regex/package.json,searchAwesome/clones/awesome-wpo/website/node_modules/semver-regex/package.json,searchAwesome/clones/Mind-Expanding-Books/app/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-sharp-2.0.23.tgz (Root Library)
- imagemin-mozjpeg-8.0.0.tgz
- mozjpeg-6.0.1.tgz
- bin-wrapper-4.1.0.tgz
- bin-version-check-4.0.0.tgz
- bin-version-3.0.0.tgz
- find-versions-3.0.0.tgz
- :x: **semver-regex-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bgoonz/searchAwesome/commit/cb1b8421c464b43b24d4816929e575612a00cd49">cb1b8421c464b43b24d4816929e575612a00cd49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution: semver-regex - 3.1.3,4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in semver regex tgz cve high severity vulnerability vulnerable library semver regex tgz regular expression for matching semver versions library home page a href path to dependency file searchawesome clones awesome stacks package json path to vulnerable library searchawesome clones awesome stacks node modules semver regex package json searchawesome clones awesome wpo website node modules semver regex package json searchawesome clones mind expanding books app node modules semver regex package json dependency hierarchy gatsby plugin sharp tgz root library imagemin mozjpeg tgz mozjpeg tgz bin wrapper tgz bin version check tgz bin version tgz find versions tgz x semver regex tgz vulnerable library found in head commit a href found in base branch master vulnerability details semver regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution semver regex step up your open source security game with whitesource
| 0
|
90,173
| 8,230,503,341
|
IssuesEvent
|
2018-09-07 13:10:17
|
italia/spid
|
https://api.github.com/repos/italia/spid
|
closed
|
Metadata check - Comune di Calatabiano
|
metadata nuovo md test
|
Good morning,
on behalf of the Comune di Calatabiano, we request verification of the metadata, generated via Marco Milesi's WP SPID Italia plugin, published at the URL: https://comune.calatabiano.ct.it/wp-content/plugins/wp-spid-italia/lib/www/module.php/saml/sp/metadata.php/default-sp
Thank you and kind regards
Datanet srl
|
1.0
|
Metadata check - Comune di Calatabiano - Good morning,
on behalf of the Comune di Calatabiano, we request verification of the metadata, generated via Marco Milesi's WP SPID Italia plugin, published at the URL: https://comune.calatabiano.ct.it/wp-content/plugins/wp-spid-italia/lib/www/module.php/saml/sp/metadata.php/default-sp
Thank you and kind regards
Datanet srl
|
test
|
metadata check comune di calatabiano good morning on behalf of the comune di calatabiano we request verification of the metadata generated via the wp spid italia plugin by marco milesi published at the url thank you and kind regards datanet srl
| 1
|
215,811
| 16,706,452,540
|
IssuesEvent
|
2021-06-09 10:32:31
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
closed
|
Cluster "Import Import" navigation title
|
[zube]: To Test area/cluster kind/bug
|
Importing "any Kubernetes cluster" shows strange navigation title `Cluster: Import Import`
Going to: Dashboard -> Cluster Management -> Import Existing -> Import

Tested on: master-head (62aef76)
|
1.0
|
Cluster "Import Import" navigation title - Importing "any Kubernetes cluster" shows strange navigation title `Cluster: Import Import`
Going to: Dashboard -> Cluster Management -> Import Existing -> Import

Tested on: master-head (62aef76)
|
test
|
cluster import import navigation title importing any kubernetes cluster shows strange navigation title cluster import import going to dashboard cluster management import existing import tested on master head
| 1
|
278,576
| 30,702,358,289
|
IssuesEvent
|
2023-07-27 01:23:20
|
nidhi7598/linux-3.0.35
|
https://api.github.com/repos/nidhi7598/linux-3.0.35
|
closed
|
CVE-2023-1998 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2023-1998 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel allows userspace processes to enable mitigations by calling prctl with PR_SET_SPECULATION_CTRL which disables the speculation feature as well as by using seccomp. We had noticed that on VMs of at least one major cloud provider, the kernel still left the victim process exposed to attacks in some cases even after enabling the spectre-BTI mitigation with prctl. The same behavior can be observed on a bare-metal machine when forcing the mitigation to IBRS on boot command line.
This happened because when plain IBRS was enabled (not enhanced IBRS), the kernel had some logic that determined that STIBP was not needed. The IBRS bit implicitly protects against cross-thread branch target injection. However, with legacy IBRS, the IBRS bit was cleared on returning to userspace, due to performance reasons, which disabled the implicit STIBP and left userspace threads vulnerable to cross-thread branch target injection against which STIBP protects.
<p>Publish Date: 2023-04-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1998>CVE-2023-1998</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1998">https://www.linuxkernelcves.com/cves/CVE-2023-1998</a></p>
<p>Release Date: 2023-04-12</p>
<p>Fix Resolution: v6.1.16,v6.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-1998 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2023-1998 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel allows userspace processes to enable mitigations by calling prctl with PR_SET_SPECULATION_CTRL which disables the speculation feature as well as by using seccomp. We had noticed that on VMs of at least one major cloud provider, the kernel still left the victim process exposed to attacks in some cases even after enabling the spectre-BTI mitigation with prctl. The same behavior can be observed on a bare-metal machine when forcing the mitigation to IBRS on boot command line.
This happened because when plain IBRS was enabled (not enhanced IBRS), the kernel had some logic that determined that STIBP was not needed. The IBRS bit implicitly protects against cross-thread branch target injection. However, with legacy IBRS, the IBRS bit was cleared on returning to userspace, due to performance reasons, which disabled the implicit STIBP and left userspace threads vulnerable to cross-thread branch target injection against which STIBP protects.
<p>Publish Date: 2023-04-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1998>CVE-2023-1998</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1998">https://www.linuxkernelcves.com/cves/CVE-2023-1998</a></p>
<p>Release Date: 2023-04-12</p>
<p>Fix Resolution: v6.1.16,v6.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the linux kernel allows userspace processes to enable mitigations by calling prctl with pr set speculation ctrl which disables the speculation feature as well as by using seccomp we had noticed that on vms of at least one major cloud provider the kernel still left the victim process exposed to attacks in some cases even after enabling the spectre bti mitigation with prctl the same behavior can be observed on a bare metal machine when forcing the mitigation to ibrs on boot command line this happened because when plain ibrs was enabled not enhanced ibrs the kernel had some logic that determined that stibp was not needed the ibrs bit implicitly protects against cross thread branch target injection however with legacy ibrs the ibrs bit was cleared on returning to userspace due to performance reasons which disabled the implicit stibp and left userspace threads vulnerable to cross thread branch target injection against which stibp protects publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
72,009
| 7,275,396,073
|
IssuesEvent
|
2018-02-21 13:27:55
|
Kademi/kademi-dev
|
https://api.github.com/repos/Kademi/kademi-dev
|
closed
|
Video asset doesn't play preview in selector
|
Ready to Test QA bug
|

http://vladtest30b.admin.kademi-ci.co/websites/vlad30bweb/version1/programs/p1/c1/m1/#files-tab
1. Go to page
2. Add video component
3. Push change video
4. Click on assets tab
5. Upload some video
6. After it allows you to press play - press play
7. Video will not play and gives an error in console
|
1.0
|
Video asset doesn't play preview in selector - 
http://vladtest30b.admin.kademi-ci.co/websites/vlad30bweb/version1/programs/p1/c1/m1/#files-tab
1. Go to page
2. Add video component
3. Push change video
4. Click on assets tab
5. Upload some video
6. After it allows you to press play - press play
7. Video will not play and gives an error in console
|
test
|
video asset doesn t play preview in selector go to page add video component push change video click on assets tab upload some video after it allows you to press play press play video will not play and gives an error in console
| 1
|
118,906
| 10,014,885,992
|
IssuesEvent
|
2019-07-15 18:40:07
|
Brycey92/Galaxy-Craft-Issues
|
https://api.github.com/repos/Brycey92/Galaxy-Craft-Issues
|
closed
|
Explosions cause block damage
|
fixed - needs testing
|
**Pack version**
1.0.2-1
**Describe the bug**
Explosions cause block damage everywhere, though sometimes not in claims.
**Expected behavior**
Explosions should cause block damage nowhere.
|
1.0
|
Explosions cause block damage - **Pack version**
1.0.2-1
**Describe the bug**
Explosions cause block damage everywhere, though sometimes not in claims.
**Expected behavior**
Explosions should cause block damage nowhere.
|
test
|
explosions cause block damage pack version describe the bug explosions cause block damage everywhere though sometimes not in claims expected behavior explosions should cause block damage nowhere
| 1
|
289,061
| 24,955,403,961
|
IssuesEvent
|
2022-11-01 11:15:10
|
ethersphere/bee
|
https://api.github.com/repos/ethersphere/bee
|
closed
|
`TestSendChunkAndTimeoutinReceivingReceipt` flakes
|
flaky-test issue
|
```
2022-06-21T06:08:31.5327220Z === RUN TestSendChunkAndTimeoutinReceivingReceipt
2022-06-21T06:08:31.5327690Z pusher_test.go:341: chunk not syned error expected
2022-06-21T06:08:31.5328240Z --- FAIL: TestSendChunkAndTimeoutinReceivingReceipt (1.08s)
```
|
1.0
|
`TestSendChunkAndTimeoutinReceivingReceipt` flakes - ```
2022-06-21T06:08:31.5327220Z === RUN TestSendChunkAndTimeoutinReceivingReceipt
2022-06-21T06:08:31.5327690Z pusher_test.go:341: chunk not syned error expected
2022-06-21T06:08:31.5328240Z --- FAIL: TestSendChunkAndTimeoutinReceivingReceipt (1.08s)
```
|
test
|
testsendchunkandtimeoutinreceivingreceipt flakes run testsendchunkandtimeoutinreceivingreceipt pusher test go chunk not syned error expected fail testsendchunkandtimeoutinreceivingreceipt
| 1
|
222,851
| 17,094,544,991
|
IssuesEvent
|
2021-07-08 23:01:44
|
woocommerce/woocommerce-admin
|
https://api.github.com/repos/woocommerce/woocommerce-admin
|
closed
|
Hook References: Add Slot/Fill to the reference guide
|
category: extensibility cooldown period type: documentation
|
The hook reference [script](https://github.com/woocommerce/woocommerce-admin/blob/main/bin/hook-reference/index.js) compiles filters but not Slot/Fill.
There is currently only one slotFill in the Navigation, but more may come. It would be great to amend the data gathering script to handle slotFills too so developers can access all the ways they can hook into WC Admin.
|
1.0
|
Hook References: Add Slot/Fill to the reference guide - The hook reference [script](https://github.com/woocommerce/woocommerce-admin/blob/main/bin/hook-reference/index.js) compiles filters but not Slot/Fill.
There is currently only one slotFill in the Navigation, but more may come. It would be great to amend the data gathering script to handle slotFills too so developers can access all the ways they can hook into WC Admin.
|
non_test
|
hook references add slot fill to the reference guide the hook reference compiles filters but not slot fill there is currently only one slotfill in the navigation but more may come it would be great to amend the data gathering script to handle slotfills too so developers can access all the ways they can hook into wc admin
| 0
|
317,113
| 27,213,996,219
|
IssuesEvent
|
2023-02-20 19:23:35
|
PalisadoesFoundation/talawa-api
|
https://api.github.com/repos/PalisadoesFoundation/talawa-api
|
closed
|
Test: tests/resolvers/Query
|
bug good first issue points 01 test
|
**Describe the bug**
Most of the test for the Query doesn't have 100% test coverage, it doesn't cover the error handling when IN_PRODUCTION=true
**Expected behavior**
For IN_PRODUCTION=true, it should have 100% test coverage
**Actual behavior**
some of the tests do not have 100% test coverage for IN_PRODUCTION=true
**Screenshots**

**Additional details**
|
1.0
|
Test: tests/resolvers/Query - **Describe the bug**
Most of the test for the Query doesn't have 100% test coverage, it doesn't cover the error handling when IN_PRODUCTION=true
**Expected behavior**
For IN_PRODUCTION=true, it should have 100% test coverage
**Actual behavior**
some of the tests do not have 100% test coverage for IN_PRODUCTION=true
**Screenshots**

**Additional details**
|
test
|
test tests resolvers query describe the bug most of the test for the query doesn t have test coverage it doesn t cover the error handling when in production true expected behavior for in production true it should have test coverage actual behavior some of the tests do not have test coverage for in production true screenshots additional details
| 1
|
15,217
| 3,933,009,718
|
IssuesEvent
|
2016-04-25 17:39:34
|
PolymerElements/paper-dialog
|
https://api.github.com/repos/PolymerElements/paper-dialog
|
closed
|
iron-overlay-closed documentation error
|
documentation
|
### Description
The documentation specifies that the event for iron-overlay-closed contains a closingReason attribute (or is that supposed to be the name of the callback parameter?). However, the correct attribute is ``detail``, which contains the ``canceled`` attribute.
Might be a good idea to also document the ``confirmed`` attribute.
### Steps to reproduce
1. Put a `paper-dialog` element in the page with cancel/ok buttons as per the demo example
2. Open the dialog
3. Click the `OK` button.
### Browsers Affected
- [X] Chrome
|
1.0
|
iron-overlay-closed documentation error - ### Description
The documentation specifies that the event for iron-overlay-closed contains a closingReason attribute (or is that supposed to be the name of the callback parameter?). However, the correct attribute is ``detail``, which contains the ``canceled`` attribute.
Might be a good idea to also document the ``confirmed`` attribute.
### Steps to reproduce
1. Put a `paper-dialog` element in the page with cancel/ok buttons as per the demo example
2. Open the dialog
3. Click the `OK` button.
### Browsers Affected
- [X] Chrome
|
non_test
|
iron overlay closed documentation error description the documentation specifies that the event for iron overlay closed contains a closingreason attribute or is that supposed to be the name of the callback parameter however the correct attribute is detail which contains the canceled attribute might be a good idea to also document the confirmed attribute steps to reproduce put a paper dialog element in the page with cancel ok buttons as per the demo example open the dialog click the ok button browsers affected chrome
| 0
|
342,280
| 30,612,207,415
|
IssuesEvent
|
2023-07-23 18:53:42
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix array.test_array__imod__
|
Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5601735286/jobs/10246033624"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix array.test_array__imod__ - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5601735286/jobs/10246033624"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5638037172"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix array test array imod jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 1
|
315,109
| 27,046,436,851
|
IssuesEvent
|
2023-02-13 10:06:33
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Security Solution] The isolate --help command shows error when the user is not having privilege for host isolation
|
bug impact:medium Team: SecuritySolution Team:Defend Workflows QA:Ready for Testing OLM Sprint v8.6.0 v8.7.0
|
**Description:**
The isolate --help command shows error when the user is not having privilege for host isolation
**Build Details:**
```
VERSION: 8.6.0 BC5
BUILD: 58693
COMMIT: ed40c16ce9999cc47ad55c11bb097d2e443b31a6
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in
**Steps to Reproduce:**
1. Create a role with Custom privileges
2. Set Kibana privilege for Host Isolation to NONE
3. Endpoint alerts should be generated
4. Try to isolate the endpoint
5. Observe, the error message displayed
**Actual Result:**
The isolate --help command shows error when the user is not having privilege for host isolation
**Expected Result:**
The isolate --help command should not show an error when the user is not having privilege for host isolation as it is the supported command
**Screenshot:**

**Logs:**
N/A
|
1.0
|
[Security Solution] The isolate --help command shows error when the user is not having privilege for host isolation - **Description:**
The isolate --help command shows error when the user is not having privilege for host isolation
**Build Details:**
```
VERSION: 8.6.0 BC5
BUILD: 58693
COMMIT: ed40c16ce9999cc47ad55c11bb097d2e443b31a6
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in
**Steps to Reproduce:**
1. Create a role with Custom privileges
2. Set Kibana privilege for Host Isolation to NONE
3. Endpoint alerts should be generated
4. Try to isolate the endpoint
5. Observe, the error message displayed
**Actual Result:**
The isolate --help command shows error when the user is not having privilege for host isolation
**Expected Result:**
The isolate --help command should not show an error when the user is not having privilege for host isolation as it is the supported command
**Screenshot:**

**Logs:**
N/A
|
test
|
the isolate help command shows error when the user is not having privilege for host isolation description the isolate help command shows error when the user is not having privilege for host isolation build details version build commit browser details all preconditions kibana user should be logged in steps to reproduce create a role with custom privileges set kibana privilege for host isolation to none endpoint alerts should be generated try to isolate the endpoint observe the error message displayed actual result the isolate help command shows error when the user is not having privilege for host isolation expected result the isolate help command should not show an error when the user is not having privilege for host isolation as it is the supported command screenshot logs n a
| 1
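The normalized-text column in these records (the lowercase, punctuation-free string that precedes each binary label) appears to be derived from the raw title and body. A minimal sketch of such a normalization step, assuming simple regex-based cleaning (`normalize` is a hypothetical name; the actual pipeline is not included in this dump):

```python
import re

def normalize(text: str) -> str:
    """Lowercase, drop URLs, keep letters only, collapse whitespace.

    Assumed reconstruction of the cleaning behind the normalized-text
    column in these records; the real preprocessing code is not shown.
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs entirely
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters only (digits, punctuation removed)
    return " ".join(text.split())              # collapse runs of whitespace

print(normalize("The isolate --help command shows error (VERSION: 8.6.0)"))
# → the isolate help command shows error version
```

This matches the visible pattern in the rows above, where version numbers and markup vanish from the normalized field (e.g. "version build commit" with no digits).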
|
680,135
| 23,260,144,271
|
IssuesEvent
|
2022-08-04 12:52:31
|
thesaurus-linguae-aegyptiae/tla-web
|
https://api.github.com/repos/thesaurus-linguae-aegyptiae/tla-web
|
closed
|
Sentence display: untyped #...# comments are not displayed
|
bug high priority essential
|
https://tlabeta.bbaw.de/sentence/IBcAZuZHQFUieE9hpnUeIQK3zA8
http://localhost:8080/sentence/IBcAZuZHQFUieE9hpnUeIQK3zA8
Raw data:
http://localhost:9200/_search?q=id:IBcAZuZHQFUieE9hpnUeIQK3zA8
```
"tokens": [
{
"id": "IBgAZB2IqBQiiE0Wny7pzJl8amo",
"type": "desc",
"label": "Bildfeld mit sechs stehenden Personen",
"glyphs": {}
},
{
"id": "IBgAZHPHJBCPDUdqsD5LFncCqUU",
"type": "rechts außen Anbetender, nach links schauend",
"glyphs": {}
},
{
"id": "IBgAZNjT8hFm20xSskBNnWmVkFw",
"type": "in anblickend, von rechts nach links, Osiris, Anubis, Horus, Isis und Nephthys",
"glyphs": {}
},
{
"id": "IBgAZKzDYi4MbktwtxM7GbZexdI",
"type": "alle mit Was-Zepter",
"glyphs": {}
}
],
"wordCount": 0,
```
Also check the content of the types lc, para, etc.
See also:
- https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/54
- https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/79
|
1.0
|
Sentence display: untyped #...# comments are not displayed - https://tlabeta.bbaw.de/sentence/IBcAZuZHQFUieE9hpnUeIQK3zA8
http://localhost:8080/sentence/IBcAZuZHQFUieE9hpnUeIQK3zA8
Raw data:
http://localhost:9200/_search?q=id:IBcAZuZHQFUieE9hpnUeIQK3zA8
```
"tokens": [
{
"id": "IBgAZB2IqBQiiE0Wny7pzJl8amo",
"type": "desc",
"label": "Bildfeld mit sechs stehenden Personen",
"glyphs": {}
},
{
"id": "IBgAZHPHJBCPDUdqsD5LFncCqUU",
"type": "rechts außen Anbetender, nach links schauend",
"glyphs": {}
},
{
"id": "IBgAZNjT8hFm20xSskBNnWmVkFw",
"type": "in anblickend, von rechts nach links, Osiris, Anubis, Horus, Isis und Nephthys",
"glyphs": {}
},
{
"id": "IBgAZKzDYi4MbktwtxM7GbZexdI",
"type": "alle mit Was-Zepter",
"glyphs": {}
}
],
"wordCount": 0,
```
Also check the content of the types lc, para, etc.
See also:
- https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/54
- https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/79
|
non_test
|
satzdarstellung typenlose kommentare werden nicht angezeigt rohdaten tokens id type desc label bildfeld mit sechs stehenden personen glyphs id type rechts außen anbetender nach links schauend glyphs id type in anblickend von rechts nach links osiris anubis horus isis und nephthys glyphs id type alle mit was zepter glyphs wordcount auch inhalte von typen lc para usw prüfen siehe auch
| 0
|
413,283
| 12,064,394,733
|
IssuesEvent
|
2020-04-16 08:12:46
|
openmsupply/mobile
|
https://api.github.com/repos/openmsupply/mobile
|
closed
|
Read-only constraints not being consistently applied
|
Bug: development Docs: not needed Effort: small Feature Ivory Coast (phase 2) Module: dispensary Priority: high
|
## Is your feature request related to a problem? Please describe.
A few problems/inconsistencies with dispensing for non-local patients:
- the `supplyingStoreId` field is not being correctly set for patients created via the lookup API.
- the `storeId` field is not being correctly set for patients created via the lookup API.
- non-local patients/prescribers cannot be edited, but the edit form can still be accessed and the icon is not disabled as it is for policies.
## Describe the solution you'd like
- correctly set supplying store fields for patients/prescribers.
- disable edit icon in the dispensing window for non-local patients/prescribers
## Implementation
- update `PatientActions.patientUpdate`.
- update `PrescriberActions.updatePrescriber`
- update `PrescriptionInfo` component.
## Describe alternatives you've considered
N/A.
## Additional context
See epic #2446.
|
1.0
|
Read-only constraints not being consistently applied - ## Is your feature request related to a problem? Please describe.
A few problems/inconsistencies with dispensing for non-local patients:
- the `supplyingStoreId` field is not being correctly set for patients created via the lookup API.
- the `storeId` field is not being correctly set for patients created via the lookup API.
- non-local patients/prescribers cannot be edited, but the edit form can still be accessed and the icon is not disabled as it is for policies.
## Describe the solution you'd like
- correctly set supplying store fields for patients/prescribers.
- disable edit icon in the dispensing window for non-local patients/prescribers
## Implementation
- update `PatientActions.patientUpdate`.
- update `PrescriberActions.updatePrescriber`
- update `PrescriptionInfo` component.
## Describe alternatives you've considered
N/A.
## Additional context
See epic #2446.
|
non_test
|
read only constraints not being consistently applied is your feature request related to a problem please describe a few problems inconsistencies with dispensing for non local patients the supplyingstoreid field is not being correctly set for patients created via the lookup api the storeid field is not being correctly set for patients created via the lookup api non local patients prescribers cannot be edited but the edit form can still be accessed and the icon is not disabled as it is for policies describe the solution you d like correctly set supplying store fields for patients prescribers disable edit icon in the dispensing window for non local patients prescribers implementation update patientactions patientupdate update prescriberactions updateprescriber update prescriptioninfo component describe alternatives you ve considered n a additional context see epic
| 0
|
163,189
| 20,323,109,417
|
IssuesEvent
|
2022-02-18 01:44:59
|
kapseliboi/crowi
|
https://api.github.com/repos/kapseliboi/crowi
|
opened
|
CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- socket.io-client-2.3.0.tgz (Root Library)
- engine.io-client-3.4.0.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution (xmlhttprequest-ssl): 1.6.1</p>
<p>Direct dependency fix Resolution (socket.io-client): 2.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- socket.io-client-2.3.0.tgz (Root Library)
- engine.io-client-3.4.0.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution (xmlhttprequest-ssl): 1.6.1</p>
<p>Direct dependency fix Resolution (socket.io-client): 2.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file package json path to vulnerable library node modules xmlhttprequest ssl package json dependency hierarchy socket io client tgz root library engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in base branch master vulnerability details the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest ssl direct dependency fix resolution socket io client step up your open source security game with whitesource
| 0
|
376,534
| 11,148,185,198
|
IssuesEvent
|
2019-12-23 14:50:03
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.xvideos.com - see bug description
|
browser-focus-geckoview engine-gecko ml-needsdiagnosis-false priority-critical
|
<!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.xvideos.com/video10001481/lascars_public_french_bulge_from_the_guetto
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: links on this page hijacked
**Steps to Reproduce**:
Pages reload elsewhere
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.xvideos.com - see bug description - <!-- @browser: Firefox Mobile 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.xvideos.com/video10001481/lascars_public_french_bulge_from_the_guetto
**Browser / Version**: Firefox Mobile 71.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: links on this page hijacked
**Steps to Reproduce**:
Pages reload elsewhere
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description links on this page hijacked steps to reproduce pages reload elsewhere browser configuration none from with ❤️
| 0
|
115,977
| 9,818,138,606
|
IssuesEvent
|
2019-06-13 18:28:16
|
kcigeospatial/Fred_Co_Land-Management
|
https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management
|
closed
|
Planning - Plat - Verification
|
Bug Retest Failed
|
Acres is misspelled in the Final Plat Details tab.

GH/AM
|
1.0
|
Planning - Plat - Verification - Acres is misspelled in the Final Plat Details tab.

GH/AM
|
test
|
planning plat verification acres is misspelled in the final plat details tab gh am
| 1
|
67,090
| 3,266,159,283
|
IssuesEvent
|
2015-10-22 19:24:30
|
rurban/perl-compiler
|
https://api.github.com/repos/rurban/perl-compiler
|
opened
|
Bizarre copy of CODE in B::UNOP_AUX::save (Fails to generate .c file)
|
Priority-High
|
This output is from master but
```sh
$>perlcc -e 'my $z = 0; my $li2 = "c"; my $rh = { foo => [ "ok\n" ]}; print $rh->{"foo"}->[$li2+$z];'
/usr/local/cpanel/3rdparty/perl/522-debug/bin/perlcc: Unexpected compiler output
Bizarre copy of CODE in list assignment at /usr/local/cpanel/3rdparty/perl/522-debug/lib/perl5/cpanel_lib/i386-linux-debug-64int/B/C.pm line 1467.
CHECK failed--call queue aborted.
/usr/lib/gcc/i686-redhat-linux/4.4.7/../../../crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'
collect2: ld returned 1 exit status
```
|
1.0
|
Bizarre copy of CODE in B::UNOP_AUX::save (Fails to generate .c file) - This output is from master but
```sh
$>perlcc -e 'my $z = 0; my $li2 = "c"; my $rh = { foo => [ "ok\n" ]}; print $rh->{"foo"}->[$li2+$z];'
/usr/local/cpanel/3rdparty/perl/522-debug/bin/perlcc: Unexpected compiler output
Bizarre copy of CODE in list assignment at /usr/local/cpanel/3rdparty/perl/522-debug/lib/perl5/cpanel_lib/i386-linux-debug-64int/B/C.pm line 1467.
CHECK failed--call queue aborted.
/usr/lib/gcc/i686-redhat-linux/4.4.7/../../../crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'
collect2: ld returned 1 exit status
```
|
non_test
|
bizarre copy of code in b unop aux save fails to generate c file this output is from master but sh perlcc e my z my c my rh foo print rh foo usr local cpanel perl debug bin perlcc unexpected compiler output bizarre copy of code in list assignment at usr local cpanel perl debug lib cpanel lib linux debug b c pm line check failed call queue aborted usr lib gcc redhat linux o in function start text undefined reference to main ld returned exit status
| 0
|
130,238
| 10,602,488,805
|
IssuesEvent
|
2019-10-10 14:18:58
|
learn-co-curriculum/rails-edit-update-action-readme
|
https://api.github.com/repos/learn-co-curriculum/rails-edit-update-action-readme
|
closed
|
using form_for does not pass the test
|
Test
|
I had to reach out to AAQ to pass the code along due to the edit form with form helper form_for did not pass the test, had to use form_tag to pass the test
|
1.0
|
using form_for does not pass the test - I had to reach out to AAQ to pass the code along due to the edit form with form helper form_for did not pass the test, had to use form_tag to pass the test
|
test
|
using form for does not pass the test i had to reach out to aaq to pass the code along due to the edit form with form helper form for did not pass the test had to use form tag to pass the test
| 1
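Each record's trailing binary label tracks its class column: rows tagged `test` end in 1 and rows tagged `non_test` end in 0. A one-line sketch of that mapping, assuming the convention holds for every row (inferred from the dumped records, not from pipeline code):

```python
def binary_label(issue_class: str) -> int:
    """Map the dataset's class column to its binary label.

    'test' -> 1, anything else (e.g. 'non_test') -> 0.
    Assumed from the pattern visible in the dumped rows.
    """
    return 1 if issue_class == "test" else 0

print(binary_label("test"), binary_label("non_test"))
# → 1 0
```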
|
339,533
| 30,454,260,283
|
IssuesEvent
|
2023-07-16 17:31:41
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix nn.test_tensorflow_separable_conv2d
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix nn.test_tensorflow_separable_conv2d - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5568710693/jobs/10171542101"><img src=https://img.shields.io/badge/-failure-red></a>
|
test
|
fix nn test tensorflow separable jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 1
|
346,585
| 30,958,443,696
|
IssuesEvent
|
2023-08-08 00:27:28
|
acquire-project/acquire-driver-common
|
https://api.github.com/repos/acquire-project/acquire-driver-common
|
closed
|
add test for software trigger
|
testing
|
The software trigger functionality should be usable to produce single frames deterministically. Need to assert that it works.
There are two behaviors:
1. Enabling the software triggers blocks frames being produced until `acquire_execute_trigger` is called on the stream.
2. Disabling software triggering after it's been enabled allows free running frame generation.
See grabber triggering tests.
|
1.0
|
add test for software trigger - The software trigger functionality should be usable to produce single frames deterministically. Need to assert that it works.
There are two behaviors:
1. Enabling the software triggers blocks frames being produced until `acquire_execute_trigger` is called on the stream.
2. Disabling software triggering after it's been enabled allows free running frame generation.
See grabber triggering tests.
|
test
|
add test for software trigger the software trigger functionality should be usable to produce single frames deterministically need to assert that it works there are two behaviors enabling the software triggers blocks frames being produced until acquire execute trigger is called on the stream disabling software triggering after it s been enabled allows free running frame generation see grabber triggering tests
| 1
|
85,968
| 16,771,246,423
|
IssuesEvent
|
2021-06-14 15:01:28
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[Mono] Upgrade LLVM to 11.0
|
area-Codegen-LLVM-mono tracking
|
Tracking issue to upgrade LLVM from 9.x to 11.0 to consume newer features
- [x] Forward-port mono-specific patches to LLVM 11
- [x] Ensure dotnet/runtime builds against a local copy of patched LLVM 11
- [x] Enable running dotnet/runtime against a custom build of LLVM 11 on CI
- [x] Switch to using LLVM 11 in production
Not strictly needed
- [ ] Verify that Apple bitcode submission works with bitcode generated by patched LLVM 11
|
1.0
|
[Mono] Upgrade LLVM to 11.0 - Tracking issue to upgrade LLVM from 9.x to 11.0 to consume newer features
- [x] Forward-port mono-specific patches to LLVM 11
- [x] Ensure dotnet/runtime builds against a local copy of patched LLVM 11
- [x] Enable running dotnet/runtime against a custom build of LLVM 11 on CI
- [x] Switch to using LLVM 11 in production
Not strictly needed
- [ ] Verify that Apple bitcode submission works with bitcode generated by patched LLVM 11
|
non_test
|
upgrade llvm to tracking issue to upgrade llvm from x to to consume newer features forward port mono specific patches to llvm ensure dotnet runtime builds against a local copy of patched llvm enable running dotnet runtime against a custom build of llvm on ci switch to using llvm in production not strictly needed verify that apple bitcode submission works with bitcode generated by patched llvm
| 0
|
10,699
| 4,076,368,664
|
IssuesEvent
|
2016-05-29 21:16:05
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Unwanted characters in the alias
|
No Code Attached Yet
|
#### Steps to reproduce the issue
Install Joomla_3.5.1 and install Hungarian language pack
http://community.joomla.org/translations/joomla-3-translations.html#hu-hu
Set the front end language to Hungarian but the admin still English.
Create a new Category or Article like this: "közös" and the alias result will be "koezoes" instead of "kozos". Same for letter "ü" = "ue" instead of "u".
If you set the site admin to Hungarian, the characters will be okay.
So "ö" will be "o" and "ü" will be "u" as we wanted.
I discussed it with our Hungarian translator and we would like to correct this.
Thank you very much.
|
1.0
|
Unwanted characters in the alias - #### Steps to reproduce the issue
Install Joomla_3.5.1 and install Hungarian language pack
http://community.joomla.org/translations/joomla-3-translations.html#hu-hu
Set the front end language to Hungarian but the admin still English.
Create a new Category or Article like this: "közös" and the alias result will be "koezoes" instead of "kozos". Same for letter "ü" = "ue" instead of "u".
If you set the site admin to Hungarian, the characters will be okay.
So "ö" will be "o" and "ü" will be "u" as we wanted.
I discussed it with our Hungarian translator and we would like to correct this.
Thank you very much.
|
non_test
|
unwanted characters in the alias steps to reproduce the issue install joomla and install hungarian language pack set the front end language to hungarian but the admin still english create a new category or article like this közös and the alias result will be koezoes instead of kozos same for letter ü ue instead of u if you set the site admin to hungarian the characters will be okay so ö will be o and ü will be u as we wanted i discussed it with our hungarian translator and we would like to correct this thank you very much
| 0
|
15,203
| 3,451,900,876
|
IssuesEvent
|
2015-12-17 00:03:54
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Can't login
|
bug help wanted needs testing
|
actually I think it's a little more complicated then that.
the password encoding algorithm seems to be different depending on where you setup your account from.
If you click reset password and set a password that way you can't use it to login to eco. Once logged in on the website you can edit your password but you won't be able to login on the website, you will however be able to login on the eco servers.
http://ecoforum.strangeloopgames.com/topic/414/cant-login
|
1.0
|
Can't login - actually I think it's a little more complicated then that.
the password encoding algorithm seems to be different depending on where you setup your account from.
If you click reset password and set a password that way you can't use it to login to eco. Once logged in on the website you can edit your password but you won't be able to login on the website, you will however be able to login on the eco servers.
http://ecoforum.strangeloopgames.com/topic/414/cant-login
|
test
|
can t login actually i think it s a little more complicated then that the password encoding algorithm seems to be different depending on where you setup your account from if you click reset password and set a password that way you can t use it to login to eco once logged in on the website you can edit your password but you won t be able to login on the website you will however be able to login on the eco servers
| 1
|
14,781
| 10,212,351,573
|
IssuesEvent
|
2019-08-14 19:13:01
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
No active Transaction was found for ID
|
Bug Service Attention Service Bus customer-reported customer-response-expected
|
**Describe the bug**
When performing transactional processing (using `TransactionScope` and send-via feature) of messages concurrently, using multiple message senders, the `System.InvalidOperationException` exception is thrown.
***Exception or Stack Trace***
```
System.InvalidOperationException: No active Transaction was found for ID 'txn:9e879b829a0e446d97d6ac7625862833:9_G5'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:02:00. Reference:fbcf536a-d67a-4a0f-8fb7-7532b8360f78, TrackingId:44c4b18d00000014000024ca5c5832b5_G5_B7, SystemTracker:SOMSYSTEMNAME:Queue:SOMEQUEUENAME, Timestamp:2019-02-04T12:40:21
at Microsoft.Azure.ServiceBus.Core.MessageSender.OnSendAsync(IList`1 messageList)
at Microsoft.Azure.ServiceBus.RetryPolicy.RunOperation(Func`1 operation, TimeSpan operationTimeout)
at Microsoft.Azure.ServiceBus.RetryPolicy.RunOperation(Func`1 operation, TimeSpan operationTimeout)
at Microsoft.Azure.ServiceBus.Core.MessageSender.SendAsync(IList`1 messageList)
at NServiceBus.TransportReceiveToPhysicalMessageProcessingConnector.Invoke(ITransportReceiveContext context, Func`2 next)
at NServiceBus.MainPipelineExecutor.Invoke(MessageContext messageContext)
at NServiceBus.Transport.AzureServiceBus.MessagePump.ProcessMessage(Task`1 receiveTask)
```
**To Reproduce**
Repro code is located in https://github.com/SeanFeldman/TransactionIssueRepro, TransactionIssueRepro.sln
- `Sender` project is seeding the receiver's queue with messages
- `Receiver` project is logging message processing.
- For each processed message, ten outgoing messages are emitted
***Code Snippet***
See the previous section
**Expected behavior**
There should be no exceptions.
**Screenshots**
N/A

**Setup (please complete the following information):**
- OS: Windows 10 Pro 1903
- IDE : VS 2019 16.1.3
- Azure Service Bus 3.4.0 / netcoreapp2.2
**Additional context**
When concurrency is set to one and a single outgoing `MessageSender` is used, this exception is not thrown. Whenever concurrency is higher than one, the exception is thrown.
Originally reported by NServiceBus customers using Azure Service Bus transport [here](https://github.com/Particular/NServiceBus.Transport.AzureServiceBus/issues/48). My investigation led to the native repro **w/o** NServiceBus code in it.
All NServiceBus customers on [ASB transport](https://docs.particular.net/transports/azure-service-bus/) with transactions enabled are affected 🐼
|
2.0
|
No active Transaction was found for ID - **Describe the bug**
When performing transactional processing (using `TransactionScope` and send-via feature) of messages concurrently, using multiple message senders, the `System.InvalidOperationException` exception is thrown.
***Exception or Stack Trace***
```
System.InvalidOperationException: No active Transaction was found for ID 'txn:9e879b829a0e446d97d6ac7625862833:9_G5'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:02:00. Reference:fbcf536a-d67a-4a0f-8fb7-7532b8360f78, TrackingId:44c4b18d00000014000024ca5c5832b5_G5_B7, SystemTracker:SOMSYSTEMNAME:Queue:SOMEQUEUENAME, Timestamp:2019-02-04T12:40:21
at Microsoft.Azure.ServiceBus.Core.MessageSender.OnSendAsync(IList`1 messageList)
at Microsoft.Azure.ServiceBus.RetryPolicy.RunOperation(Func`1 operation, TimeSpan operationTimeout)
at Microsoft.Azure.ServiceBus.RetryPolicy.RunOperation(Func`1 operation, TimeSpan operationTimeout)
at Microsoft.Azure.ServiceBus.Core.MessageSender.SendAsync(IList`1 messageList)
at NServiceBus.TransportReceiveToPhysicalMessageProcessingConnector.Invoke(ITransportReceiveContext context, Func`2 next)
at NServiceBus.MainPipelineExecutor.Invoke(MessageContext messageContext)
at NServiceBus.Transport.AzureServiceBus.MessagePump.ProcessMessage(Task`1 receiveTask)
```
**To Reproduce**
Repro code is located in https://github.com/SeanFeldman/TransactionIssueRepro, TransactionIssueRepro.sln
- `Sender` project is seeding the receiver's queue with messages
- `Receiver` project is logging message processing.
- For each processed message, ten outgoing messages are emitted
***Code Snippet***
See the previous section
**Expected behavior**
There should be no exceptions.
**Screenshots**
N/A

**Setup (please complete the following information):**
- OS: Windows 10 Pro 1903
- IDE : VS 2019 16.1.3
- Azure Service Bus 3.4.0 / netcoreapp2.2
**Additional context**
When concurrency is set to one and a single outgoing `MessageSender` is used, this exception is not thrown. Whenever concurrency is higher than one, the exception is thrown.
Originally reported by NServiceBus customers using Azure Service Bus transport [here](https://github.com/Particular/NServiceBus.Transport.AzureServiceBus/issues/48). My investigation led to the native repro **w/o** NServiceBus code in it.
All NServiceBus customers on [ASB transport](https://docs.particular.net/transports/azure-service-bus/) with transactions enabled are affected 🐼
|
non_test
|
no active transaction was found for id describe the bug when performing transactional processing using transactionscope and send via feature of messages concurrently using multiple message senders the system invalidoperationexception exception is thrown exception or stack trace system invalidoperationexception no active transaction was found for id txn the transaction may have timed out or attempted to span multiple top level entities such as queue or topic the server transaction timeout is reference trackingid systemtracker somsystemname queue somequeuename timestamp at microsoft azure servicebus core messagesender onsendasync ilist messagelist at microsoft azure servicebus retrypolicy runoperation func operation timespan operationtimeout at microsoft azure servicebus retrypolicy runoperation func operation timespan operationtimeout at microsoft azure servicebus core messagesender sendasync ilist messagelist at nservicebus transportreceivetophysicalmessageprocessingconnector invoke itransportreceivecontext context func next at nservicebus mainpipelineexecutor invoke messagecontext messagecontext at nservicebus transport azureservicebus messagepump processmessage task receivetask to reproduce repro code is located in transactionissuerepro sln sender project is seeding the receiver s queue with messages receiver project is logging message processing for each processed message ten outgoing messages are emitted code snippet see the previous section expected behavior there should be no exceptions screenshots n a setup please complete the following information os windows pro ide vs azure service bus additional context when concurrency is set to one and a single outgoing messagesender is used this exception is not thrown whenever concurrency is higher than one the exception is thrown originally reported by nservicebus customers using azure service bus transport my investigation led to the native repro w o nservicebus code in it all nservicebus customers on with transactions enabled are affected 🐼
| 0
|
131,230
| 10,685,797,968
|
IssuesEvent
|
2019-10-22 13:21:44
|
dbrownukk/EFD_v2
|
https://api.github.com/repos/dbrownukk/EFD_v2
|
closed
|
Age & Year of birth are not cross-validated in Spreadsheet Validation
|
For Testing bug
|
Instance: EFD_HM
Study: test Config Questions
HH Name: Test Issue 300
HH Number: 11
The ages & years of birth of the HH members of this HH are inconsistent.
They are loaded & Parsed happily enough - no problem with that.
But Validate should not allow the data to pass.
We have options:
1. require the user to correct manually if Age & YoB are inconsistent
1. give one value (age or YoB) precedence and correct the other automatically as part of Parsing
(Could Have) with a message to the user
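A minimal sketch of how option 1's consistency check could work (the field names `age` and `year_of_birth`, and the one-year `tolerance` for birthdays that have not yet passed, are illustrative assumptions, not EFD's actual schema):

```python
def check_member_ages(members, survey_year, tolerance=1):
    """Return HH members whose reported age disagrees with their year of birth.

    A tolerance of one year allows for a birthday that has not yet
    occurred within the survey year.
    """
    issues = []
    for m in members:
        implied_age = survey_year - m["year_of_birth"]
        if abs(implied_age - m["age"]) > tolerance:
            # (name, reported age, age implied by year of birth)
            issues.append((m["name"], m["age"], implied_age))
    return issues
```

Validate could then refuse to pass any household for which this list is non-empty, prompting the user to correct the inconsistent member rows.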
|
1.0
|
Age & Year of birth are not cross-validated in Spreadsheet Validation - Instance: EFD_HM
Study: test Config Questions
HH Name: Test Issue 300
HH Number: 11
The ages & years of birth of the HH members of this HH are inconsistent.
They are loaded & Parsed happily enough - no problem with that.
But Validate should not allow the data to pass.
We have options:
1. require the user to correct manually if Age & YoB are inconsistent
1. give one value (age or YoB) precedence and correct the other automatically as part of Parsing
(Could Have) with a message to the user
|
test
|
age year of birth are not cross validated in spreadsheet validation instance efd hm study test config questions hh name test issue hh number the ages years of birth of the hh members of this hh are inconsistent they are loaded parsed happily enough no problem with that but validate should not allow the data to pass we have options require the user to correct manually if age yob are inconsistent give one value age or yob precedence and correct the other automatically as part of parsing could have with a message to the user
| 1
|
115,951
| 17,348,746,959
|
IssuesEvent
|
2021-07-29 05:26:25
|
Thanraj/sqlite-v3.22.0_
|
https://api.github.com/repos/Thanraj/sqlite-v3.22.0_
|
opened
|
CVE-2020-13630 (High) detected in sqliteversion-3.22.0
|
security vulnerability
|
## CVE-2020-13630 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/sqlite-v3.22.0_/commit/f3f12a23d72b61bdd28731bfb61b341fa6f8e264">f3f12a23d72b61bdd28731bfb61b341fa6f8e264</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>sqlite-v3.22.0_/ext/fts3/fts3.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ext/fts3/fts3.c in SQLite before 3.32.0 has a use-after-free in fts3EvalNextRow, related to the snippet feature.
<p>Publish Date: 2020-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13630>CVE-2020-13630</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
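As a sanity check on the 7.0 shown above, the base score can be recomputed from the listed metrics; the numeric weights below are the published CVSS v3.0 values (this is an illustrative sketch, not part of the WhiteSource report):

```python
import math

# CVSS v3.0 weights for the metric values listed above.
AV_LOCAL = 0.55        # Attack Vector: Local
AC_HIGH = 0.44         # Attack Complexity: High
PR_LOW = 0.62          # Privileges Required: Low (Scope: Unchanged)
UI_NONE = 0.85         # User Interaction: None
IMPACT_HIGH = 0.56     # High impact weight, used for C, I and A

def base_score() -> float:
    iss = 1 - (1 - IMPACT_HIGH) ** 3               # all three impacts High
    impact = 6.42 * iss                            # Scope: Unchanged
    exploitability = 8.22 * AV_LOCAL * AC_HIGH * PR_LOW * UI_NONE
    # "Roundup" per the spec: smallest one-decimal value >= the input.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10
```

With these inputs the impact sub-score is about 5.87 and exploitability about 1.05, which rounds up to the reported 7.0.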
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630</a></p>
<p>Release Date: 2020-05-27</p>
<p>Fix Resolution: 3.32.0</p>
</p>
</details>
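The suggested fix can be verified at runtime by comparing the linked SQLite version against the fix release; a small sketch (the `is_patched` helper is illustrative, not part of WhiteSource's tooling):

```python
def is_patched(version: str, fixed=(3, 32, 0)) -> bool:
    """True if a dotted SQLite version string is at or above the fix release."""
    return tuple(int(part) for part in version.split(".")) >= fixed

# Python's bundled SQLite can be checked the same way:
#   import sqlite3
#   is_patched(sqlite3.sqlite_version)
```

Comparing as integer tuples rather than strings matters here: `"3.32.0" < "3.4.0"` lexicographically, but `(3, 32, 0) > (3, 4, 0)`.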
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-13630 (High) detected in sqliteversion-3.22.0 - ## CVE-2020-13630 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/sqlite-v3.22.0_/commit/f3f12a23d72b61bdd28731bfb61b341fa6f8e264">f3f12a23d72b61bdd28731bfb61b341fa6f8e264</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>sqlite-v3.22.0_/ext/fts3/fts3.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ext/fts3/fts3.c in SQLite before 3.32.0 has a use-after-free in fts3EvalNextRow, related to the snippet feature.
<p>Publish Date: 2020-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13630>CVE-2020-13630</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13630</a></p>
<p>Release Date: 2020-05-27</p>
<p>Fix Resolution: 3.32.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in sqliteversion cve high severity vulnerability vulnerable library sqliteversion official git mirror of the sqlite source tree library home page a href found in head commit a href found in base branch master vulnerable source files sqlite ext c vulnerability details ext c in sqlite before has a use after free in related to the snippet feature publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
417,255
| 28,110,250,161
|
IssuesEvent
|
2023-03-31 06:29:26
|
axmszr/ped
|
https://api.github.com/repos/axmszr/ped
|
opened
|
Slightly repetitive User Guide (Section 6.2.5)
|
severity.Low type.DocumentationBug
|
Sections 6.2.4 and 6.2.5 both include the placeholder and prefix for various function inputs. It feels like unnecessarily repeated information that definitely could have been condensed, perhaps into one table or multiple smaller tables.
Section 6.2.5, as a screenshot of a table instead of written text, is also notably a bit messier and difficult to read. It's a lot to scroll through too and can be a bit overwhelming.
<!--session: 1680242647550-b023bf1e-340f-4a31-8b4d-5e392a9f6496-->
<!--Version: Web v3.4.7-->
|
1.0
|
Slightly repetitive User Guide (Section 6.2.5) - Sections 6.2.4 and 6.2.5 both include the placeholder and prefix for various function inputs. It feels like unnecessarily repeated information that definitely could have been condensed, perhaps into one table or multiple smaller tables.
Section 6.2.5, as a screenshot of a table instead of written text, is also notably a bit messier and difficult to read. It's a lot to scroll through too and can be a bit overwhelming.
<!--session: 1680242647550-b023bf1e-340f-4a31-8b4d-5e392a9f6496-->
<!--Version: Web v3.4.7-->
|
non_test
|
slightly repetitive user guide section sections and both include the placeholder and prefix for various function inputs it feels like unnecessarily repeated information that definitely could have been condensed perhaps into one table or multiple smaller tables section as a screenshot of a table instead of written text is also notably a bit messier and difficult to read it s a lot to scroll through too and can be a bit overwhelming
| 0
|
247,746
| 20,988,072,692
|
IssuesEvent
|
2022-03-29 06:34:24
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: scrub/all-checks/tpcc/w=100 failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker
|
roachtest.scrub/all-checks/tpcc/w=100 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/scrub/all-checks/tpcc/w=100) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/scrub/all-checks/tpcc/w=100/run_1
cluster.go:1868,tpcc.go:141,tpcc.go:146,tpcc.go:176,tpcc.go:223,scrub.go:61,test_runner.go:875: one or more parallel execution failure
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).ParallelE
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2042
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Parallel
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1923
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Start
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cockroach.go:167
| github.com/cockroachdb/cockroach/pkg/roachprod.Start
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:660
| main.(*clusterImpl).StartE
| main/pkg/cmd/roachtest/cluster.go:1826
| main.(*clusterImpl).Start
| main/pkg/cmd/roachtest/cluster.go:1867
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:141
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC.func2
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:146
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:176
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:223
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.makeScrubTPCCTest.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/scrub.go:61
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:875
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) one or more parallel execution failure
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*scrub/all-checks/tpcc/w=100.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: scrub/all-checks/tpcc/w=100 failed - roachtest.scrub/all-checks/tpcc/w=100 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/scrub/all-checks/tpcc/w=100) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /artifacts/scrub/all-checks/tpcc/w=100/run_1
cluster.go:1868,tpcc.go:141,tpcc.go:146,tpcc.go:176,tpcc.go:223,scrub.go:61,test_runner.go:875: one or more parallel execution failure
(1) attached stack trace
-- stack trace:
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).ParallelE
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2042
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Parallel
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:1923
| github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Start
| github.com/cockroachdb/cockroach/pkg/roachprod/install/cockroach.go:167
| github.com/cockroachdb/cockroach/pkg/roachprod.Start
| github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:660
| main.(*clusterImpl).StartE
| main/pkg/cmd/roachtest/cluster.go:1826
| main.(*clusterImpl).Start
| main/pkg/cmd/roachtest/cluster.go:1867
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:141
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC.func2
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:146
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.setupTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:176
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTPCC
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tpcc.go:223
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.makeScrubTPCCTest.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/scrub.go:61
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:875
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) one or more parallel execution failure
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*scrub/all-checks/tpcc/w=100.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest scrub all checks tpcc w failed roachtest scrub all checks tpcc w with on master the test failed on branch master cloud gce test artifacts and logs in artifacts scrub all checks tpcc w run cluster go tpcc go tpcc go tpcc go tpcc go scrub go test runner go one or more parallel execution failure attached stack trace stack trace github com cockroachdb cockroach pkg roachprod install syncedcluster parallele github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster parallel github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster start github com cockroachdb cockroach pkg roachprod install cockroach go github com cockroachdb cockroach pkg roachprod start github com cockroachdb cockroach pkg roachprod roachprod go main clusterimpl starte main pkg cmd roachtest cluster go main clusterimpl start main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests setuptpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests setuptpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests setuptpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests runtpcc github com cockroachdb cockroach pkg cmd roachtest tests tpcc go github com cockroachdb cockroach pkg cmd roachtest tests makescrubtpcctest github com cockroachdb cockroach pkg cmd roachtest tests scrub go main testrunner runtest main pkg cmd roachtest test runner go runtime goexit goroot src runtime asm s wraps one or more parallel execution failure error types withstack withstack errutil leaferror help see see cc cockroachdb sql queries
| 1
|
190,711
| 14,570,239,616
|
IssuesEvent
|
2020-12-17 14:08:49
|
zeebe-io/zeebe
|
https://api.github.com/repos/zeebe-io/zeebe
|
closed
|
SingleBrokerDataDeletionTest.shouldNotCompactNotExportedEvents flaky
|
Priority: Mid Release: 0.24.0 Status: Planned Type: Unstable Test
|
**Summary**
- How often does the test fail? - top 3 flaky tests for CW 20, CW 21
- Does it block your work? - not really
- Do we suspect that it is a real failure? - unknown
**Failures**
<details><summary>Example assertion failure</summary>
<pre>
Error Message
Segment not open
Stacktrace
java.util.concurrent.ExecutionException: Segment not open
at io.zeebe.util.sched.future.CompletableActorFuture.get(CompletableActorFuture.java:141)
at io.zeebe.util.sched.future.CompletableActorFuture.get(CompletableActorFuture.java:108)
at io.zeebe.util.sched.FutureUtil.join(FutureUtil.java:21)
at io.zeebe.util.sched.future.CompletableActorFuture.join(CompletableActorFuture.java:196)
at io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest.shouldNotCompactNotExportedEvents(SingleBrokerDataDeletionTest.java:79)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: java.lang.IllegalStateException: Segment not open
at com.google.common.base.Preconditions.checkState(Preconditions.java:508)
at io.atomix.storage.journal.JournalSegment.checkOpen(JournalSegment.java:234)
at io.atomix.storage.journal.JournalSegment.createReader(JournalSegment.java:211)
at io.atomix.storage.journal.SegmentedJournalReader.initialize(SegmentedJournalReader.java:41)
at io.atomix.storage.journal.SegmentedJournalReader.<init>(SegmentedJournalReader.java:34)
at io.atomix.storage.journal.SegmentedJournal.openReader(SegmentedJournal.java:227)
at io.atomix.raft.storage.log.RaftLog.openReader(RaftLog.java:63)
at io.atomix.raft.partition.impl.RaftPartitionServer.openReader(RaftPartitionServer.java:198)
at io.zeebe.logstreams.storage.atomix.AtomixRaftServer.create(AtomixRaftServer.java:30)
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:18)
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:22)
at io.zeebe.logstreams.storage.atomix.AtomixLogStorage.newReader(AtomixLogStorage.java:40)
at io.zeebe.logstreams.impl.log.LogStreamReaderImpl.<init>(LogStreamReaderImpl.java:41)
at io.zeebe.logstreams.impl.log.LogStreamImpl.lambda$newLogStreamReader$2(LogStreamImpl.java:117)
at io.zeebe.util.sched.ActorJob.invoke(ActorJob.java:62)
at io.zeebe.util.sched.ActorJob.execute(ActorJob.java:39)
at io.zeebe.util.sched.ActorTask.execute(ActorTask.java:118)
at io.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:107)
at io.zeebe.util.sched.ActorThread.doWork(ActorThread.java:91)
at io.zeebe.util.sched.ActorThread.run(ActorThread.java:204)
</pre>
</details>
**Hypotheses**
**Logs**
<details><summary>Logs</summary>
<pre>
00:16:18.453 [] [main] INFO io.zeebe.test - Test started: shouldNotCompactNotExportedEvents(io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest)
00:16:18.458 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27822 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=23}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27823 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=24}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27824 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=25}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27825 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=26}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27826 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=27}
00:16:18.459 [] [main] DEBUG io.zeebe.broker.system - Initializing system with base path /tmp/junit5766860902055733790/0
00:16:18.461 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27827 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=28}
00:16:18.461 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27828 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=29}
00:16:18.461 [] [Thread-559] INFO io.zeebe.broker.system - Version: 0.24.0-SNAPSHOT
00:16:18.466 [] [Thread-559] INFO io.zeebe.broker.system - Starting broker 0 with configuration {
"network" : {
"host" : "0.0.0.0",
"portOffset" : 0,
"maxMessageSize" : "0MB",
"advertisedHost" : "0.0.0.0",
"commandApi" : {
"host" : "0.0.0.0",
"port" : 27824,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27824,
"address" : "0.0.0.0:27824",
"advertisedAddress" : "0.0.0.0:27824"
},
"internalApi" : {
"host" : "0.0.0.0",
"port" : 27825,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27825,
"address" : "0.0.0.0:27825",
"advertisedAddress" : "0.0.0.0:27825"
},
"monitoringApi" : {
"host" : "0.0.0.0",
"port" : 27826,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27826,
"address" : "0.0.0.0:27826",
"advertisedAddress" : "0.0.0.0:27826"
},
"maxMessageSizeInBytes" : 8192
},
"cluster" : {
"initialContactPoints" : [ ],
"partitionIds" : [ 1 ],
"nodeId" : 0,
"partitionsCount" : 1,
"replicationFactor" : 1,
"clusterSize" : 1,
"clusterName" : "zeebe-cluster-1",
"gossipFailureTimeout" : 2000,
"gossipInterval" : 150,
"gossipProbeInterval" : 250
},
"threads" : {
"cpuThreadCount" : 2,
"ioThreadCount" : 2
},
"data" : {
"directories" : [ "/tmp/junit5766860902055733790/0/data" ],
"logSegmentSize" : "0MB",
"snapshotPeriod" : "PT1M",
"logIndexDensity" : 5,
"logSegmentSizeInBytes" : 8192
},
"exporters" : {
"snapshot-test-exporter" : {
"jarPath" : null,
"className" : "io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest$ControllableExporter",
"args" : null,
"external" : false
}
},
"gateway" : {
"network" : {
"host" : "0.0.0.0",
"port" : 27822,
"minKeepAliveInterval" : "PT30S"
},
"cluster" : {
"contactPoint" : "0.0.0.0:27825",
"requestTimeout" : "PT15S",
"clusterName" : "zeebe-cluster",
"memberId" : "gateway",
"host" : "0.0.0.0",
"port" : 27823
},
"threads" : {
"managementThreads" : 1
},
"monitoring" : {
"enabled" : false,
"host" : "0.0.0.0",
"port" : 9600
},
"security" : {
"enabled" : false,
"certificateChainPath" : null,
"privateKeyPath" : null
},
"enable" : false
},
"backpressure" : {
"enabled" : true,
"algorithm" : "VEGAS"
},
"stepTimeout" : "PT5M",
"executionMetricsExporterEnabled" : false
}
00:16:18.467 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [1/10]: actor scheduler
00:16:18.469 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [1/10]: actor scheduler started in 2 ms
00:16:18.469 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [2/10]: membership and replication protocol
00:16:18.478 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [2/10]: membership and replication protocol started in 9 ms
00:16:18.478 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [3/10]: command api transport
00:16:18.530 [] [Thread-559] DEBUG io.zeebe.broker.system - Bound command API to 0.0.0.0:27824
00:16:18.531 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [3/10]: command api transport started in 53 ms
00:16:18.531 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [4/10]: command api handler
00:16:18.533 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [4/10]: command api handler started in 1 ms
00:16:18.534 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [5/10]: subscription api
00:16:18.534 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [5/10]: subscription api started in 0 ms
00:16:18.534 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/10]: cluster services
00:16:18.556 [] [atomix-0] WARN io.atomix.primitive.partition.impl.DefaultPartitionGroupMembershipService - Failed to locate partition group(s) via bootstrap nodes. Please ensure partition groups are configured either locally or remotely and the node is able to reach partition group members.
00:16:18.559 [] [atomix-0] INFO io.atomix.raft.partition.impl.RaftPartitionServer - Starting server for partition PartitionId{id=1, group=raft-partition}
00:16:18.633 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to FOLLOWER
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.roles.FollowerRole - RaftServer{raft-partition-partition-1}{role=FOLLOWER} - Single member cluster. Transitioning directly to candidate.
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to CANDIDATE
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] WARN io.atomix.utils.event.ListenerRegistry - Listener io.atomix.raft.roles.FollowerRole$$Lambda$387/0x0000000840424840@70419b60 not registered
00:16:18.636 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.roles.CandidateRole - RaftServer{raft-partition-partition-1}{role=CANDIDATE} - Single member cluster. Transitioning directly to leader.
00:16:18.645 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to LEADER
00:16:18.646 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Found leader 0
00:16:18.653 [] [raft-partition-group-raft-partition-0] INFO io.atomix.raft.partition.RaftPartitionGroup - Started
00:16:18.653 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [6/10]: cluster services started in 118 ms
00:16:18.654 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [7/10]: topology manager
00:16:18.654 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [7/10]: topology manager started in 0 ms
00:16:18.654 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server
00:16:18.657 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server started in 3 ms
00:16:18.657 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler
00:16:18.657 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler started in 0 ms
00:16:18.657 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [10/10]: zeebe partitions
00:16:18.658 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 partitions [1/1]: partition 1
00:16:18.658 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Removing follower partition service for partition PartitionId{id=1, group=raft-partition}
00:16:18.659 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Partition role transitioning from null to LEADER
00:16:18.659 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Installing leader partition service for partition PartitionId{id=1, group=raft-partition}
00:16:18.660 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams.snapshot - Available snapshots: []
00:16:19.582 [] [raft-partition-group-raft-partition-0] INFO io.atomix.raft.partition.RaftPartitionGroup - Started
00:16:19.635 [] [main] INFO io.zeebe.gateway - Version: 0.24.0-SNAPSHOT
00:16:19.644 [] [main] INFO io.zeebe.gateway - Starting gateway with configuration {
"network" : {
"host" : "0.0.0.0",
"port" : 27827,
"minKeepAliveInterval" : "PT30S"
},
"cluster" : {
"contactPoint" : "0.0.0.0:27825",
"requestTimeout" : "PT45S",
"clusterName" : "zeebe-cluster-1",
"memberId" : "gateway",
"host" : "0.0.0.0",
"port" : 27828
},
"threads" : {
"managementThreads" : 1
},
"monitoring" : {
"enabled" : false,
"host" : "0.0.0.0",
"port" : 9600
},
"security" : {
"enabled" : false,
"certificateChainPath" : null,
"privateKeyPath" : null
}
}
00:16:19.652 [GatewayTopologyManager] [gateway-scheduler-zb-actors-0] DEBUG io.zeebe.gateway - Received new broker BrokerInfo{nodeId=0, partitionsCount=1, clusterSize=1, replicationFactor=1, partitionRoles={}, partitionLeaderTerms={}, version=0.24.0-SNAPSHOT}.
00:16:19.960 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams.snapshot - Opened database from '/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/runtime'.
00:16:19.962 [Broker-0-LogStream-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
00:16:19.962 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
00:16:19.963 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@791945ba)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@526fb264, configuration: Configuration(false)]
00:16:19.964 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
00:16:19.965 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6afeb026)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@59f91c56, configuration: Configuration(false)]
00:16:19.965 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@762a799f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@15784dde, configuration: Configuration(false)]
00:16:19.969 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Recovering exporter from snapshot
00:16:19.970 [Broker-0-HealthCheckService] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - All partitions are installed. Broker is ready!
00:16:19.971 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 partitions [1/1]: partition 1 started in 1313 ms
00:16:19.971 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Recovered exporter 'Broker-0-Exporter-1' from snapshot at lastExportedPosition -1
00:16:19.971 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 partitions succeeded. Started 1 steps in 1313 ms.
00:16:19.971 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Configure exporter with id 'snapshot-test-exporter'
00:16:19.971 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [10/10]: zeebe partitions started in 1313 ms
00:16:19.971 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 succeeded. Started 10 steps in 1504 ms.
00:16:19.972 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Set event filter for exporters: ExporterEventFilter{acceptRecordTypes={COMMAND_REJECTION=true, SBE_UNKNOWN=true, COMMAND=true, EVENT=true, NULL_VAL=true}, acceptValueTypes={INCIDENT=true, JOB_BATCH=true, VARIABLE=true, WORKFLOW_INSTANCE_SUBSCRIPTION=true, JOB=true, NULL_VAL=true, WORKFLOW_INSTANCE_CREATION=true, DEPLOYMENT=true, MESSAGE_START_EVENT_SUBSCRIPTION=true, WORKFLOW_INSTANCE_RESULT=true, WORKFLOW_INSTANCE=true, MESSAGE=true, SBE_UNKNOWN=true, MESSAGE_SUBSCRIPTION=true, VARIABLE_DOCUMENT=true, ERROR=true, TIMER=true}}
00:16:19.972 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Open exporter with id 'snapshot-test-exporter'
00:16:20.058 [GatewayTopologyManager] [gateway-scheduler-zb-actors-0] DEBUG io.zeebe.gateway - Received metadata change from Broker 0, partitions {1=LEADER} and terms {1=1}.
00:16:20.146 [] [main] INFO io.zeebe.broker.system - Full replication factor
00:16:20.236 [] [main] INFO io.zeebe.broker.system - All brokers in topology TopologyImpl{brokers=[BrokerInfoImpl{nodeId=0, host='0.0.0.0', port=27824, version=0.24.0-SNAPSHOT, partitions=[PartitionInfoImpl{partitionId=1, role=LEADER}]}], clusterSize=1, partitionsCount=1, replicationFactor=1, gatewayVersion=0.24.0-SNAPSHOT}
00:16:22.773 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Detected unhealthy components. The current health status of components: {Raft-1=HEALTHY, Broker-0-StreamProcessor-1=UNHEALTHY, logStream=HEALTHY}
00:16:22.774 [Broker-0-HealthCheckService] [Broker-0-zb-actors-1] ERROR io.zeebe.broker.system - Partition-1 failed, marking it as unhealthy
00:16:22.777 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Taking temporary snapshot into /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/pushed-pending/84-1-1588292182776.
00:16:23.004 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Created snapshot for Broker-0-StreamProcessor-1
00:16:23.006 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO io.zeebe.logstreams.snapshot - Finished taking snapshot, need to wait until last written event position 8589939224 is committed, current commit position is 8589939224. After that snapshot can be marked as valid.
00:16:23.006 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO io.zeebe.logstreams.snapshot - Current commit position 8589939224 is greater then 8589939224, snapshot is valid.
00:16:23.039 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.AtomixSnapshotStorage - Purging snapshots older than DbSnapshot{directory=/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776, metadata=DbSnapshotMetadata{index=84, term=1, timestamp=2020-05-01 12:16:22,776}}
00:16:23.041 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.AtomixSnapshotStorage - Search for orphaned snapshots below oldest valid snapshot with index 84 in /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/pushed-pending
00:16:23.044 [Broker-0-DeletionService-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.logstreams.delete - Compacting Atomix log up to index 84
00:16:23.045 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.DbSnapshotStore - Committed new snapshot DbSnapshot{directory=/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776, metadata=DbSnapshotMetadata{index=84, term=1, timestamp=2020-05-01 12:16:22,776}}
00:16:23.045 [] [raft-server-0-raft-partition-partition-1] DEBUG io.zeebe.broker.clustering.atomix.ZeebeRaftStateMachine - ZeebeRaftStateMachine1{partition=raft-partition-partition-1} - Compacting log up from 45 up to {}
00:16:23.046 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Start replicating latest snapshot /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776
00:16:23.129 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] ERROR io.zeebe.util.actor - Uncaught exception in 'Broker-0-LogStream-1' in phase 'STARTED'. Continuing with next job.
java.lang.IllegalStateException: Segment not open
at com.google.common.base.Preconditions.checkState(Preconditions.java:508) ~[guava-29.0-jre.jar:?]
at io.atomix.storage.journal.JournalSegment.checkOpen(JournalSegment.java:234) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.JournalSegment.createReader(JournalSegment.java:211) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournalReader.initialize(SegmentedJournalReader.java:41) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournalReader.<init>(SegmentedJournalReader.java:34) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournal.openReader(SegmentedJournal.java:227) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.raft.storage.log.RaftLog.openReader(RaftLog.java:63) ~[atomix-cluster-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.raft.partition.impl.RaftPartitionServer.openReader(RaftPartitionServer.java:198) ~[atomix-cluster-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixRaftServer.create(AtomixRaftServer.java:30) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:18) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:22) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixLogStorage.newReader(AtomixLogStorage.java:40) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.impl.log.LogStreamReaderImpl.<init>(LogStreamReaderImpl.java:41) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.impl.log.LogStreamImpl.lambda$newLogStreamReader$2(LogStreamImpl.java:117) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorJob.invoke(ActorJob.java:62) ~[zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorJob.execute(ActorJob.java:39) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorTask.execute(ActorTask.java:118) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:107) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.doWork(ActorThread.java:91) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.run(ActorThread.java:204) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
00:16:23.131 [] [main] INFO io.zeebe.test.records - Test failed, following records where exported:
00:16:23.132 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000092.sst
00:16:23.133 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000093.sst
00:16:23.134 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000094.sst
00:16:23.134 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000095.sst
00:16:23.135 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000096.sst
00:16:23.136 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000097.sst
00:16:23.136 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/CURRENT
00:16:23.137 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/MANIFEST-000004
00:16:23.137 [] [ForkJoinPool.commonPool-worker-5] INFO io.atomix.raft.partition.RaftPartitionGroup - Stopped
00:16:23.138 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/OPTIONS-000090
00:16:23.139 [] [main] DEBUG io.zeebe.gateway - Closing gateway broker client ...
00:16:23.144 [] [main] DEBUG io.zeebe.gateway - topology manager closed
00:16:23.145 [] [main] DEBUG io.zeebe.gateway - Gateway broker client closed.
00:16:23.146 [] [main] DEBUG io.zeebe.broker.system - Closing ClusteringRule...
00:16:23.146 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [1/10]: zeebe partitions
00:16:23.146 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 partitions [1/1]: partition 1
00:16:23.146 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Closing Broker-0-Exporter-1
00:16:23.147 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Closed exporter director 'Broker-0-Exporter-1'.
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-Exporter-1 successfully
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-SnapshotDirector-1
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-SnapshotDirector-1 successfully
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-StreamProcessor-1
00:16:23.148 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-StreamProcessor-1 successfully
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-DeletionService-1
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-DeletionService-1 successfully
00:16:23.252 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.logstreams.snapshot - Closed database from '/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/runtime'.
00:16:23.330 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - Close appender for log stream raft-partition-partition-1
00:16:23.331 [raft-partition-partition-1-write-buffer] [Broker-0-zb-actors-0] DEBUG io.zeebe.dispatcher - Dispatcher closed
00:16:23.331 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - On closing logstream raft-partition-partition-1 close 3 readers
00:16:23.331 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - Close log storage with name raft-partition-partition-1
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 partitions [1/1]: partition 1 closed in 186 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 partitions succeeded. Closed 1 steps in 186 ms.
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [1/10]: zeebe partitions closed in 186 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [2/10]: leader management request handler
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [2/10]: leader management request handler closed in 0 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [3/10]: metric's server
00:16:23.337 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [3/10]: metric's server closed in 5 ms
00:16:23.337 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [4/10]: topology manager
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [4/10]: topology manager closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [5/10]: cluster services
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [5/10]: cluster services closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [6/10]: subscription api
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [6/10]: subscription api closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [7/10]: command api handler
00:16:23.339 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [7/10]: command api handler closed in 1 ms
00:16:23.339 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [8/10]: command api transport
00:16:25.558 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [8/10]: command api transport closed in 2218 ms
00:16:25.558 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [9/10]: membership and replication protocol
00:16:25.559 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to INACTIVE
00:16:25.634 [] [ForkJoinPool.commonPool-worker-3] INFO io.atomix.raft.partition.RaftPartitionGroup - Stopped
00:16:27.848 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [9/10]: membership and replication protocol closed in 2290 ms
00:16:27.848 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [10/10]: actor scheduler
00:16:27.849 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [10/10]: actor scheduler closed in 1 ms
00:16:27.849 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 succeeded. Closed 10 steps in 4703 ms.
00:16:27.849 [] [main] INFO io.zeebe.broker.system - Broker shut down.
</pre>
</details>
SingleBrokerDataDeletionTest.shouldNotCompactNotExportedEvents is flaky

**Summary**
- How often does the test fail? - among the top 3 flaky tests in CW 20 and CW 21
- Does it block your work? - not really
- Do we suspect that it is a real failure? - unknown
**Failures**
<details><summary>Example assertion failure</summary>
<pre>
Error Message
Segment not open
Stacktrace
java.util.concurrent.ExecutionException: Segment not open
at io.zeebe.util.sched.future.CompletableActorFuture.get(CompletableActorFuture.java:141)
at io.zeebe.util.sched.future.CompletableActorFuture.get(CompletableActorFuture.java:108)
at io.zeebe.util.sched.FutureUtil.join(FutureUtil.java:21)
at io.zeebe.util.sched.future.CompletableActorFuture.join(CompletableActorFuture.java:196)
at io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest.shouldNotCompactNotExportedEvents(SingleBrokerDataDeletionTest.java:79)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: java.lang.IllegalStateException: Segment not open
at com.google.common.base.Preconditions.checkState(Preconditions.java:508)
at io.atomix.storage.journal.JournalSegment.checkOpen(JournalSegment.java:234)
at io.atomix.storage.journal.JournalSegment.createReader(JournalSegment.java:211)
at io.atomix.storage.journal.SegmentedJournalReader.initialize(SegmentedJournalReader.java:41)
at io.atomix.storage.journal.SegmentedJournalReader.<init>(SegmentedJournalReader.java:34)
at io.atomix.storage.journal.SegmentedJournal.openReader(SegmentedJournal.java:227)
at io.atomix.raft.storage.log.RaftLog.openReader(RaftLog.java:63)
at io.atomix.raft.partition.impl.RaftPartitionServer.openReader(RaftPartitionServer.java:198)
at io.zeebe.logstreams.storage.atomix.AtomixRaftServer.create(AtomixRaftServer.java:30)
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:18)
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:22)
at io.zeebe.logstreams.storage.atomix.AtomixLogStorage.newReader(AtomixLogStorage.java:40)
at io.zeebe.logstreams.impl.log.LogStreamReaderImpl.<init>(LogStreamReaderImpl.java:41)
at io.zeebe.logstreams.impl.log.LogStreamImpl.lambda$newLogStreamReader$2(LogStreamImpl.java:117)
at io.zeebe.util.sched.ActorJob.invoke(ActorJob.java:62)
at io.zeebe.util.sched.ActorJob.execute(ActorJob.java:39)
at io.zeebe.util.sched.ActorTask.execute(ActorTask.java:118)
at io.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:107)
at io.zeebe.util.sched.ActorThread.doWork(ActorThread.java:91)
at io.zeebe.util.sched.ActorThread.run(ActorThread.java:204)
</pre>
</details>
**Hypotheses**
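One possible hypothesis (unverified): the broker shutdown closes the Raft journal segment while an in-flight actor job is still creating a `LogStreamReader`, so `JournalSegment.checkOpen` throws `IllegalStateException: Segment not open` — the logs show the uncaught exception right around `RaftPartitionGroup - Stopped`. A minimal sketch of that ordering, using a hypothetical stand-in class (not the real Atomix API):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for Atomix's JournalSegment open/closed state.
class Segment {
    private final AtomicBoolean open = new AtomicBoolean(true);

    void close() {
        open.set(false);
    }

    // Mirrors the createReader -> checkOpen pattern: creating a reader
    // on a closed segment fails with IllegalStateException.
    String createReader() {
        if (!open.get()) {
            throw new IllegalStateException("Segment not open");
        }
        return "reader";
    }
}

public class SegmentRaceSketch {
    public static void main(String[] args) {
        Segment segment = new Segment();
        System.out.println(segment.createReader()); // fine while open

        // Broker shutdown closes the segment ...
        segment.close();

        // ... but a still-queued actor job tries to open a reader,
        // reproducing the "Segment not open" failure mode.
        try {
            segment.createReader();
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

If this is the cause, the fix would likely be ordering: reject or drain pending reader-creation jobs before the segment is closed.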
**Logs**
<details><summary>Logs</summary>
<pre>
00:16:18.453 [] [main] INFO io.zeebe.test - Test started: shouldNotCompactNotExportedEvents(io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest)
00:16:18.458 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27822 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=23}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27823 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=24}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27824 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=25}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27825 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=26}
00:16:18.459 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27826 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=27}
00:16:18.459 [] [main] DEBUG io.zeebe.broker.system - Initializing system with base path /tmp/junit5766860902055733790/0
00:16:18.461 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27827 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=28}
00:16:18.461 [] [main] INFO io.zeebe.test.util.SocketUtil - Choosing next port 27828 for test fork 2 with range PortRange{host='localhost', basePort=27800, maxOffset=100, currentOffset=29}
00:16:18.461 [] [Thread-559] INFO io.zeebe.broker.system - Version: 0.24.0-SNAPSHOT
00:16:18.466 [] [Thread-559] INFO io.zeebe.broker.system - Starting broker 0 with configuration {
"network" : {
"host" : "0.0.0.0",
"portOffset" : 0,
"maxMessageSize" : "0MB",
"advertisedHost" : "0.0.0.0",
"commandApi" : {
"host" : "0.0.0.0",
"port" : 27824,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27824,
"address" : "0.0.0.0:27824",
"advertisedAddress" : "0.0.0.0:27824"
},
"internalApi" : {
"host" : "0.0.0.0",
"port" : 27825,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27825,
"address" : "0.0.0.0:27825",
"advertisedAddress" : "0.0.0.0:27825"
},
"monitoringApi" : {
"host" : "0.0.0.0",
"port" : 27826,
"advertisedHost" : "0.0.0.0",
"advertisedPort" : 27826,
"address" : "0.0.0.0:27826",
"advertisedAddress" : "0.0.0.0:27826"
},
"maxMessageSizeInBytes" : 8192
},
"cluster" : {
"initialContactPoints" : [ ],
"partitionIds" : [ 1 ],
"nodeId" : 0,
"partitionsCount" : 1,
"replicationFactor" : 1,
"clusterSize" : 1,
"clusterName" : "zeebe-cluster-1",
"gossipFailureTimeout" : 2000,
"gossipInterval" : 150,
"gossipProbeInterval" : 250
},
"threads" : {
"cpuThreadCount" : 2,
"ioThreadCount" : 2
},
"data" : {
"directories" : [ "/tmp/junit5766860902055733790/0/data" ],
"logSegmentSize" : "0MB",
"snapshotPeriod" : "PT1M",
"logIndexDensity" : 5,
"logSegmentSizeInBytes" : 8192
},
"exporters" : {
"snapshot-test-exporter" : {
"jarPath" : null,
"className" : "io.zeebe.broker.it.clustering.SingleBrokerDataDeletionTest$ControllableExporter",
"args" : null,
"external" : false
}
},
"gateway" : {
"network" : {
"host" : "0.0.0.0",
"port" : 27822,
"minKeepAliveInterval" : "PT30S"
},
"cluster" : {
"contactPoint" : "0.0.0.0:27825",
"requestTimeout" : "PT15S",
"clusterName" : "zeebe-cluster",
"memberId" : "gateway",
"host" : "0.0.0.0",
"port" : 27823
},
"threads" : {
"managementThreads" : 1
},
"monitoring" : {
"enabled" : false,
"host" : "0.0.0.0",
"port" : 9600
},
"security" : {
"enabled" : false,
"certificateChainPath" : null,
"privateKeyPath" : null
},
"enable" : false
},
"backpressure" : {
"enabled" : true,
"algorithm" : "VEGAS"
},
"stepTimeout" : "PT5M",
"executionMetricsExporterEnabled" : false
}
00:16:18.467 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [1/10]: actor scheduler
00:16:18.469 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [1/10]: actor scheduler started in 2 ms
00:16:18.469 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [2/10]: membership and replication protocol
00:16:18.478 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [2/10]: membership and replication protocol started in 9 ms
00:16:18.478 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [3/10]: command api transport
00:16:18.530 [] [Thread-559] DEBUG io.zeebe.broker.system - Bound command API to 0.0.0.0:27824
00:16:18.531 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [3/10]: command api transport started in 53 ms
00:16:18.531 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [4/10]: command api handler
00:16:18.533 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [4/10]: command api handler started in 1 ms
00:16:18.534 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [5/10]: subscription api
00:16:18.534 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [5/10]: subscription api started in 0 ms
00:16:18.534 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/10]: cluster services
00:16:18.556 [] [atomix-0] WARN io.atomix.primitive.partition.impl.DefaultPartitionGroupMembershipService - Failed to locate partition group(s) via bootstrap nodes. Please ensure partition groups are configured either locally or remotely and the node is able to reach partition group members.
00:16:18.559 [] [atomix-0] INFO io.atomix.raft.partition.impl.RaftPartitionServer - Starting server for partition PartitionId{id=1, group=raft-partition}
00:16:18.633 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to FOLLOWER
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.roles.FollowerRole - RaftServer{raft-partition-partition-1}{role=FOLLOWER} - Single member cluster. Transitioning directly to candidate.
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to CANDIDATE
00:16:18.634 [] [raft-server-0-raft-partition-partition-1] WARN io.atomix.utils.event.ListenerRegistry - Listener io.atomix.raft.roles.FollowerRole$$Lambda$387/0x0000000840424840@70419b60 not registered
00:16:18.636 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.roles.CandidateRole - RaftServer{raft-partition-partition-1}{role=CANDIDATE} - Single member cluster. Transitioning directly to leader.
00:16:18.645 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to LEADER
00:16:18.646 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Found leader 0
00:16:18.653 [] [raft-partition-group-raft-partition-0] INFO io.atomix.raft.partition.RaftPartitionGroup - Started
00:16:18.653 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [6/10]: cluster services started in 118 ms
00:16:18.654 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [7/10]: topology manager
00:16:18.654 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [7/10]: topology manager started in 0 ms
00:16:18.654 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server
00:16:18.657 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server started in 3 ms
00:16:18.657 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler
00:16:18.657 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler started in 0 ms
00:16:18.657 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 [10/10]: zeebe partitions
00:16:18.658 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 partitions [1/1]: partition 1
00:16:18.658 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Removing follower partition service for partition PartitionId{id=1, group=raft-partition}
00:16:18.659 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Partition role transitioning from null to LEADER
00:16:18.659 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Installing leader partition service for partition PartitionId{id=1, group=raft-partition}
00:16:18.660 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams.snapshot - Available snapshots: []
00:16:19.582 [] [raft-partition-group-raft-partition-0] INFO io.atomix.raft.partition.RaftPartitionGroup - Started
00:16:19.635 [] [main] INFO io.zeebe.gateway - Version: 0.24.0-SNAPSHOT
00:16:19.644 [] [main] INFO io.zeebe.gateway - Starting gateway with configuration {
"network" : {
"host" : "0.0.0.0",
"port" : 27827,
"minKeepAliveInterval" : "PT30S"
},
"cluster" : {
"contactPoint" : "0.0.0.0:27825",
"requestTimeout" : "PT45S",
"clusterName" : "zeebe-cluster-1",
"memberId" : "gateway",
"host" : "0.0.0.0",
"port" : 27828
},
"threads" : {
"managementThreads" : 1
},
"monitoring" : {
"enabled" : false,
"host" : "0.0.0.0",
"port" : 9600
},
"security" : {
"enabled" : false,
"certificateChainPath" : null,
"privateKeyPath" : null
}
}
00:16:19.652 [GatewayTopologyManager] [gateway-scheduler-zb-actors-0] DEBUG io.zeebe.gateway - Received new broker BrokerInfo{nodeId=0, partitionsCount=1, clusterSize=1, replicationFactor=1, partitionRoles={}, partitionLeaderTerms={}, version=0.24.0-SNAPSHOT}.
00:16:19.960 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams.snapshot - Opened database from '/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/runtime'.
00:16:19.962 [Broker-0-LogStream-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
00:16:19.962 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
00:16:19.963 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@791945ba)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@526fb264, configuration: Configuration(false)]
00:16:19.964 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
00:16:19.965 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6afeb026)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@59f91c56, configuration: Configuration(false)]
00:16:19.965 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@762a799f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@15784dde, configuration: Configuration(false)]
00:16:19.969 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Recovering exporter from snapshot
00:16:19.970 [Broker-0-HealthCheckService] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - All partitions are installed. Broker is ready!
00:16:19.971 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 partitions [1/1]: partition 1 started in 1313 ms
00:16:19.971 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Recovered exporter 'Broker-0-Exporter-1' from snapshot at lastExportedPosition -1
00:16:19.971 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 partitions succeeded. Started 1 steps in 1313 ms.
00:16:19.971 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Configure exporter with id 'snapshot-test-exporter'
00:16:19.971 [] [Thread-559] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [10/10]: zeebe partitions started in 1313 ms
00:16:19.971 [] [Thread-559] INFO io.zeebe.broker.system - Bootstrap Broker-0 succeeded. Started 10 steps in 1504 ms.
00:16:19.972 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Set event filter for exporters: ExporterEventFilter{acceptRecordTypes={COMMAND_REJECTION=true, SBE_UNKNOWN=true, COMMAND=true, EVENT=true, NULL_VAL=true}, acceptValueTypes={INCIDENT=true, JOB_BATCH=true, VARIABLE=true, WORKFLOW_INSTANCE_SUBSCRIPTION=true, JOB=true, NULL_VAL=true, WORKFLOW_INSTANCE_CREATION=true, DEPLOYMENT=true, MESSAGE_START_EVENT_SUBSCRIPTION=true, WORKFLOW_INSTANCE_RESULT=true, WORKFLOW_INSTANCE=true, MESSAGE=true, SBE_UNKNOWN=true, MESSAGE_SUBSCRIPTION=true, VARIABLE_DOCUMENT=true, ERROR=true, TIMER=true}}
00:16:19.972 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Open exporter with id 'snapshot-test-exporter'
00:16:20.058 [GatewayTopologyManager] [gateway-scheduler-zb-actors-0] DEBUG io.zeebe.gateway - Received metadata change from Broker 0, partitions {1=LEADER} and terms {1=1}.
00:16:20.146 [] [main] INFO io.zeebe.broker.system - Full replication factor
00:16:20.236 [] [main] INFO io.zeebe.broker.system - All brokers in topology TopologyImpl{brokers=[BrokerInfoImpl{nodeId=0, host='0.0.0.0', port=27824, version=0.24.0-SNAPSHOT, partitions=[PartitionInfoImpl{partitionId=1, role=LEADER}]}], clusterSize=1, partitionsCount=1, replicationFactor=1, gatewayVersion=0.24.0-SNAPSHOT}
00:16:22.773 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Detected unhealthy components. The current health status of components: {Raft-1=HEALTHY, Broker-0-StreamProcessor-1=UNHEALTHY, logStream=HEALTHY}
00:16:22.774 [Broker-0-HealthCheckService] [Broker-0-zb-actors-1] ERROR io.zeebe.broker.system - Partition-1 failed, marking it as unhealthy
00:16:22.777 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Taking temporary snapshot into /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/pushed-pending/84-1-1588292182776.
00:16:23.004 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Created snapshot for Broker-0-StreamProcessor-1
00:16:23.006 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO io.zeebe.logstreams.snapshot - Finished taking snapshot, need to wait until last written event position 8589939224 is committed, current commit position is 8589939224. After that snapshot can be marked as valid.
00:16:23.006 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO io.zeebe.logstreams.snapshot - Current commit position 8589939224 is greater then 8589939224, snapshot is valid.
00:16:23.039 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.AtomixSnapshotStorage - Purging snapshots older than DbSnapshot{directory=/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776, metadata=DbSnapshotMetadata{index=84, term=1, timestamp=2020-05-01 12:16:22,776}}
00:16:23.041 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.AtomixSnapshotStorage - Search for orphaned snapshots below oldest valid snapshot with index 84 in /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/pushed-pending
00:16:23.044 [Broker-0-DeletionService-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.logstreams.delete - Compacting Atomix log up to index 84
00:16:23.045 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.DbSnapshotStore - Committed new snapshot DbSnapshot{directory=/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776, metadata=DbSnapshotMetadata{index=84, term=1, timestamp=2020-05-01 12:16:22,776}}
00:16:23.045 [] [raft-server-0-raft-partition-partition-1] DEBUG io.zeebe.broker.clustering.atomix.ZeebeRaftStateMachine - ZeebeRaftStateMachine1{partition=raft-partition-partition-1} - Compacting log up from 45 up to {}
00:16:23.046 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Start replicating latest snapshot /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776
00:16:23.129 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] ERROR io.zeebe.util.actor - Uncaught exception in 'Broker-0-LogStream-1' in phase 'STARTED'. Continuing with next job.
java.lang.IllegalStateException: Segment not open
at com.google.common.base.Preconditions.checkState(Preconditions.java:508) ~[guava-29.0-jre.jar:?]
at io.atomix.storage.journal.JournalSegment.checkOpen(JournalSegment.java:234) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.JournalSegment.createReader(JournalSegment.java:211) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournalReader.initialize(SegmentedJournalReader.java:41) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournalReader.<init>(SegmentedJournalReader.java:34) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.storage.journal.SegmentedJournal.openReader(SegmentedJournal.java:227) ~[atomix-storage-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.raft.storage.log.RaftLog.openReader(RaftLog.java:63) ~[atomix-cluster-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.atomix.raft.partition.impl.RaftPartitionServer.openReader(RaftPartitionServer.java:198) ~[atomix-cluster-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixRaftServer.create(AtomixRaftServer.java:30) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:18) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixReaderFactory.create(AtomixReaderFactory.java:22) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.storage.atomix.AtomixLogStorage.newReader(AtomixLogStorage.java:40) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.impl.log.LogStreamReaderImpl.<init>(LogStreamReaderImpl.java:41) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.logstreams.impl.log.LogStreamImpl.lambda$newLogStreamReader$2(LogStreamImpl.java:117) ~[zeebe-logstreams-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorJob.invoke(ActorJob.java:62) ~[zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorJob.execute(ActorJob.java:39) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorTask.execute(ActorTask.java:118) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:107) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.doWork(ActorThread.java:91) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
at io.zeebe.util.sched.ActorThread.run(ActorThread.java:204) [zeebe-util-0.24.0-SNAPSHOT.jar:0.24.0-SNAPSHOT]
00:16:23.131 [] [main] INFO io.zeebe.test.records - Test failed, following records where exported:
00:16:23.132 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000092.sst
00:16:23.133 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000093.sst
00:16:23.134 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000094.sst
00:16:23.134 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000095.sst
00:16:23.135 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000096.sst
00:16:23.136 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/000097.sst
00:16:23.136 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/CURRENT
00:16:23.137 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/MANIFEST-000004
00:16:23.137 [] [ForkJoinPool.commonPool-worker-5] INFO io.atomix.raft.partition.RaftPartitionGroup - Stopped
00:16:23.138 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG io.zeebe.logstreams.snapshot - Replicate snapshot chunk /tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/snapshots/84-1-1588292182776/OPTIONS-000090
00:16:23.139 [] [main] DEBUG io.zeebe.gateway - Closing gateway broker client ...
00:16:23.144 [] [main] DEBUG io.zeebe.gateway - topology manager closed
00:16:23.145 [] [main] DEBUG io.zeebe.gateway - Gateway broker client closed.
00:16:23.146 [] [main] DEBUG io.zeebe.broker.system - Closing ClusteringRule...
00:16:23.146 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [1/10]: zeebe partitions
00:16:23.146 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 partitions [1/1]: partition 1
00:16:23.146 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Closing Broker-0-Exporter-1
00:16:23.147 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] DEBUG io.zeebe.broker.exporter - Closed exporter director 'Broker-0-Exporter-1'.
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-Exporter-1 successfully
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-SnapshotDirector-1
00:16:23.147 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-SnapshotDirector-1 successfully
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-StreamProcessor-1
00:16:23.148 [Broker-0-StreamProcessor-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-StreamProcessor-1 successfully
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closing Broker-0-DeletionService-1
00:16:23.148 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.broker.system - Closed Broker-0-DeletionService-1 successfully
00:16:23.252 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-0] DEBUG io.zeebe.logstreams.snapshot - Closed database from '/tmp/junit5766860902055733790/0/data/raft-partition/partitions/1/runtime'.
00:16:23.330 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - Close appender for log stream raft-partition-partition-1
00:16:23.331 [raft-partition-partition-1-write-buffer] [Broker-0-zb-actors-0] DEBUG io.zeebe.dispatcher - Dispatcher closed
00:16:23.331 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - On closing logstream raft-partition-partition-1 close 3 readers
00:16:23.331 [Broker-0-LogStream-1] [Broker-0-zb-actors-0] INFO io.zeebe.logstreams - Close log storage with name raft-partition-partition-1
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 partitions [1/1]: partition 1 closed in 186 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 partitions succeeded. Closed 1 steps in 186 ms.
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [1/10]: zeebe partitions closed in 186 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [2/10]: leader management request handler
00:16:23.332 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [2/10]: leader management request handler closed in 0 ms
00:16:23.332 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [3/10]: metric's server
00:16:23.337 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [3/10]: metric's server closed in 5 ms
00:16:23.337 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [4/10]: topology manager
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [4/10]: topology manager closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [5/10]: cluster services
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [5/10]: cluster services closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [6/10]: subscription api
00:16:23.338 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [6/10]: subscription api closed in 0 ms
00:16:23.338 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [7/10]: command api handler
00:16:23.339 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [7/10]: command api handler closed in 1 ms
00:16:23.339 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [8/10]: command api transport
00:16:25.558 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [8/10]: command api transport closed in 2218 ms
00:16:25.558 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [9/10]: membership and replication protocol
00:16:25.559 [] [raft-server-0-raft-partition-partition-1] INFO io.atomix.raft.impl.RaftContext - RaftServer{raft-partition-partition-1} - Transitioning to INACTIVE
00:16:25.634 [] [ForkJoinPool.commonPool-worker-3] INFO io.atomix.raft.partition.RaftPartitionGroup - Stopped
00:16:27.848 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [9/10]: membership and replication protocol closed in 2290 ms
00:16:27.848 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 [10/10]: actor scheduler
00:16:27.849 [] [main] DEBUG io.zeebe.broker.system - Closing Broker-0 [10/10]: actor scheduler closed in 1 ms
00:16:27.849 [] [main] INFO io.zeebe.broker.system - Closing Broker-0 succeeded. Closed 10 steps in 4703 ms.
00:16:27.849 [] [main] INFO io.zeebe.broker.system - Broker shut down.
</pre>
</details>
system all partitions are installed broker is ready debug io zeebe broker system bootstrap broker partitions partition started in ms debug io zeebe broker exporter recovered exporter broker exporter from snapshot at lastexportedposition info io zeebe broker system bootstrap broker partitions succeeded started steps in ms debug io zeebe broker exporter configure exporter with id snapshot test exporter debug io zeebe broker system bootstrap broker zeebe partitions started in ms info io zeebe broker system bootstrap broker succeeded started steps in ms debug io zeebe broker exporter set event filter for exporters exportereventfilter acceptrecordtypes command rejection true sbe unknown true command true event true null val true acceptvaluetypes incident true job batch true variable true workflow instance subscription true job true null val true workflow instance creation true deployment true message start event subscription true workflow instance result true workflow instance true message true sbe unknown true message subscription true variable document true error true timer true debug io zeebe broker exporter open exporter with id snapshot test exporter debug io zeebe gateway received metadata change from broker partitions leader and terms info io zeebe broker system full replication factor info io zeebe broker system all brokers in topology topologyimpl brokers clustersize partitionscount replicationfactor gatewayversion snapshot debug io zeebe broker system detected unhealthy components the current health status of components raft healthy broker streamprocessor unhealthy logstream healthy error io zeebe broker system partition failed marking it as unhealthy debug io zeebe logstreams snapshot taking temporary snapshot into tmp data raft partition partitions pushed pending debug io zeebe logstreams snapshot created snapshot for broker streamprocessor info io zeebe logstreams snapshot finished taking snapshot need to wait until last written event position is committed 
current commit position is after that snapshot can be marked as valid info io zeebe logstreams snapshot current commit position is greater then snapshot is valid debug io zeebe broker clustering atomix storage snapshot atomixsnapshotstorage purging snapshots older than dbsnapshot directory tmp data raft partition partitions snapshots metadata dbsnapshotmetadata index term timestamp debug io zeebe broker clustering atomix storage snapshot atomixsnapshotstorage search for orphaned snapshots below oldest valid snapshot with index in tmp data raft partition partitions pushed pending debug io zeebe broker logstreams delete compacting atomix log up to index debug io zeebe broker clustering atomix storage snapshot dbsnapshotstore committed new snapshot dbsnapshot directory tmp data raft partition partitions snapshots metadata dbsnapshotmetadata index term timestamp debug io zeebe broker clustering atomix zeeberaftstatemachine partition raft partition partition compacting log up from up to debug io zeebe logstreams snapshot start replicating latest snapshot tmp data raft partition partitions snapshots error io zeebe util actor uncaught exception in broker logstream in phase started continuing with next job java lang illegalstateexception segment not open at com google common base preconditions checkstate preconditions java at io atomix storage journal journalsegment checkopen journalsegment java at io atomix storage journal journalsegment createreader journalsegment java at io atomix storage journal segmentedjournalreader initialize segmentedjournalreader java at io atomix storage journal segmentedjournalreader segmentedjournalreader java at io atomix storage journal segmentedjournal openreader segmentedjournal java at io atomix raft storage log raftlog openreader raftlog java at io atomix raft partition impl raftpartitionserver openreader raftpartitionserver java at io zeebe logstreams storage atomix atomixraftserver create atomixraftserver java at io zeebe logstreams 
storage atomix atomixreaderfactory create atomixreaderfactory java at io zeebe logstreams storage atomix atomixreaderfactory create atomixreaderfactory java at io zeebe logstreams storage atomix atomixlogstorage newreader atomixlogstorage java at io zeebe logstreams impl log logstreamreaderimpl logstreamreaderimpl java at io zeebe logstreams impl log logstreamimpl lambda newlogstreamreader logstreamimpl java at io zeebe util sched actorjob invoke actorjob java at io zeebe util sched actorjob execute actorjob java at io zeebe util sched actortask execute actortask java at io zeebe util sched actorthread executecurrenttask actorthread java at io zeebe util sched actorthread dowork actorthread java at io zeebe util sched actorthread run actorthread java info io zeebe test records test failed following records where exported debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots sst debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots current debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots manifest info io atomix raft partition raftpartitiongroup stopped debug io zeebe logstreams snapshot replicate snapshot chunk tmp data raft partition partitions snapshots options debug io zeebe gateway closing gateway broker client debug io zeebe gateway topology manager closed debug io zeebe 
gateway gateway broker client closed debug io zeebe broker system closing clusteringrule info io zeebe broker system closing broker zeebe partitions info io zeebe broker system closing broker partitions partition debug io zeebe broker system closing broker exporter debug io zeebe broker exporter closed exporter director broker exporter debug io zeebe broker system closed broker exporter successfully debug io zeebe broker system closing broker snapshotdirector debug io zeebe broker system closed broker snapshotdirector successfully debug io zeebe broker system closing broker streamprocessor debug io zeebe logstreams closed stream processor controller broker streamprocessor debug io zeebe broker system closed broker streamprocessor successfully debug io zeebe broker system closing broker deletionservice debug io zeebe broker system closed broker deletionservice successfully debug io zeebe logstreams snapshot closed database from tmp data raft partition partitions runtime info io zeebe logstreams close appender for log stream raft partition partition debug io zeebe dispatcher dispatcher closed info io zeebe logstreams on closing logstream raft partition partition close readers info io zeebe logstreams close log storage with name raft partition partition debug io zeebe broker system closing broker partitions partition closed in ms info io zeebe broker system closing broker partitions succeeded closed steps in ms debug io zeebe broker system closing broker zeebe partitions closed in ms info io zeebe broker system closing broker leader management request handler debug io zeebe broker system closing broker leader management request handler closed in ms info io zeebe broker system closing broker metric s server debug io zeebe broker system closing broker metric s server closed in ms info io zeebe broker system closing broker topology manager debug io zeebe broker system closing broker topology manager closed in ms info io zeebe broker system closing broker cluster services 
debug io zeebe broker system closing broker cluster services closed in ms info io zeebe broker system closing broker subscription api debug io zeebe broker system closing broker subscription api closed in ms info io zeebe broker system closing broker command api handler debug io zeebe broker system closing broker command api handler closed in ms info io zeebe broker system closing broker command api transport debug io zeebe broker system closing broker command api transport closed in ms info io zeebe broker system closing broker membership and replication protocol info io atomix raft impl raftcontext raftserver raft partition partition transitioning to inactive info io atomix raft partition raftpartitiongroup stopped debug io zeebe broker system closing broker membership and replication protocol closed in ms info io zeebe broker system closing broker actor scheduler debug io zeebe broker system closing broker actor scheduler closed in ms info io zeebe broker system closing broker succeeded closed steps in ms info io zeebe broker system broker shut down
| 1
|
210,969
| 16,136,622,558
|
IssuesEvent
|
2021-04-29 12:41:34
|
Polkadex-Substrate/Polkadex
|
https://api.github.com/repos/Polkadex-Substrate/Polkadex
|
closed
|
Remove Balances Pallet
|
Network: Mainnet Network: Testnet enhancement
|
**Is your feature request related to a problem? Please describe.**
Balances pallet is a relic from the Substrate node template, and we are not using it anymore in Polkadex.
**Describe the solution you'd like**
We have to remove the balances pallet and replace all instances of that with NativeAssetCurrency<Self> of Polkadex Custom Assets
**Additional context**
Give importance to transaction payment pallet, staking, treasury, and any other module dealing with balances pallet and native currency
|
1.0
|
Remove Balances Pallet - **Is your feature request related to a problem? Please describe.**
Balances pallet is a relic from the Substrate node template, and we are not using it anymore in Polkadex.
**Describe the solution you'd like**
We have to remove the balances pallet and replace all instances of that with NativeAssetCurrency<Self> of Polkadex Custom Assets
**Additional context**
Give importance to transaction payment pallet, staking, treasury, and any other module dealing with balances pallet and native currency
|
test
|
remove balances pallet is your feature request related to a problem please describe balances pallet is a relic from the substrate node template and we are not using it anymore in polkadex describe the solution you d like we have to remove the balances pallet and replace all instances of that with nativeassetcurrency of polkadex custom assets additional context give importance to transaction payment pallet staking treasury and any other module dealing with balances pallet and native currency
| 1
|
124,629
| 10,318,882,594
|
IssuesEvent
|
2019-08-30 15:58:27
|
microsoft/localizationkit
|
https://api.github.com/repos/microsoft/localizationkit
|
opened
|
Test for ensuring required languages exist
|
Test Request
|
A test should be added to ensure that all required languages are covered. A list of languages could be provided and we check that each language in that list is covered.
|
1.0
|
Test for ensuring required languages exist - A test should be added to ensure that all required languages are covered. A list of languages could be provided and we check that each language in that list is covered.
|
test
|
test for ensuring required languages exist a test should be added to ensure that all required languages are covered a list of languages could be provided and we check that each language in that list is covered
| 1
|
11,405
| 30,349,367,554
|
IssuesEvent
|
2023-07-11 17:42:44
|
MicrosoftDocs/architecture-center
|
https://api.github.com/repos/MicrosoftDocs/architecture-center
|
closed
|
PowerPoint find need extra permission after download
|
doc-enhancement assigned-to-author triaged architecture-center/svc Pri2 azure-guide/subsvc
|
Architecture PowerPoint file that mentioned for download needed extra permission after download!
https://arch-center.azureedge.net/conversation-summarization-overview.pptx
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: aa815b05-e4be-75fd-04bb-9136b6228502
* Version Independent ID: aa815b05-e4be-75fd-04bb-9136b6228502
* Content: [Conversation summarization - Azure Architecture Center](https://learn.microsoft.com/en-us/azure/architecture/guide/ai/conversation-summarization)
* Content Source: [docs/guide/ai/conversation-summarization.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/guide/ai/conversation-summarization.yml)
* Service: **architecture-center**
* Sub-service: **azure-guide**
* GitHub Login: @mejani
* Microsoft Alias: **mejani**
|
1.0
|
PowerPoint find need extra permission after download - Architecture PowerPoint file that mentioned for download needed extra permission after download!
https://arch-center.azureedge.net/conversation-summarization-overview.pptx
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: aa815b05-e4be-75fd-04bb-9136b6228502
* Version Independent ID: aa815b05-e4be-75fd-04bb-9136b6228502
* Content: [Conversation summarization - Azure Architecture Center](https://learn.microsoft.com/en-us/azure/architecture/guide/ai/conversation-summarization)
* Content Source: [docs/guide/ai/conversation-summarization.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/guide/ai/conversation-summarization.yml)
* Service: **architecture-center**
* Sub-service: **azure-guide**
* GitHub Login: @mejani
* Microsoft Alias: **mejani**
|
non_test
|
powerpoint find need extra permission after download architecture powerpoint file that mentioned for download needed extra permission after download document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service azure guide github login mejani microsoft alias mejani
| 0
|
224,696
| 17,769,666,194
|
IssuesEvent
|
2021-08-30 12:12:01
|
milvus-io/milvus
|
https://api.github.com/repos/milvus-io/milvus
|
closed
|
Test case "test_release_partition_during_searching" should be updated for the behavior change for searching released partitions
|
area/test
|
<!-- Please state your issue using the following template and, most importantly, in English. -->
#### Steps/Code to reproduce:
For milvus 2.0 version, searching released partitions should return error.
while the original case could search 0 result.
#### Expected result:
#### Actual results:
#### Environment:
- Milvus version(e.g. v2.0.0-RC2 or 8b23a93):
- Deployment mode(standalone or cluster):
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
#### Configuration file:
#### Additional context:
|
1.0
|
Test case "test_release_partition_during_searching" should be updated for the behavior change for searching released partitions - <!-- Please state your issue using the following template and, most importantly, in English. -->
#### Steps/Code to reproduce:
For milvus 2.0 version, searching released partitions should return error.
while the original case could search 0 result.
#### Expected result:
#### Actual results:
#### Environment:
- Milvus version(e.g. v2.0.0-RC2 or 8b23a93):
- Deployment mode(standalone or cluster):
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
#### Configuration file:
#### Additional context:
|
test
|
test case test release partition during searching should be updated for the behavior change for searching released partitions steps code to reproduce for milvus version searching released partitions should return error while the original case could search result expected result actual results environment milvus version e g or deployment mode standalone or cluster sdk version e g pymilvus os ubuntu or centos cpu memory gpu others configuration file additional context
| 1
|
137,455
| 11,137,785,930
|
IssuesEvent
|
2019-12-20 20:21:04
|
prestosql/presto
|
https://api.github.com/repos/prestosql/presto
|
opened
|
Flaky TestParquetReader.testComplexNestedStructs
|
bug test
|
```
2019-12-20T13:38:22.1711546Z [ERROR] Tests run: 1545, Failures: 1, Errors: 0, Skipped: 58, Time elapsed: 1,451.957 s <<< FAILURE! - in TestSuite
2019-12-20T13:38:22.1721158Z [ERROR] testComplexNestedStructs(io.prestosql.plugin.hive.parquet.TestParquetReader) Time elapsed: 0.021 s <<< FAILURE!
2019-12-20T13:38:22.1726596Z java.lang.IllegalArgumentException: struct field values cannot be empty
2019-12-20T13:38:22.1732906Z at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
2019-12-20T13:38:22.1742356Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.lambda$createTestStructs$6(AbstractTestParquetReader.java:1620)
2019-12-20T13:38:22.1746637Z at java.util.ArrayList.forEach(ArrayList.java:1257)
2019-12-20T13:38:22.1755530Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.createTestStructs(AbstractTestParquetReader.java:1620)
2019-12-20T13:38:22.1764461Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.createNullableTestStructs(AbstractTestParquetReader.java:1629)
2019-12-20T13:38:22.1773090Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.testComplexNestedStructs(AbstractTestParquetReader.java:580)
2019-12-20T13:38:22.1778165Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2019-12-20T13:38:22.1784414Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2019-12-20T13:38:22.1791215Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2019-12-20T13:38:22.1795469Z at java.lang.reflect.Method.invoke(Method.java:498)
2019-12-20T13:38:22.1802445Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
2019-12-20T13:38:22.1805479Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
2019-12-20T13:38:22.1807152Z at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
2019-12-20T13:38:22.1808955Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
2019-12-20T13:38:22.1810881Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
2019-12-20T13:38:22.1812673Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
2019-12-20T13:38:22.1814622Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2019-12-20T13:38:22.1816476Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2019-12-20T13:38:22.1818082Z at java.lang.Thread.run(Thread.java:748)
2019-12-20T13:38:22.1818353Z
```
|
1.0
|
Flaky TestParquetReader.testComplexNestedStructs - ```
2019-12-20T13:38:22.1711546Z [ERROR] Tests run: 1545, Failures: 1, Errors: 0, Skipped: 58, Time elapsed: 1,451.957 s <<< FAILURE! - in TestSuite
2019-12-20T13:38:22.1721158Z [ERROR] testComplexNestedStructs(io.prestosql.plugin.hive.parquet.TestParquetReader) Time elapsed: 0.021 s <<< FAILURE!
2019-12-20T13:38:22.1726596Z java.lang.IllegalArgumentException: struct field values cannot be empty
2019-12-20T13:38:22.1732906Z at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
2019-12-20T13:38:22.1742356Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.lambda$createTestStructs$6(AbstractTestParquetReader.java:1620)
2019-12-20T13:38:22.1746637Z at java.util.ArrayList.forEach(ArrayList.java:1257)
2019-12-20T13:38:22.1755530Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.createTestStructs(AbstractTestParquetReader.java:1620)
2019-12-20T13:38:22.1764461Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.createNullableTestStructs(AbstractTestParquetReader.java:1629)
2019-12-20T13:38:22.1773090Z at io.prestosql.plugin.hive.parquet.AbstractTestParquetReader.testComplexNestedStructs(AbstractTestParquetReader.java:580)
2019-12-20T13:38:22.1778165Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2019-12-20T13:38:22.1784414Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2019-12-20T13:38:22.1791215Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2019-12-20T13:38:22.1795469Z at java.lang.reflect.Method.invoke(Method.java:498)
2019-12-20T13:38:22.1802445Z at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
2019-12-20T13:38:22.1805479Z at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
2019-12-20T13:38:22.1807152Z at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
2019-12-20T13:38:22.1808955Z at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
2019-12-20T13:38:22.1810881Z at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
2019-12-20T13:38:22.1812673Z at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
2019-12-20T13:38:22.1814622Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2019-12-20T13:38:22.1816476Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2019-12-20T13:38:22.1818082Z at java.lang.Thread.run(Thread.java:748)
2019-12-20T13:38:22.1818353Z
```
|
test
|
flaky testparquetreader testcomplexnestedstructs tests run failures errors skipped time elapsed s failure in testsuite testcomplexnestedstructs io prestosql plugin hive parquet testparquetreader time elapsed s failure java lang illegalargumentexception struct field values cannot be empty at com google common base preconditions checkargument preconditions java at io prestosql plugin hive parquet abstracttestparquetreader lambda createteststructs abstracttestparquetreader java at java util arraylist foreach arraylist java at io prestosql plugin hive parquet abstracttestparquetreader createteststructs abstracttestparquetreader java at io prestosql plugin hive parquet abstracttestparquetreader createnullableteststructs abstracttestparquetreader java at io prestosql plugin hive parquet abstracttestparquetreader testcomplexnestedstructs abstracttestparquetreader java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invoker invokemethod invoker java at org testng internal invoker invoketestmethod invoker java at org testng internal invoker invoketestmethods invoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 1
|
629,978
| 20,073,227,476
|
IssuesEvent
|
2022-02-04 09:45:45
|
Soulcialize/souldragonknight
|
https://api.github.com/repos/Soulcialize/souldragonknight
|
opened
|
Add basic movement to Dragon player
|
type.Enhancement priority.High
|
The Dragon player should be able to:
- Fly in all 4 directions (up, down, left, right)
|
1.0
|
Add basic movement to Dragon player - The Dragon player should be able to:
- Fly in all 4 directions (up, down, left, right)
|
non_test
|
add basic movement to dragon player the dragon player should be able to fly in all directions up down left right
| 0
|
108,575
| 16,778,536,690
|
IssuesEvent
|
2021-06-15 02:50:41
|
S69y/flight-manual.atom.io
|
https://api.github.com/repos/S69y/flight-manual.atom.io
|
opened
|
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz
|
security vulnerability
|
## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- node-sass-3.13.1.tgz
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- gulp-babel-6.1.2.tgz (Root Library)
- gulp-util-3.0.8.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/S69y/flight-manual.atom.io/commit/dc229a9f59fdfe6153b16c2f9456017e48115716">dc229a9f59fdfe6153b16c2f9456017e48115716</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- node-sass-3.13.1.tgz
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- gulp-babel-6.1.2.tgz (Root Library)
- gulp-util-3.0.8.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/S69y/flight-manual.atom.io/commit/dc229a9f59fdfe6153b16c2f9456017e48115716">dc229a9f59fdfe6153b16c2f9456017e48115716</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file flight manual atom io package json path to vulnerable library flight manual atom io node modules mkdirp node modules minimist package json dependency hierarchy gulp sass tgz root library node sass tgz mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file flight manual atom io package json path to vulnerable library flight manual atom io node modules minimist package json dependency hierarchy gulp babel tgz root library gulp util tgz x minimist tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource
| 0
|
173,723
| 13,439,193,906
|
IssuesEvent
|
2020-09-07 20:19:25
|
vmiklos/osm-gimmisn
|
https://api.github.com/repos/vmiklos/osm-gimmisn
|
closed
|
robots.txt update-result
|
confirmed enhancement needs testing
|
I'm not sure how good an idea it is that search-engine crawler requests refresh the files by invoking the update-result page...
They could be banned from that part of the site in robots.txt,
something like this:
```
User-agent: *
Disallow: /*update-result$
```
|
1.0
|
robots.txt update-result - I'm not sure how good an idea it is that search-engine crawler requests refresh the files by invoking the update-result page...
They could be banned from that part of the site in robots.txt,
something like this:
```
User-agent: *
Disallow: /*update-result$
```
|
test
|
robots txt update result i m not sure how good an idea it is that search engine crawler requests refresh the files by invoking the update result page they could be banned from that part of the site in robots txt something like this user agent disallow update result
| 1
|
342,175
| 10,313,064,565
|
IssuesEvent
|
2019-08-29 21:30:58
|
xaptum/go-enf
|
https://api.github.com/repos/xaptum/go-enf
|
opened
|
Add network resource support to NetworkService
|
Priority: High Type: Enhancement
|
(Depends on #3 and #5)
A "network" is a ::/64 block inside a customer's ::/48 domain. Add methods to the `NetworkService` for listing, getting, and creating networks. The API currently does not support deleting a network.
- [ ] List Networks
Method signature:
```go
func (s *NetworkService) ListNetworks(ctx context.Context, domain string) ([]*Network, *http.Response, error)
```
API endpoint:
`GET /api/xcr/v2/domains/<domain_id>/nws`
Response Format:
```json
{ data : [ {<network1>}, {<network2>} ],
pages : {}
}
```
See `auth.go` for how to handle this response format.
- [ ] Get Network
Method signature:
```go
func (s *NetworkService) GetNetwork(ctx context.Context, domain string, network string) (*Network, *http.Response, error)
```
API endpoint:
`GET /api/xcr/v2/domains/<domain_id>/nws/<network_id>`
I'm not sure if this API endpoint exists. Check with Venkat. If not, skip this one for now, but open an issue on xaptum/xcr to add this endpoint.
Response Format:
```json
{ data : [ {<network1>} ],
pages : {}
}
```
- [ ] CreateNetwork
Method signature:
```go
func (s *NetworkService) CreateNetwork(ctx context.Context, domain string) (*Network, *http.Response, error)
```
API endpoint:
`POST /api/xcr/v2/domains/<domain_id>/nws`
Response Format:
```json
{ data : [ {<network1>} ],
pages : {}
}
```
|
1.0
|
Add network resource support to NetworkService - (Depends on #3 and #5)
A "network" is a ::/64 block inside a customer's ::/48 domain. Add methods to the `NetworkService` for listing, getting, and creating networks. The API currently does not support deleting a network.
- [ ] List Networks
Method signature:
```go
func (s *NetworkService) ListNetworks(ctx context.Context, domain string) ([]*Network, *http.Response, error)
```
API endpoint:
`GET /api/xcr/v2/domains/<domain_id>/nws`
Response Format:
```json
{ data : [ {<network1>}, {<network2>} ],
pages : {}
}
```
See `auth.go` for how to handle this response format.
- [ ] Get Network
Method signature:
```go
func (s *NetworkService) GetNetwork(ctx context.Context, domain string, network string) (*Network, *http.Response, error)
```
API endpoint:
`GET /api/xcr/v2/domains/<domain_id>/nws/<network_id>`
I'm not sure if this API endpoint exists. Check with Venkat. If not, skip this one for now, but open an issue on xaptum/xcr to add this endpoint.
Response Format:
```json
{ data : [ {<network1>} ],
pages : {}
}
```
- [ ] CreateNetwork
Method signature:
```go
func (s *NetworkService) CreateNetwork(ctx context.Context, domain string) (*Network, *http.Response, error)
```
API endpoint:
`POST /api/xcr/v2/domains/<domain_id>/nws`
Response Format:
```json
{ data : [ {<network1>} ],
pages : {}
}
```
|
non_test
|
add network resource support to networkservice depends on and a network is a block inside a customer s domain add methods to the networkservice for listing getting and creating networks the api currently does not support deleting a network list networks method signature go func s networkservice listnetworks ctx context context domain string network http response error api endpoint get api xcr domains nws response format json data pages see auth go for how to handle this response format get network method signature func s networkservice getnetwork ctx context context domain string network string network http response error api endpoint get api xcr domains nws i m not sure if this api endpoint exists check with venkat if not skip this one for now but open an issue on xaptum xcr to add this endpoint response format json data pages createnetwork method signature func s networkservice createnetwork ctx context context domain string network http response error api endpoint post api xcr domains nws response format json data pages
| 0
|
772,676
| 27,131,585,236
|
IssuesEvent
|
2023-02-16 10:09:42
|
shaka-project/shaka-player
|
https://api.github.com/repos/shaka-project/shaka-player
|
closed
|
Seek never completes with target before buffer and inside small gap limit
|
type: bug status: waiting on response priority: P2
|
<!-- NOTE: If you ignore this template, we will send it again and ask you to fill it out anyway. -->
**Have you read the [FAQ](https://bit.ly/ShakaFAQ) and checked for duplicate open issues?**
yes
**What version of Shaka Player are you using?**
2.5.6
**Can you reproduce the issue with our latest release version?**
yes
**Can you reproduce the issue with the latest code from `master`?**
(I am not building the player from source)
**Are you using the demo app or your own custom app?**
Custom
**If custom app, can you reproduce the issue using our demo app?**
This issue would be difficult to reproduce using manual controls.
**What browser and OS are you using?**
Windows 10, Chrome latest
**For embedded devices (smart TVs, etc.), what model and firmware version are you using?**
**What are the manifest and license server URIs?**
<!-- NOTE:
You can send the URIs to <shaka-player-issues@google.com> instead,
but please use GitHub and the template for the rest.
A copy of the manifest text or an attached manifest will **not** be
enough to reproduce your issue, and we **will** ask you to send a
URI instead. You can copy the URI of the demo app to send us the
exact asset, licence server, and settings you have selected there.
-->
unencrypted
**What did you do?**
<!-- Steps to reproduce the bug -->
See [This Repo](https://github.com/btsimonh/shakatest) for reproduction.
seek to ~1 minute, then seek backwards 1s at a time (arrow left...) until the video freezes but the reported time continues to change by -1s.
**What did you expect to happen?**
The video should not freeze. The player should load buffers and seek in video.
**What actually happened?**
<!-- A clear and concise description of what the bug is -->
<!-- If applicable, you may add screenshots to help explain your problem. -->
When the requested time is before the start of the first buffer loaded by up to config.streaming.smallGapLimit, buffers are cleared but not loaded, and the seek does not complete, but the currentTime changes. (i.e. the video freezes).
When the seeked time is < (time of last first buffer - smallGapLimit) or > (time of last first buffer), normal seeking is resumed.
Note: I have not checked the exact numbers :).
|
1.0
|
Seek never completes with target before buffer and inside small gap limit - <!-- NOTE: If you ignore this template, we will send it again and ask you to fill it out anyway. -->
**Have you read the [FAQ](https://bit.ly/ShakaFAQ) and checked for duplicate open issues?**
yes
**What version of Shaka Player are you using?**
2.5.6
**Can you reproduce the issue with our latest release version?**
yes
**Can you reproduce the issue with the latest code from `master`?**
(I am not building the player from source)
**Are you using the demo app or your own custom app?**
Custom
**If custom app, can you reproduce the issue using our demo app?**
This issue would be difficult to reproduce using manual controls.
**What browser and OS are you using?**
Windows 10, Chrome latest
**For embedded devices (smart TVs, etc.), what model and firmware version are you using?**
**What are the manifest and license server URIs?**
<!-- NOTE:
You can send the URIs to <shaka-player-issues@google.com> instead,
but please use GitHub and the template for the rest.
A copy of the manifest text or an attached manifest will **not** be
enough to reproduce your issue, and we **will** ask you to send a
URI instead. You can copy the URI of the demo app to send us the
exact asset, licence server, and settings you have selected there.
-->
unencrypted
**What did you do?**
<!-- Steps to reproduce the bug -->
See [This Repo](https://github.com/btsimonh/shakatest) for reproduction.
seek to ~1 minute, then seek backwards 1s at a time (arrow left...) until the video freezes but the reported time continues to change by -1s.
**What did you expect to happen?**
The video should not freeze. The player should load buffers and seek in video.
**What actually happened?**
<!-- A clear and concise description of what the bug is -->
<!-- If applicable, you may add screenshots to help explain your problem. -->
When the requested time is before the start of the first buffer loaded by up to config.streaming.smallGapLimit, buffers are cleared but not loaded, and the seek does not complete, but the currentTime changes. (i.e. the video freezes).
When the seeked time is < (time of last first buffer - smallGapLimit) or > (time of last first buffer), normal seeking is resumed.
Note: I have not checked the exact numbers :).
|
non_test
|
seek never completes with target before buffer and inside small gap limit have you read the and checked for duplicate open issues yes what version of shaka player are you using can you reproduce the issue with our latest release version yes can you reproduce the issue with the latest code from master i am not building the player from source are you using the demo app or your own custom app custom if custom app can you reproduce the issue using our demo app this issue would be difficult to reproduce using manual controls what browser and os are you using windows chrome latest for embedded devices smart tvs etc what model and firmware version are you using what are the manifest and license server uris note you can send the uris to instead but please use github and the template for the rest a copy of the manifest text or an attached manifest will not be enough to reproduce your issue and we will ask you to send a uri instead you can copy the uri of the demo app to send us the exact asset licence server and settings you have selected there unencrypted what did you do see for reproduction seek to minute then seek backwards at a time arrow left until the video freezes but the reported time continues to change by what did you expect to happen the video should not freeze the player should load buffers and seek in video what actually happened when the requested time is before the start of the first buffer loaded by up to config streaming smallgaplimit buffers are cleared but not loaded and the seek does not complete but the currenttime changes i e the video freezes when the seeked time is time of last first buffer normal seeking is resumed note i have not checked the exact numbers
| 0
|
10,406
| 6,714,465,732
|
IssuesEvent
|
2017-10-13 17:02:04
|
AdamsLair/duality
|
https://api.github.com/repos/AdamsLair/duality
|
opened
|
Introduce a Subdirectory for Non-Plugin, Non-Executable Binaries
|
Breaking Change Core Editor Feature Usability
|
### Summary
In the course of issue #574, it became apparent that the NuGet update will introduce a large number of additional binaries to the Duality runtime. Keeping them all in the main directory is not very nice to work with, but as the project grows and matures, this might not remain restricted to NuGet only. To fix this, introduce a new binary subfolder next to `Plugins` where all non-plugin, non-executable binaries are stored.
### Analysis
- The `DefaultAssemblyLoader` will need to be updated with regard to the assembly search paths.
- The existing `AssemblyResolve` handler needs to be extended to search in the new subfolder as well.
- Make sure to load `.pdb` files too, where available.
- Update the package manager code and tests with the new file mappings.
- Need to update the `Publish Game` dialog / publish script.
- Potential names:
- `Assemblies`, too generic. Plugins and executables are assemblies too.
- `Binaries`, too generic. Plugins and executables are binaries too.
- `Libraries`, probably the best one so far.
- ?
|
True
|
Introduce a Subdirectory for Non-Plugin, Non-Executable Binaries - ### Summary
In the course of issue #574, it became apparent that the NuGet update will introduce a large number of additional binaries to the Duality runtime. Keeping them all in the main directory is not very nice to work with, but as the project grows and matures, this might not remain restricted to NuGet only. To fix this, introduce a new binary subfolder next to `Plugins` where all non-plugin, non-executable binaries are stored.
### Analysis
- The `DefaultAssemblyLoader` will need to be updated with regard to the assembly search paths.
- The existing `AssemblyResolve` handler needs to be extended to search in the new subfolder as well.
- Make sure to load `.pdb` files too, where available.
- Update the package manager code and tests with the new file mappings.
- Need to update the `Publish Game` dialog / publish script.
- Potential names:
- `Assemblies`, too generic. Plugins and executables are assemblies too.
- `Binaries`, too generic. Plugins and executables are binaries too.
- `Libraries`, probably the best one so far.
- ?
|
non_test
|
introduce a subdirectory for non plugin non executable binaries summary in the course of issue it became apparent that the nuget update will introduce a large number of additional binaries to the duality runtime keeping them all in the main directory is not very nice to work with but as the project grows and matures this might not remain restricted to nuget only to fix this introduce a new binary subfolder next to plugins where all non plugin non executable binaries are stored analysis the defaultassemblyloader will need to be updated with regard to the assembly search paths the existing assemblyresolve handler needs to be extended to search in the new subfolder as well make sure to load pdb files too where available update the package manager code and tests with the new file mappings need to update the publish game dialog publish script potential names assemblies too generic plugins and executables are assemblies too binaries too generic plugins and executables are binaries too libraries probably the best one so far
| 0
|
109,439
| 4,387,416,435
|
IssuesEvent
|
2016-08-08 15:43:57
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
Mysterious CI Failure - Marking the build as aborted.
|
priority: backlog team: software core type: bug type: continuous integration
|
# The Error
```
[drake:make] -- Installing: /home/ubuntu/workspace/linux-gcc-exBuild timed out (after 240 minutes). Marking the build as aborted.
20:06:49 Build was aborted
```
# Example Logs
* https://drake-jenkins.csail.mit.edu/job/linux-gcc-experimental-ros/75/console
|
1.0
|
Mysterious CI Failure - Marking the build as aborted. - # The Error
```
[drake:make] -- Installing: /home/ubuntu/workspace/linux-gcc-exBuild timed out (after 240 minutes). Marking the build as aborted.
20:06:49 Build was aborted
```
# Example Logs
* https://drake-jenkins.csail.mit.edu/job/linux-gcc-experimental-ros/75/console
|
non_test
|
mysterious ci failure marking the build as aborted the error installing home ubuntu workspace linux gcc exbuild timed out after minutes marking the build as aborted build was aborted example logs
| 0
|
85,442
| 7,969,800,740
|
IssuesEvent
|
2018-07-16 10:21:17
|
italia/spid
|
https://api.github.com/repos/italia/spid
|
closed
|
Metadata check for the Comune di Pero
|
aggiornamento md test metadata
|
Good morning,
On behalf of the Comune di Pero, we have prepared the metadata and published it in the folder
https://pero.comune-online.it/serviziSPID/metatada.xml
This metadata is the result of appending to the previous metadata
and was prepared while preserving the entityId and the previous certificates and endpoints.
Best regards,
Facondini Stefano
Maggioli spa
|
1.0
|
Metadata check for the Comune di Pero - Good morning,
On behalf of the Comune di Pero, we have prepared the metadata and published it in the folder
https://pero.comune-online.it/serviziSPID/metatada.xml
This metadata is the result of appending to the previous metadata
and was prepared while preserving the entityId and the previous certificates and endpoints.
Best regards,
Facondini Stefano
Maggioli spa
|
test
|
metadata check for the comune di pero good morning on behalf of the comune di pero we have prepared the metadata and published it in the folder this metadata is the result of appending to the previous metadata and was prepared while preserving the entityid and the previous certificates and endpoints best regards facondini stefano maggioli spa
| 1
|
42,831
| 7,005,342,337
|
IssuesEvent
|
2017-12-19 01:30:50
|
MightyPirates/OpenComputers
|
https://api.github.com/repos/MightyPirates/OpenComputers
|
closed
|
Loot disks cannot be found in chests
|
documentation
|
Playing on MC 1.10.2 OC 1.7.0.124 and I cannot find loot disks at all. At first I thought one of my other mods were interfering with the chest loot tables but after a fresh install with the only mod being OC I still cannot find any, even in creative. I realize that I can simply craft them but that's no fun for RP purposes.
For the record, my previous foray into an earlier opencomputers version on MC 1.7 yielded the disks easily.
|
1.0
|
Loot disks cannot be found in chests - Playing on MC 1.10.2 OC 1.7.0.124 and I cannot find loot disks at all. At first I thought one of my other mods were interfering with the chest loot tables but after a fresh install with the only mod being OC I still cannot find any, even in creative. I realize that I can simply craft them but that's no fun for RP purposes.
For the record, my previous foray into an earlier opencomputers version on MC 1.7 yielded the disks easily.
|
non_test
|
loot disks cannot be found in chests playing on mc oc and i cannot find loot disks at all at first i thought one of my other mods were interfering with the chest loot tables but after a fresh install with the only mod being oc i still cannot find any even in creative i realize that i can simply craft them but that s no fun for rp purposes for the record my previous foray into an earlier opencomputers version on mc yielded the disks easily
| 0
|
35,574
| 7,781,543,737
|
IssuesEvent
|
2018-06-06 00:56:01
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Integration tests and application routing
|
Defect plugins routing
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.x
* Platform and Target: Any
### What you did
When relying on the Application class's `route()` hook to load routes, OR using a plugin with routes integration tests fail when trying to use array based routes.
A fully working example application is available: https://github.com/dakota/plugin-test/tree/application-routing
### What happened
`Cake\Routing\Exception\MissingRouteException` is thrown as demonstrated https://travis-ci.com/dakota/plugin-test/builds/74640142
### What you expected to happen
Integration tests should work.
The cause of failure is https://github.com/cakephp/cakephp/blob/master/src/TestSuite/IntegrationTestCase.php#L671
An attempt is made to match the route before the Router middleware is loaded, and so no routes have been loaded at this time. Trying to load the routes manually before the tests works unless named routes are used (In which case a duplicate route exception is thrown)
The server class will need to be refactored to load middleware/routes before execution. Alternatively an `AppBuilder` needs to be created that can build the application state (Including middleware and routing) before a server instance is created.
|
1.0
|
Integration tests and application routing - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.6.x
* Platform and Target: Any
### What you did
When relying on the Application class's `route()` hook to load routes, OR using a plugin with routes integration tests fail when trying to use array based routes.
A fully working example application is available: https://github.com/dakota/plugin-test/tree/application-routing
### What happened
`Cake\Routing\Exception\MissingRouteException` is thrown as demonstrated https://travis-ci.com/dakota/plugin-test/builds/74640142
### What you expected to happen
Integration tests should work.
The cause of failure is https://github.com/cakephp/cakephp/blob/master/src/TestSuite/IntegrationTestCase.php#L671
An attempt is made to match the route before the Router middleware is loaded, and so no routes have been loaded at this time. Trying to load the routes manually before the tests works unless named routes are used (In which case a duplicate route exception is thrown)
The server class will need to be refactored to load middleware/routes before execution. Alternatively an `AppBuilder` needs to be created that can build the application state (Including middleware and routing) before a server instance is created.
|
non_test
|
integration tests and application routing this is a multiple allowed bug enhancement feature discussion rfc cakephp version x platform and target any what you did when relying on the application class s route hook to load routes or using a plugin with routes integration tests fail when trying to use array based routes a fully working example application is available what happened cake routing exception missingrouteexception is thrown as demonstrated what you expected to happen integration tests should work the cause of failure is an attempt is made to match the route before the router middleware is loaded and so no routes have been loaded at this time trying to load the routes manually before the tests works unless named routes are used in which case a duplicate route exception is thrown the server class will need to be refactored to load middleware routes before execution alternatively an appbuilder needs to be created that can build the application state including middleware and routing before a server instance is created
| 0
|
229,614
| 18,394,030,927
|
IssuesEvent
|
2021-10-12 09:18:11
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
opened
|
Fail to execute the action 'Clone and Rehydrate' in one attached blob container whose SAS URL is created by an access policy (All permissions)
|
🧪 testing :gear: blobs :gear: sas
|
**Storage Explorer Version:** 1.22.0-dev
**Build Number:** 20211012.1
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Big Sur 11.6
**Architecture:** ia32\x64
**How Found:** Ad-hoc testing
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Expand one non-ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Upload one blob -> Update the access tier to 'Archive'.
3. Right click the blob container -> Click 'Manage Access Policies...'.
4. Add one access policy with all permissions -> Generate a SAS URL via the access policy.
5. Attach the blob container via the SAS URL.
6. Switch to the attached blob container.
7. Right click the blob -> Click 'Clone and Rehydrate'.
8. Input a valid destination blob name -> Click 'Apply'.
9. Check whether succeed to execute the action 'Clone and Rehydrate'.
## Expected Experience ##
Succeed to execute the action 'Clone and Rehydrate'.
## Actual Experience ##
Fail to execute the action 'Clone and Rehydrate'.

## Additional Context ##
1. This issue also reproduces when attaching via SAS permissions (Read, Add, Create, Write, Delete, Delete version, List).
2. This issue doesn't reproduce when attaching via SAS permissions (All permissions).
3. Error details:
```
"name": "RestError",
"message": "This request is not authorized to perform this operation using this permission".
```
|
1.0
|
Fail to execute the action 'Clone and Rehydrate' in one attached blob container whose SAS URL is created by an access policy (All permissions) - **Storage Explorer Version:** 1.22.0-dev
**Build Number:** 20211012.1
**Branch:** main
**Platform/OS:** Windows 10/Linux Ubuntu 20.04/MacOS Big Sur 11.6
**Architecture:** ia32\x64
**How Found:** Ad-hoc testing
**Regression From:** Not a regression
## Steps to Reproduce ##
1. Expand one non-ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Upload one blob -> Update the access tier to 'Archive'.
3. Right click the blob container -> Click 'Manage Access Policies...'.
4. Add one access policy with all permissions -> Generate a SAS URL via the access policy.
5. Attach the blob container via the SAS URL.
6. Switch to the attached blob container.
7. Right click the blob -> Click 'Clone and Rehydrate'.
8. Input a valid destination blob name -> Click 'Apply'.
9. Check whether succeed to execute the action 'Clone and Rehydrate'.
## Expected Experience ##
Succeed to execute the action 'Clone and Rehydrate'.
## Actual Experience ##
Fail to execute the action 'Clone and Rehydrate'.

## Additional Context ##
1. This issue also reproduces when attaching via SAS permissions (Read, Add, Create, Write, Delete, Delete version, List).
2. This issue doesn't reproduce when attaching via SAS permissions (All permissions).
3. Error details:
```
"name": "RestError",
"message": "This request is not authorized to perform this operation using this permission".
```
|
test
|
fail to execute the action clone and rehydrate in one attached blob container which sas url is created by an access policy all permissions storage explorer version dev build number branch main platform os windows linux ubuntu macos big sur architecture how found ad hoc testing regression from not a regression steps to reproduce expand one non adls storage account blob containers create a blob container upload one blob update the access tier to archive right click the blob container click manage access policies add one access policy with all permissions generate a sas url via the access policy attach the blob container via the sas url switch to the attached blob container right click the blob click clone and rehydrate input a valid destination blob name click apply check whether succeed to execute the action clone and rehydrate expected experience succeed to execute the action clone and rehydrate actual experience fail to execute the action clone and rehydrate additional context this issue also reproduces when attaching via sas permissions read add create write delete delete version list this issue doesn t reproduce when attaching via sas permissions all permissions error details name resterror message this request is not authorized to perform this operation using this permission
| 1
|
98,336
| 8,675,490,476
|
IssuesEvent
|
2018-11-30 11:02:15
|
shahkhan40/shantestrep
|
https://api.github.com/repos/shahkhan40/shantestrep
|
closed
|
fxscantest : ApiV1OrgsOrgidOrgUserOrguseridGetPathParamOrguseridNullValue
|
fxscantest
|
Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/{orgId}/org-user/null
Request :
Response :
Not enough variable values available to expand 'orgId'
Logs :
Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
fxscantest : ApiV1OrgsOrgidOrgUserOrguseridGetPathParamOrguseridNullValue - Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/api/v1/orgs/{orgId}/org-user/null
Request :
Response :
Not enough variable values available to expand 'orgId'
Logs :
Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]
--- FX Bot ---
|
test
|
fxscantest project fxscantest job uatenv env uatenv region us west result fail status code headers endpoint request response not enough variable values available to expand orgid logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 1
|
335,868
| 30,089,478,314
|
IssuesEvent
|
2023-06-29 11:11:06
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[ENHANCEMENT] Expose more shortcut keys
|
kind/enhancement area/ui require-ui/small not-require/test-plan required-for-rc/v1.2.0
|
**Is your enhancement related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We should expose more key sequences in vnc console.
- Trigger kernel memory dump via crash in Windows Server (CTRL-SCROLL LOCK-SCROLL LOCK) See https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/forcing-a-system-crash-from-the-keyboard?redirectedfrom=MSDN
- Trigger kernel memory dump via crash in Linux (ALT-SYSRQ-c or depending on keyboard it's ALT-PRINT SCREEN-c) See https://www.kernel.org/doc/html/v4.11/admin-guide/sysrq.html
- Restart X11 Server (CTRL-ALT-BACKSPACE) See https://manpages.opensuse.org/Tumbleweed/xorg-x11-server/xorg.conf.5.en.html
- Access Linux console at tty1 from X11 Server (CTRL-ALT-F1, also CTRL-ALT-F2, etc for other TTYs) See https://manpages.opensuse.org/Tumbleweed/xorg-x11-server/xorg.conf.5.en.html
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->

|
1.0
|
[ENHANCEMENT] Expose more shortcut keys - **Is your enhancement related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We should expose more key sequences in vnc console.
- Trigger kernel memory dump via crash in Windows Server (CTRL-SCROLL LOCK-SCROLL LOCK) See https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/forcing-a-system-crash-from-the-keyboard?redirectedfrom=MSDN
- Trigger kernel memory dump via crash in Linux (ALT-SYSRQ-c or depending on keyboard it's ALT-PRINT SCREEN-c) See https://www.kernel.org/doc/html/v4.11/admin-guide/sysrq.html
- Restart X11 Server (CTRL-ALT-BACKSPACE) See https://manpages.opensuse.org/Tumbleweed/xorg-x11-server/xorg.conf.5.en.html
- Access Linux console at tty1 from X11 Server (CTRL-ALT-F1, also CTRL-ALT-F2, etc for other TTYs) See https://manpages.opensuse.org/Tumbleweed/xorg-x11-server/xorg.conf.5.en.html
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->

|
test
|
expose more shortcut keys is your enhancement related to a problem please describe we should expose more key sequences in vnc console trigger kernel memory dump via crash in windows server ctrl scroll lock scroll lock see trigger kernel memory dump via crash in linux alt sysrq c or depending on keyboard it s alt print screen c see restart server ctrl alt backspace see access linux console at from server ctrl alt also ctrl alt etc for other ttys see describe the solution you d like describe alternatives you ve considered additional context
| 1
|
298,910
| 22,579,107,883
|
IssuesEvent
|
2022-06-28 09:57:54
|
ONSdigital/design-system
|
https://api.github.com/repos/ONSdigital/design-system
|
closed
|
[Bug]: `content` parameter of feedback component documented incorrectly
|
Bug Documentation
|
Currently the documentation reads:
> The URL for the action of the feedback form
Which doesn't make sense for the `content` parameter since it is output as a paragraph in the rendered HTML.
|
1.0
|
[Bug]: `content` parameter of feedback component documented incorrectly - Currently the documentation reads:
> The URL for the action of the feedback form
Which doesn't make sense for the `content` parameter since it is output as a paragraph in the rendered HTML.
|
non_test
|
content parameter of feedback component documented incorrectly currently the documentation reads the url for the action of the feedback form which doesn t make sense for the content parameter since it is output as a paragraph in the rendered html
| 0
|
234,646
| 19,213,547,632
|
IssuesEvent
|
2021-12-07 06:35:16
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
IT unstable test `TestSocketAndIp`
|
type/bug component/test sig/sql-infra severity/major
|
## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
in ci https://ci.pingcap.net/blue/organizations/jenkins/tidb_ghpr_coverage/detail/tidb_ghpr_coverage/1389/pipeline
```bash
[2021-11-29T10:36:01.747Z] --- FAIL: TestSocketAndIp (2.67s)
[2021-11-29T10:36:01.747Z] dbtestkit.go:81:
[2021-11-29T10:36:01.747Z] Error Trace: dbtestkit.go:81
[2021-11-29T10:36:01.747Z] tidb_test.go:583
[2021-11-29T10:36:01.747Z] server_test.go:117
[2021-11-29T10:36:01.747Z] tidb_test.go:576
[2021-11-29T10:36:01.747Z] Error: Received unexpected error:
[2021-11-29T10:36:01.747Z] Error 9012: TiFlash server timeout
[2021-11-29T10:36:01.747Z] Test: TestSocketAndIp
[2021-11-29T10:36:01.747Z] Messages: sql:select user(), args:[]
[2021-11-29T10:36:01.747Z] [2021/11/29 18:33:17.851 +08:00] [ERROR] [http_status.go:470] ["start status/rpc server error"] [error="accept tcp 127.0.0.1:46228: use of closed network connection"]
[2021-11-29T10:36:01.747Z] [2021/11/29 18:33:17.852 +08:00] [ERROR] [http_status.go:465] ["http server error"] [error="http: Server closed"]
```
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
1.0
|
IT unstable test `TestSocketAndIp` - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
in ci https://ci.pingcap.net/blue/organizations/jenkins/tidb_ghpr_coverage/detail/tidb_ghpr_coverage/1389/pipeline
```bash
[2021-11-29T10:36:01.747Z] --- FAIL: TestSocketAndIp (2.67s)
[2021-11-29T10:36:01.747Z] dbtestkit.go:81:
[2021-11-29T10:36:01.747Z] Error Trace: dbtestkit.go:81
[2021-11-29T10:36:01.747Z] tidb_test.go:583
[2021-11-29T10:36:01.747Z] server_test.go:117
[2021-11-29T10:36:01.747Z] tidb_test.go:576
[2021-11-29T10:36:01.747Z] Error: Received unexpected error:
[2021-11-29T10:36:01.747Z] Error 9012: TiFlash server timeout
[2021-11-29T10:36:01.747Z] Test: TestSocketAndIp
[2021-11-29T10:36:01.747Z] Messages: sql:select user(), args:[]
[2021-11-29T10:36:01.747Z] [2021/11/29 18:33:17.851 +08:00] [ERROR] [http_status.go:470] ["start status/rpc server error"] [error="accept tcp 127.0.0.1:46228: use of closed network connection"]
[2021-11-29T10:36:01.747Z] [2021/11/29 18:33:17.852 +08:00] [ERROR] [http_status.go:465] ["http server error"] [error="http: Server closed"]
```
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
test
|
it unstable test testsocketandip bug report please answer these questions before submitting your issue thanks minimal reproduce step required in ci bash fail testsocketandip dbtestkit go error trace dbtestkit go tidb test go server test go tidb test go error received unexpected error error tiflash server timeout test testsocketandip messages sql select user args what did you expect to see required what did you see instead required what is your tidb version required
| 1
|
320,760
| 27,456,277,838
|
IssuesEvent
|
2023-03-02 21:39:27
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
reopened
|
DISABLED test_optional_list (test_jit.TestScript)
|
oncall: jit module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_optional_list) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_optional_list`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
1.0
|
DISABLED test_optional_list (test_jit.TestScript) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_optional_list) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10335682113).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_optional_list`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
|
test
|
disabled test optional list test jit testscript platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has flakily failed in workflow s debugging instructions after clicking on the recent samples link to find relevant log snippets click on the workflow logs linked above grep for test optional list cc eikanwang wenzhe nrv sanchitintel
| 1
|
244,067
| 26,348,331,580
|
IssuesEvent
|
2023-01-11 01:05:25
|
opentok/OpenTok-PHP-SDK
|
https://api.github.com/repos/opentok/OpenTok-PHP-SDK
|
closed
|
phpunit/phpunit-8.5.31: 2 vulnerabilities (highest severity is: 6.1) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>phpunit/phpunit-8.5.31</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (phpunit/phpunit version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-11023](https://www.mend.io/vulnerability-database/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | phpunit/php-code-coverage-7.0.15 | Transitive | N/A* | ❌ |
| [CVE-2020-11022](https://www.mend.io/vulnerability-database/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | phpunit/php-code-coverage-7.0.15 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>phpunit/php-code-coverage-7.0.15</b></p>
<p>Library that provides collection, processing, and rendering functionality for PHP code coverage information.</p>
<p>Library home page: <a href="https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48">https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48</a></p>
<p>
Dependency Hierarchy:
- phpunit/phpunit-8.5.31 (Root Library)
- :x: **phpunit/php-code-coverage-7.0.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>phpunit/php-code-coverage-7.0.15</b></p>
<p>Library that provides collection, processing, and rendering functionality for PHP code coverage information.</p>
<p>Library home page: <a href="https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48">https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48</a></p>
<p>
Dependency Hierarchy:
- phpunit/phpunit-8.5.31 (Root Library)
- :x: **phpunit/php-code-coverage-7.0.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
</details>
|
True
|
phpunit/phpunit-8.5.31: 2 vulnerabilities (highest severity is: 6.1) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>phpunit/phpunit-8.5.31</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (phpunit/phpunit version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2020-11023](https://www.mend.io/vulnerability-database/CVE-2020-11023) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | phpunit/php-code-coverage-7.0.15 | Transitive | N/A* | ❌ |
| [CVE-2020-11022](https://www.mend.io/vulnerability-database/CVE-2020-11022) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | phpunit/php-code-coverage-7.0.15 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11023</summary>
### Vulnerable Library - <b>phpunit/php-code-coverage-7.0.15</b></p>
<p>Library that provides collection, processing, and rendering functionality for PHP code coverage information.</p>
<p>Library home page: <a href="https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48">https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48</a></p>
<p>
Dependency Hierarchy:
- phpunit/phpunit-8.5.31 (Root Library)
- :x: **phpunit/php-code-coverage-7.0.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-11022</summary>
### Vulnerable Library - <b>phpunit/php-code-coverage-7.0.15</b></p>
<p>Library that provides collection, processing, and rendering functionality for PHP code coverage information.</p>
<p>Library home page: <a href="https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48">https://api.github.com/repos/sebastianbergmann/php-code-coverage/zipball/819f92bba8b001d4363065928088de22f25a3a48</a></p>
<p>
Dependency Hierarchy:
- phpunit/phpunit-8.5.31 (Root Library)
- :x: **phpunit/php-code-coverage-7.0.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opentok/OpenTok-PHP-SDK/commit/1493c01d5435adf3cd4c1902d1963d0e40922821">1493c01d5435adf3cd4c1902d1963d0e40922821</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11022</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
<p></p>
</details>
|
non_test
|
phpunit phpunit vulnerabilities highest severity is autoclosed vulnerable library phpunit phpunit found in head commit a href vulnerabilities cve severity cvss dependency type fixed in phpunit phpunit version remediation available medium phpunit php code coverage transitive n a medium phpunit php code coverage transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library phpunit php code coverage library that provides collection processing and rendering functionality for php code coverage information library home page a href dependency hierarchy phpunit phpunit root library x phpunit php code coverage vulnerable library found in head commit a href found in base branch main vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails cve vulnerable library phpunit php code coverage library that provides collection processing and rendering functionality for php code coverage information library home page a href dependency hierarchy phpunit phpunit root library x phpunit php code coverage vulnerable library found in head commit a href found in base branch main vulnerability details in jquery versions greater than or equal to and before passing 
html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery
| 0
|
268,514
| 23,376,017,612
|
IssuesEvent
|
2022-08-11 03:11:34
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
opened
|
[TEST] Add automated test case for Wrong nodeOrDiskEvicted collected in node monitor
|
kind/test
|
## What's the test to develop? Please describe
Add automated test case for https://github.com/longhorn/longhorn/issues/4143
|
1.0
|
[TEST] Add automated test case for Wrong nodeOrDiskEvicted collected in node monitor - ## What's the test to develop? Please describe
Add automated test case for https://github.com/longhorn/longhorn/issues/4143
|
test
|
add automated test case for wrong nodeordiskevicted collected in node monitor what s the test to develop please describe add automated test case for
| 1
|
498,493
| 14,408,223,731
|
IssuesEvent
|
2020-12-03 23:18:48
|
cb-geo/mpm
|
https://api.github.com/repos/cb-geo/mpm
|
closed
|
Contact Algorithm
|
Priority: High Type: Core feature wontfix
|
# Contact Algorithm
## Summary
Implementation of a contact algorithm to handle the frictional interaction between adjacent materials at the region of contact between the two, also known as the multimaterial contact or interface. This implementation is based on the procedures provided by [Bardenhagen _et al._ (2001)](http://test.techscience.com/CMES/v2n4/24749) and [Nairn (2013)](http://www.cof.orst.edu/cof/wse/faculty/Nairn/papers/MMInterfaces.pdf).
## Motivation
This feature allows one to develop simulations where multimaterial interface interaction is essential to the analyses, such as soil-structure interaction, slip surfaces in slope stability analyses, and fractures. With this feature, the user will be able to assign the desired friction characteristics between any pair of materials, and the bodies represented by those material models will interact with each other accordingly.
## Design Detail
The multimaterial contact approach to be implemented involves mapping several properties from the material points to the nodes, where the significant information concerning the contact behaviour must be computed. For this reason, a property pool, namely `nodal_properties`, implemented as a struct and used within the `Mesh`, stores the nodal properties for distinct materials.
```
//! Nodal property pool
std::shared_ptr<mpm::NodalProperties> nodal_properties_{nullptr};
```
In this entity, different properties are handled with a map container in which each property is associated with its own Eigen matrix, whose rows and columns represent node ids and material ids, respectively.
```
std::map<std::string, Eigen::MatrixXd> properties_;
```
Considering momentum at 2 nodes and for 3 materials as an example in a 2D environment, this is how the information would be stored in this Eigen Matrix:
```
Eigen::Matrix<double,4,3> data2;
data2 << p_110, p_111, p_112,
p_210, p_211, p_212,
p_120, p_121, p_122,
p_220, p_221, p_222;
```
where `p_ijk` is the momentum in the `i` direction, at node `j` and material `k`.
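The row/column layout described above can be mimicked with a plain map of flattened matrices. The following is a simplified, illustrative stand-in for the Eigen-backed `NodalProperties` (the type and method names here are hypothetical, not the actual class interface):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Simplified sketch of the property pool: each named property maps to a
// flattened (rows x cols) matrix, where row = node id * Tdim + direction
// and column = material id, mirroring the p_ijk layout shown above.
// The names PropertyPool and at() are illustrative only.
struct PropertyPool {
  std::map<std::string, std::vector<double>> properties;
  unsigned cols = 0;  // number of materials

  void create_property(const std::string& name, unsigned rows, unsigned ncols) {
    cols = ncols;
    properties[name] = std::vector<double>(rows * ncols, 0.0);
  }

  // Access p_ijk: direction dir, node id node, material id material
  double& at(const std::string& name, unsigned dir, unsigned node,
             unsigned material, unsigned tdim) {
    return properties[name][(node * tdim + dir) * cols + material];
  }
};
```

For the 2D example above (2 nodes, 3 materials), `create_property("momenta", 4, 3)` allocates the 4×3 storage, and `at("momenta", 0, 1, 2, 2)` addresses the momentum in the first direction, at the second node, for the third material.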
The following are the keys (properties in the plural form) stored in this map object:
- `"masses"`
- `"momenta"`
- `"changes_in_momentum"`
- `"displacements"`
- `"separation_vectors"`: a vector representing relative displacement between adjacent materials
- `"normal_vectors"`: normal vector with respect to the interface plane
- `"nodal_gradient_volumes"`: gradient of the extrapolation of particle volume to nodes, representing a previous step to calculate the normal vector
Within the Node class, a shared pointer will be used to access information from the `nodal_properties` without passing this entity as an argument to the computing functions and steps described hereafter. This shared pointer is initialised in `Mesh` (`initialise_property_handle` function) upon creation of all the key-value pairs in the property pool map:
```
// Create the nodal properties' pool
template <unsigned Tdim>
void mpm::Mesh<Tdim>::create_nodal_properties() {
// Initialise the shared pointer to nodal properties
nodal_properties_ = std::make_shared<mpm::NodalProperties>();
// Check if nodes_ and materials_ are empty and throw a runtime error if they are
if (nodes_.size() != 0 && materials_.size() != 0) {
// Create pool data for each property in the nodal properties struct
// object. Properties must be named in the plural form
nodal_properties_->create_property("masses", nodes_.size(),
materials_.size());
nodal_properties_->create_property("momenta", nodes_.size() * Tdim,
materials_.size());
// Iterate over all nodes to initialise the property handle in each node
// and assign its node id as the prop id in the nodal property data pool
for (auto nitr = nodes_.cbegin(); nitr != nodes_.cend(); ++nitr)
(*nitr)->initialise_property_handle((*nitr)->id(), nodal_properties_);
} else {
throw std::runtime_error("Number of nodes or number of materials is zero");
}
}
```
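The shared-ownership wiring above can be sketched with `std::shared_ptr` alone: the mesh owns one pool and hands the same handle to every node. The types below are minimal illustrative stand-ins, not the actual `Node`/`NodalProperties` interfaces:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Minimal sketch of the handle wiring: every node keeps a shared pointer to
// the single property pool owned by the mesh, so updates made through one
// node are visible to all others. Types and members are illustrative only.
struct NodalProperties {
  double mass_sum = 0.0;
};

struct Node {
  unsigned id = 0;
  std::shared_ptr<NodalProperties> properties;
  void initialise_property_handle(unsigned node_id,
                                  const std::shared_ptr<NodalProperties>& pool) {
    id = node_id;
    properties = pool;
  }
};

// Mimics Mesh::create_nodal_properties: one pool, handed to every node.
std::shared_ptr<NodalProperties> wire_nodes(std::vector<Node>& nodes) {
  auto pool = std::make_shared<NodalProperties>();
  for (unsigned i = 0; i < nodes.size(); ++i)
    nodes[i].initialise_property_handle(i, pool);
  return pool;
}
```

Because every node holds a copy of the same `shared_ptr`, the pool is destroyed only after the mesh and all nodes release their handles, which avoids the dangling-pointer risk of passing raw references into the node functions.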
The contact law is applied only when contact, rather than separation, between the adjacent materials is identified. This criterion is verified with two multimaterial parameters: the normal component of the change in momentum and the normal component of the separation vector. If both the normal change in momentum and the normal separation are negative, then the two materials are in contact; otherwise they are separated. This condition is verified at every time step. The following are the steps taken to handle multimaterial contacts:
Before entering the time step loop:
1. Create `nodal_properties` and each key-value pair representing the properties in the map `properties_`.
Within the time step loop:
1. Initialise `nodal_properties`, setting all the parameters to zero.
2. Initialise the material ids set in the Node class by iterating over all nodes. This set of material ids is used in future iterations within the node to compute properties that will be stored in `nodal_properties`, thus accounting for the sparsity of the Eigen matrices.
3. Append material ids to the designated set in the Node class by iterating over all nodes.
4. Map multimaterial mass and momentum to the `nodal_properties` by iterating over all material points.
5. Compute change in momentum by iterating over all nodes.
6. Map displacements to nodes by iterating over all material points.
7. Compute total nodal displacement by iterating over all nodes.
8. Compute separation vector by iterating over all nodes.
9. Compute the nodal gradient volume by iterating over all particles.
10. Compute normal vector by iterating over all nodes.
11. Verify contact or separation condition.
12. Apply contact law if not separated.
Steps 5 to 12 must be implemented after the total nodal velocity has been calculated, a computation that is already implemented in the node.
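Steps 11 and 12 above reduce to a per-node, per-material-pair test: the materials are in contact only when both the normal component of the change in momentum and the normal component of the separation vector are negative. A minimal 2D sketch of this check (function names are illustrative, not the actual solver API):

```cpp
#include <array>
#include <cassert>

// Sketch of the contact/separation criterion (steps 11-12), assuming 2D.
// Two adjacent materials at a node are treated as in contact only when both
// the normal change in momentum and the normal separation are negative;
// otherwise they are separated and the frictional contact law is skipped.
using Vec2 = std::array<double, 2>;

double dot(const Vec2& a, const Vec2& b) {
  return a[0] * b[0] + a[1] * b[1];
}

bool in_contact(const Vec2& delta_momentum, const Vec2& separation,
                const Vec2& normal) {
  return dot(delta_momentum, normal) < 0.0 && dot(separation, normal) < 0.0;
}
```

For example, with the interface normal along +y, a downward change in momentum together with a downward separation vector signals approaching materials, and only then would the contact law of step 12 be applied.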
## Drawbacks
- It will be more expensive computationally.
- It will require a lot of memory to store the information for every node, especially for simulations that comprise a computational grid with many nodes.
- Stored memory might go unused for nodes in the mesh that have only one material, or none, adjacent to them.
- After implementation, future changes to the solver function in the MPMExplicit class must be made while avoiding any disruption to the aforementioned steps.
## Rationale and Alternatives
An alternative to the implemented algorithm would be to store the information directly in the nodes instead of having a property pool (`nodal_properties`). This could even encompass dynamic memory allocation to avoid unnecessary memory usage for nodes with one or no adjacent materials. However, this alternative would result in a slower algorithm, with the possibility of memory leaks if memory is not properly released.
## Prior Art
This implementation is largely based on the directions provided by [Nairn (2013)](http://www.cof.orst.edu/cof/wse/faculty/Nairn/papers/MMInterfaces.pdf). This reference also bases part of its theory on the contributions of [Bardenhagen _et al._ (2001)](http://test.techscience.com/CMES/v2n4/24749). In fact, the latter reference introduces the portion of the contact algorithm envisioned for this design, whereas the former introduced the Imperfect Interface Theory to deal with cases where adjacent materials are not in contact but still constitute what the author calls an imperfect interface. The details of the imperfect interface contribution are out of the scope of this design.
## Unresolved questions
- A proper algorithm for identifying the interfaces may have to be implemented in order to drastically reduce the amount of memory used and to update the size of `nodal_properties` to account only for the nodes where more than one material is identified as adjacent to the node.
|
1.0
|
Contact Algorithm - # Contact Algorithm
## Summary
Implementation of a contact algorithm to handle the frictional interaction between adjacent materials at the region of contact between the two, also known as the multimaterial contact or interface. This implementation is based on the procedures provided by [Bardenhagen _et al._ (2001)](http://test.techscience.com/CMES/v2n4/24749) and [Nairn (2013)](http://www.cof.orst.edu/cof/wse/faculty/Nairn/papers/MMInterfaces.pdf).
## Motivation
This feature allows one to develop simulations where multimaterial interface interaction is essential to the analyses, such as soil-structure interaction, slip surfaces in slope stability analyses, and fractures. With this feature, the user will be able to assign the desired friction characteristics between any pair of materials, and the bodies represented by those material models will interact with each other accordingly.
## Design Detail
The multimaterial contact approach to be implemented involves mapping several properties from the material points to the nodes, where the significant information concerning the contact behaviour must be computed. For this reason, a property pool, namely `nodal_properties`, implemented as a struct and used within the `Mesh`, stores the nodal properties of the distinct materials.
```
//! Nodal property pool
std::shared_ptr<mpm::NodalProperties> nodal_properties_{nullptr};
```
In this entity, different properties are handled with a map container, where each property is associated with its own Eigen matrix; the rows and columns represent node ids and material ids, respectively.
```
std::map<std::string, Eigen::MatrixXd> properties_;
```
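As a rough illustration of how such a pool might behave, here is a minimal sketch that uses `std::vector` storage in place of the Eigen matrices; the member names `create_property` and `at`, beyond what is quoted from the design, are assumptions for illustration, not the actual implementation:

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Simplified stand-in for the nodal property pool: each named property maps
// to a dense (rows x cols) array mirroring the Eigen layout, where rows index
// node degrees of freedom and columns index materials.
struct NodalProperties {
  // Create a zero-initialised property of the given shape.
  void create_property(const std::string& name, unsigned rows, unsigned cols) {
    properties_[name] = std::vector<double>(rows * cols, 0.0);
    columns_[name] = cols;
  }
  // Read/write a single entry (row = node dof, col = material id).
  double& at(const std::string& name, unsigned row, unsigned col) {
    if (properties_.find(name) == properties_.end())
      throw std::runtime_error("Unknown property: " + name);
    return properties_[name][row * columns_[name] + col];
  }
  std::map<std::string, std::vector<double>> properties_;
  std::map<std::string, unsigned> columns_;
};
```

With this sketch, `create_property("momenta", n_nodes * Tdim, n_materials)` reserves one row per node degree of freedom, matching the layout described for the Eigen matrices below.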
Considering momentum at 2 nodes and for 3 materials as an example in a 2D environment, this is how the information would be stored in this Eigen Matrix:
```
Eigen::Matrix<double, 4, 3> data2;
data2 << p_110, p_111, p_112,
         p_210, p_211, p_212,
         p_120, p_121, p_122,
         p_220, p_221, p_222;
```
where `p_ijk` is the momentum in the `i` direction, at node `j` and material `k`.
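The row layout implied by `p_ijk` can be captured by a small index helper (a sketch; the actual code may organise this differently):

```cpp
// 0-based row index of direction i at node j for a Tdim-dimensional problem:
// all Tdim rows of node 0 come first, then those of node 1, and so on.
unsigned property_row(unsigned i, unsigned j, unsigned tdim) {
  return j * tdim + i;
}
```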
The following are the keys (properties in the plural form) stored in this map object:
- `"masses"`
- `"momenta"`
- `"changes_in_momentum"`
- `"displacements"`
- `"separation_vectors"`: a vector representing relative displacement between adjacent materials
- `"normal_vectors"`: normal vector with respect to the interface plane
- `"nodal_gradient_volumes"`: gradient of the extrapolation of particle volume to nodes, representing a previous step to calculate the normal vector
Within the Node class, a shared pointer is used to access information from the `nodal_properties` without having to pass this entity as an argument to the computing functions and steps described hereafter. This shared pointer is initialised in `Mesh` (`initialise_property_handle` function) upon creation of all the key-value pairs in the property pool map:
```
// Create the nodal properties' pool
template <unsigned Tdim>
void mpm::Mesh<Tdim>::create_nodal_properties() {
// Initialise the shared pointer to nodal properties
nodal_properties_ = std::make_shared<mpm::NodalProperties>();
// Check if nodes_ and materials_ are empty and throw a runtime error if they are
if (nodes_.size() != 0 && materials_.size() != 0) {
// Create pool data for each property in the nodal properties struct
// object. Properties must be named in the plural form
nodal_properties_->create_property("masses", nodes_.size(),
materials_.size());
nodal_properties_->create_property("momenta", nodes_.size() * Tdim,
materials_.size());
// Iterate over all nodes to initialise the property handle in each node
// and assign its node id as the prop id in the nodal property data pool
for (auto nitr = nodes_.cbegin(); nitr != nodes_.cend(); ++nitr)
(*nitr)->initialise_property_handle((*nitr)->id(), nodal_properties_);
} else {
throw std::runtime_error("Number of nodes or number of materials is zero");
}
}
```
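On the node side, the handle initialisation called in the loop above might look like the following sketch; all member names other than `initialise_property_handle` are illustrative assumptions:

```cpp
#include <cstddef>
#include <memory>

struct NodalProperties;  // the pool type, defined elsewhere in the design

// Illustrative node: it stores its own id (used as the row offset into the
// shared pool) together with a shared pointer to the pool, so later updates
// can write straight into the properties without passing the pool around.
struct Node {
  void initialise_property_handle(std::size_t prop_id,
                                  std::shared_ptr<NodalProperties> pool) {
    prop_id_ = prop_id;
    property_handle_ = pool;
  }
  std::size_t prop_id_{0};
  std::shared_ptr<NodalProperties> property_handle_{nullptr};
};
```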
The contact law is only applied when the adjacent materials are identified to be in contact. This criterion is verified with two multimaterial parameters: the normal component of the change in momentum and the normal component of the separation vector. If both are negative, then the two materials are in contact; otherwise they are separated. This condition is checked at every time step. The following steps are taken to handle multimaterial contacts:
Before entering the time step loop:
1. Create `nodal_properties` and each key-value pair representing the properties in the map `properties_`.
Within the time step loop:
1. Initialise `nodal_properties`, setting all the parameters to zero.
2. Initialise material ids set in Node Class by iterating over all nodes. This set of material ids is used in future iterations within the node to compute properties that will be stored in `nodal_properties`, thus accounting for the sparsity in the Eigen Matrices.
3. Append material ids to designated set in the Node Class by iterating over all nodes.
4. Map multimaterial mass and momentum to the `nodal_properties` by iterating over all material points.
5. Compute change in momentum by iterating over all nodes.
6. Map displacements to nodes by iterating over all material points.
7. Compute total nodal displacement by iterating over all nodes.
8. Compute separation vector by iterating over all nodes.
9. Compute the nodal gradient volume by iterating over all particles.
10. Compute normal vector by iterating over all nodes.
11. Verify contact or separation condition.
12. Apply contact law if not separated.
Steps 5 to 12 must be implemented after the total nodal velocity has been calculated, which is already implemented in the node.
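The contact/separation criterion checked in steps 11 and 12 can be sketched per node and material as follows (variable names are illustrative):

```cpp
// A material is considered in contact at a node only when both the normal
// component of its change in momentum and the normal component of its
// separation vector are negative; the contact law (step 12) is applied
// only in that case.
bool in_contact(double delta_momentum_normal, double separation_normal) {
  return delta_momentum_normal < 0.0 && separation_normal < 0.0;
}
```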
## Drawbacks
- It will be more expensive computationally.
- It will require a lot of memory to store the information for every node, especially for simulations that comprise a computational grid with many nodes.
- Stored memory might go unused for nodes in the mesh that have only one material, or none, adjacent to them.
- After implementation, future changes to the solver function in the MPMExplicit class must be made while avoiding any disruption to the aforementioned steps.
## Rationale and Alternatives
An alternative to the implemented algorithm would be to store the information directly in the nodes instead of having a property pool (`nodal_properties`). This could even encompass dynamic memory allocation to avoid unnecessary memory usage for nodes with one or no adjacent materials. However, this alternative would result in a slower algorithm, with the possibility of memory leaks if memory is not properly released.
## Prior Art
This implementation is largely based on the directions provided by [Nairn (2013)](http://www.cof.orst.edu/cof/wse/faculty/Nairn/papers/MMInterfaces.pdf). This reference also bases part of its theory on the contributions of [Bardenhagen _et al._ (2001)](http://test.techscience.com/CMES/v2n4/24749). In fact, the latter reference introduces the portion of the contact algorithm envisioned for this design, whereas the former introduced the Imperfect Interface Theory to deal with cases where adjacent materials are not in contact but still constitute what the author calls an imperfect interface. The details of the imperfect interface contribution are out of the scope of this design.
## Unresolved questions
- A proper algorithm for identifying the interfaces may have to be implemented in order to drastically reduce the amount of memory used and to update the size of `nodal_properties` to account only for the nodes where more than one material is identified as adjacent to the node.
|
non_test
|
contact algorithm contact algorithm summary implementation of a contact algorithm to handle the frictional interaction between adjacent materials at the region of contact between the two also known as the multimaterial contact or interface this implementation is based on the procedures provided by and motivation this feature allows one to develop simulations where multimaterial interface interaction is essential in the analyses such as soil structure interaction slip surfaces in slope stability analyses and fractures applying this feature the user will be able to assign the desired friction characteristics between whichever pair of materials and the bodies represented by such material model will interact with each other accordingly design detail the multimaterial contact approach to be implemented involves mapping of several properties from the material points to the nodes where the significant information concerning the contact behaviour must be computed for this reason a property pool namely nodal properties in the format of a struct and used within the mesh is implemented to store the nodal properties for distinct materials nodal property pool std shared ptr nodal properties nullptr in this entity different properties are handled with a map container where each property is associated with its own eigen matrix where the rows and columns represent node ids and material ids respectively std map properties considering momentum at nodes and for materials as an example in a environment this is how the information would be stored in this eigen matrix eigen matrix p p p p p p p p p p p p where p ijk is the momentum in the i direction at node j and material k the following are the keys properties in the plural form stored in this map object masses momenta changes in momentum displacements separation vectors a vector representing relative displacement between adjacent materials normal vectors normal vector with respect to the interface plane nodal gradient volumes 
gradient of the extrapolation of particle volume to nodes representing a previous step to calculate the normal vector within the node class a shared pointer will be used in order to access information from the nodal properties without having to pass this entity as argument for the computing functions and steps hereafter described this shared pointer is initialised in mesh initialise property handle function upon creation of all the key value pairs in the property pool map create the nodal properties pool template void mpm mesh create nodal properties initialise the shared pointer to nodal properties nodal properties std make shared check if nodes and materials is empty and throw runtime error if they are if nodes size materials size create pool data for each property in the nodal properties struct object properties must be named in the plural form nodal properties create property masses nodes size materials size nodal properties create property momenta nodes size tdim materials size iterate over all nodes to initialise the property handle in each node and assign its node id as the prop id in the nodal property data pool for auto nitr nodes cbegin nitr nodes cend nitr nitr initialise property handle nitr id nodal properties else throw std runtime error number of nodes or number of materials is zero the contact law is only applied if a separation between the adjacent materials is identified this criterion is verified by two multimaterial parameters normal component of the change in momentum and normal component of the separation vector if both normal change in momentum and normal separation are negative then the two materials are in contact otherwise they are separated this condition is verified at every time step the following indicate the steps taken to handle multimaterial contacts before entering the time step loop create nodal properties and each key value pair representing the properties in the map properties within the time step loop initialise nodal 
properties setting all the parameters to zero initialise material ids set in node class by iterating over all nodes this set of material ids is used in future iterations within the node to compute properties that will be stored in nodal properties thus accounting for the sparsity in the eigen matrices append material ids to designated set in the node class by iterating over all nodes map multimaterial mass and momentum to the nodal properties by iterating over all material points compute change in momentum by iterating over all nodes map displacements to nodes by iterating over all material points compute total nodal displacement by iterating over all nodes compute separation vector by iterating over all nodes compute the nodal gradient volume by iterating over all particles compute normal vector by iterating over all nodes verify contact or separation condition apply contact law if not separated steps to must be implemented after the total nodal velocity has been calculated which is already implemented in the node drawbacks it will be more expensive computationally it will require a lot of memory to store the information for every node especially for simulations that comprise a computational grid with many nodes memory stored might not be used in case of nodes in the mesh that have one or none materials adjacent to it after implementation future changes to the solver function in the mpmexplicit class must be made while also avoiding any disruption to the steps aforementioned rationale and alternatives another alternative to the algorithm implemented would be to directly store the information in the nodes instead of having a property pool nodal properties this can even encompass dynamic allocation of memory to avoid the unnecessary usage of memory for the nodes with one or none adjacent materials however this alternative would result in a slower algorithm with the possibility of memory leaks happening if no proper release of memory is considered prior art this 
implementation is highly based in the directions provided by this reference also bases part of its theory in the contributions by in fact the latter reference is the one that introduces the portion of the contact algorithm envisioned for this design whereas the former introduced the imperfect interface theory to deal with the cases where adjacent materials are not in contact but are still constitute what the author calls imperfect interface the details about the imperfect interface contribution is out of the scope of this design unresolved questions a proper algorithm for identifying the interfaces may have to be implemented in order to drastically reduce the amount of memory used and update the size of nodal properties to account only for the nodes where more than one material are identified to be adjacent to the node
| 0
|
162,206
| 12,627,566,606
|
IssuesEvent
|
2020-06-14 22:11:48
|
ChainSafe/gossamer
|
https://api.github.com/repos/ChainSafe/gossamer
|
closed
|
investigate finalize_block's effects on storage
|
Priority: 2 - High tests
|
when submitting a StorageChange extrinsic, the storage at key is correctly set to the value before finalize_block is called. however, after finalize_block is called, the value gets modified. looking at the runtime, it's not clear exactly why this is happening. needs investigation.
|
1.0
|
investigate finalize_block's effects on storage - when submitting a StorageChange extrinsic, the storage at key is correctly set to the value before finalize_block is called. however, after finalize_block is called, the value gets modified. looking at the runtime, it's not clear exactly why this is happening. needs investigation.
|
test
|
investigate finalize block s effects on storage when submitting a storagechange extrinsic the storage at key is correctly set to the value before finalize block is called however after finalize block is called the value gets modified looking at the runtime it s not clear exactly why this is happening needs investigation
| 1
|
295,621
| 25,489,278,170
|
IssuesEvent
|
2022-11-26 21:11:23
|
kiaconatyx/library-app
|
https://api.github.com/repos/kiaconatyx/library-app
|
closed
|
New Functionality - Delete a Book & Comic From Their Collections
|
enhancement test-driven-development
|
This new functionality should allow the user to delete a book/comic from the books/comics collection. The flow should be:
All Books and comics along with the index number in the arraylist should be printed to the console.
The user should be prompted to enter the Index Number of the book or comic they wish to delete.
The book or comic is then deleted - the user is informed of whether the delete was successful or not (eg if the user entered an index number that was not valid, this will result in the note not being deleted).
The JUnit tests associated with this new functionality should be completed as part of this issue.
|
1.0
|
New Functionality - Delete a Book & Comic From Their Collections - This new functionality should allow the user to delete a book/comic from the books/comics collection. The flow should be:
All Books and comics along with the index number in the arraylist should be printed to the console.
The user should be prompted to enter the Index Number of the book or comic they wish to delete.
The book or comic is then deleted - the user is informed of whether the delete was successful or not (eg if the user entered an index number that was not valid, this will result in the note not being deleted).
The JUnit tests associated with this new functionality should be completed as part of this issue.
|
test
|
new functionality delete a book comic from their collections this new functionality should allow the user to delete a book comic from the books comics collection the flow should be all books and comics along with the index number in the arraylist should be printed to the console the user should be prompted to enter the index number of the book or comic they wish to delete the book or comic is then deleted the user is informed of whether the delete was successful or not eg if the user entered an index number that was not valid this will result in the note not being deleted the junit tests associated with this new functionality should be completed as part of this issue
| 1
|
310,993
| 26,760,240,044
|
IssuesEvent
|
2023-01-31 06:00:29
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Godot 4 Beta 12 unable to install application to USB connected android phone but on Beta 10 it was ok
|
bug platform:android needs testing regression topic:export
|
### Godot version
v4.0.beta12.official [3c9bf4bc2]
### System information
Windows 10, Vulkan
### Issue description
Godot Beta 11 and 12 unable to install application to USB connected android phone but on Beta 10 it was ok using same pc and same project configurations.
### Steps to reproduce
Created a new application with single scene and connected my phone and allowed access to data. Using Godot beta 10 export with remote debug was done ok and works properly but using Godot beta 11 or 12 it builds done success but could not install and run on my phone.
### Minimal reproduction project
[sampleApp.zip](https://github.com/godotengine/godot/files/10433793/sampleApp.zip)
|
1.0
|
Godot 4 Beta 12 unable to install application to USB connected android phone but on Beta 10 it was ok - ### Godot version
v4.0.beta12.official [3c9bf4bc2]
### System information
Windows 10, Vulkan
### Issue description
Godot Beta 11 and 12 unable to install application to USB connected android phone but on Beta 10 it was ok using same pc and same project configurations.
### Steps to reproduce
Created a new application with single scene and connected my phone and allowed access to data. Using Godot beta 10 export with remote debug was done ok and works properly but using Godot beta 11 or 12 it builds done success but could not install and run on my phone.
### Minimal reproduction project
[sampleApp.zip](https://github.com/godotengine/godot/files/10433793/sampleApp.zip)
|
test
|
godot beta unable to install application to usb connected android phone but on beta it was ok godot version official system information windows vulkan issue description godot beta and unable to install application to usb connected android phone but on beta it was ok using same pc and same project configurations steps to reproduce created a new application with single scene and connected my phone and allowed access to data using godot beta export with remote debug was done ok and works properly but using godot beta or it builds done success but could not install and run on my phone minimal reproduction project
| 1
|
163,228
| 20,323,117,965
|
IssuesEvent
|
2022-02-18 01:45:45
|
kapseliboi/crowi
|
https://api.github.com/repos/kapseliboi/crowi
|
opened
|
CVE-2018-20190 (Medium) detected in node-sass-4.14.1.tgz
|
security vulnerability
|
## CVE-2018-20190 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file.
<p>Publish Date: 2018-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190>CVE-2018-20190</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-12-17</p>
<p>Fix Resolution: 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-20190 (Medium) detected in node-sass-4.14.1.tgz - ## CVE-2018-20190 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file.
<p>Publish Date: 2018-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190>CVE-2018-20190</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p>
<p>Release Date: 2018-12-17</p>
<p>Fix Resolution: 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file package json path to vulnerable library node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in base branch master vulnerability details in libsass a null pointer dereference in the function sass eval operator sass supports operator in eval cpp may cause a denial of service application crash via a crafted sass input file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|