Dataset schema (one row per GitHub `IssuesEvent`, labeled test vs. non_test):

| Column | Dtype | Range / length / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 4 to 112 |
| repo_url | string | length 33 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 1.02k |
| labels | string | length 4 to 1.54k |
| body | string | length 1 to 262k |
| index | string | 17 classes |
| text_combine | string | length 95 to 262k |
| label | string | 2 classes (test, non_test) |
| text | string | length 96 to 252k |
| binary_label | int64 | 0 or 1 |
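The columns above describe a binary-labeled GitHub issues dataset: `label` / `binary_label` mark whether an issue is testing-related. As a minimal sketch of how such records might be handled, assuming pandas and two toy rows inlined by hand (the real data's storage location is not given here):

```python
import pandas as pd

# Two toy records mimicking the schema above; not the real data.
df = pd.DataFrame(
    {
        "repo": ["angular/angular", "graasp/graasp-app-excalidraw"],
        "action": ["closed", "closed"],
        "label": ["test", "non_test"],
        "binary_label": [1, 0],
    }
)

# binary_label mirrors the string label column:
# 1 for testing-related issues ("test"), 0 for "non_test".
derived = (df["label"] == "test").astype(int)
assert (derived == df["binary_label"]).all()

# Class balance, e.g. a sanity check before training a classifier.
print("test issues:", int(df["binary_label"].sum()))
```

The same consistency check scales to the full dataset, where the ratio of 1s to 0s determines whether class weighting is needed.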
26,971
| 20,969,709,335
|
IssuesEvent
|
2022-03-28 10:10:54
|
florianwulfert/LogManager
|
https://api.github.com/repos/florianwulfert/LogManager
|
closed
|
Infrastructure: Add swagger to project
|
infrastructure Prio: 2
|
In the future there should be Swagger documentation, so that the interface description is produced via Swagger instead of being written up manually.
This issue covers the initialization of Swagger, so that Swagger can be used in the project.
|
1.0
|
Infrastructure: Add swagger to project - In the future there should be Swagger documentation, so that the interface description is produced via Swagger instead of being written up manually.
This issue covers the initialization of Swagger, so that Swagger can be used in the project.
|
non_test
|
infrastructure add swagger to project in the future there should be swagger documentation so that the interface description is produced via swagger instead of being written up manually this issue covers the initialization of swagger so that swagger can be used in the project
| 0
|
428,570
| 29,999,020,054
|
IssuesEvent
|
2023-06-26 08:02:58
|
sayasurvey/queque_board_api
|
https://api.github.com/repos/sayasurvey/queque_board_api
|
opened
|
藤谷 task management
|
documentation
|
board api
- [ ] Implement validation for the boardImage column
- [ ] Create the S3 bucket
- [ ] Implement saving files to S3
- [ ] Implement test code for fetching the board list
- [ ] Implement test code for board registration
- [ ] Implement test code for fetching board details
- [ ] Implement test code for board editing
- [ ] Implement test code for board deletion
user api
- [ ] Fix exception handling
- [ ] Create the Cognito resources
Common
- [ ] Implement common error handling
|
1.0
|
藤谷 task management - board api
- [ ] Implement validation for the boardImage column
- [ ] Create the S3 bucket
- [ ] Implement saving files to S3
- [ ] Implement test code for fetching the board list
- [ ] Implement test code for board registration
- [ ] Implement test code for fetching board details
- [ ] Implement test code for board editing
- [ ] Implement test code for board deletion
user api
- [ ] Fix exception handling
- [ ] Create the Cognito resources
Common
- [ ] Implement common error handling
|
non_test
|
藤谷 task management board api implement validation for the boardimage column implement test code for fetching the board list implement test code for board registration implement test code for fetching board details implement test code for board editing implement test code for board deletion user api fix exception handling create the cognito resources common implement common error handling
| 0
|
69,724
| 7,158,030,515
|
IssuesEvent
|
2018-01-26 22:20:08
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
[TESTING] XHR error while writing unit tests for Components
|
comp: testing
|
I have the following component:
``` typescript
///<reference path="../../node_modules/angular2/typings/browser.d.ts"/>
import { Component, OnInit } from 'angular2/core';
import { ROUTER_DIRECTIVES } from 'angular2/router';
import { Employee } from '../models/employee';
import { EmployeeListServiceComponent } from '../services/employee-list-service.component';
@Component({
selector: 'employee-list',
templateUrl: 'src/pages/employee-list.component.html',
directives: [ROUTER_DIRECTIVES],
providers: [EmployeeListServiceComponent]
})
export class EmployeeListComponent implements OnInit {
public employees: Employee[];
public errorMessage: string;
constructor(
private _listingService: EmployeeListServiceComponent
){}
ngOnInit() {
this._listingService.getEmployees().subscribe(
employees => this.employees = employees,
error => this.errorMessage = <any>error
);
}
}
```
Now I want to test it. So I did the following:
``` typescript
/// <reference path="../../typings/main/ambient/jasmine/jasmine.d.ts" />
import {
it,
describe,
expect,
TestComponentBuilder,
injectAsync,
setBaseTestProviders,
beforeEachProviders
} from "angular2/testing";
import { provide } from "angular2/core";
import {
TEST_BROWSER_PLATFORM_PROVIDERS,
TEST_BROWSER_APPLICATION_PROVIDERS
} from "angular2/platform/testing/browser";
import {
HTTP_PROVIDERS,
XHRBackend,
ResponseOptions,
Response
} from "angular2/http";
import {
MockBackend,
MockConnection
} from "angular2/src/http/backends/mock_backend";
import { ROUTER_PROVIDERS } from 'angular2/router';
import 'rxjs/Rx';
import { EmployeeListComponent } from './list.component';
import { EmployeeListServiceComponent } from '../services/employee-list-service.component';
describe('MyList Tests', () => {
setBaseTestProviders(TEST_BROWSER_PLATFORM_PROVIDERS, TEST_BROWSER_APPLICATION_PROVIDERS);
beforeEachProviders(() => {
return [
HTTP_PROVIDERS,
provide(XHRBackend, {useClass: MockBackend}),
EmployeeListServiceComponent
]
});
it('Should create a component MyList',
injectAsync([XHRBackend, EmployeeListServiceComponent, TestComponentBuilder], (backend, service, tcb) => {
backend.connections.subscribe(
(connection:MockConnection) => {
var options = new ResponseOptions({
body: [
{
"name": "Abhinav Mishra",
"id": 1
},
{
"name": "Abhinav Mishra",
"id": 2
}
]
});
var response = new Response(options);
connection.mockRespond(response);
}
);
return tcb
.overrideProviders(EmployeeListComponent,
[
ROUTER_PROVIDERS,
EmployeeListServiceComponent
]
)
.createAsync(EmployeeListComponent)
.then((fixture) => {
fixture.detectChanges();
expect(true).toBe(true);
});
})
);
});
```
However I keep getting the routing error:
```
Chrome 49.0.2623 (Linux 0.0.0) ERROR
Error: XHR error (404 Not Found) loading http://localhost:9876/angular2/router
at error (/home/abhi/Desktop/angular2-testing/node_modules/systemjs/dist/system.src.js:1026:16)
at XMLHttpRequest.xhr.onreadystatechange (/home/abhi/Desktop/angular2-testing/node_modules/systemjs/dist/system.src.js:1047:13)
at XMLHttpRequest.wrapFn [as _onreadystatechange] (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:771:30)
at ZoneDelegate.invokeTask (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:365:38)
at Zone.runTask (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:263:48)
at XMLHttpRequest.ZoneTask.invoke (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:431:34)
```
Where have I messed up?
|
1.0
|
[TESTING] XHR error while writing unit tests for Components - I have the following component:
``` typescript
///<reference path="../../node_modules/angular2/typings/browser.d.ts"/>
import { Component, OnInit } from 'angular2/core';
import { ROUTER_DIRECTIVES } from 'angular2/router';
import { Employee } from '../models/employee';
import { EmployeeListServiceComponent } from '../services/employee-list-service.component';
@Component({
selector: 'employee-list',
templateUrl: 'src/pages/employee-list.component.html',
directives: [ROUTER_DIRECTIVES],
providers: [EmployeeListServiceComponent]
})
export class EmployeeListComponent implements OnInit {
public employees: Employee[];
public errorMessage: string;
constructor(
private _listingService: EmployeeListServiceComponent
){}
ngOnInit() {
this._listingService.getEmployees().subscribe(
employees => this.employees = employees,
error => this.errorMessage = <any>error
);
}
}
```
Now I want to test it. So I did the following:
``` typescript
/// <reference path="../../typings/main/ambient/jasmine/jasmine.d.ts" />
import {
it,
describe,
expect,
TestComponentBuilder,
injectAsync,
setBaseTestProviders,
beforeEachProviders
} from "angular2/testing";
import { provide } from "angular2/core";
import {
TEST_BROWSER_PLATFORM_PROVIDERS,
TEST_BROWSER_APPLICATION_PROVIDERS
} from "angular2/platform/testing/browser";
import {
HTTP_PROVIDERS,
XHRBackend,
ResponseOptions,
Response
} from "angular2/http";
import {
MockBackend,
MockConnection
} from "angular2/src/http/backends/mock_backend";
import { ROUTER_PROVIDERS } from 'angular2/router';
import 'rxjs/Rx';
import { EmployeeListComponent } from './list.component';
import { EmployeeListServiceComponent } from '../services/employee-list-service.component';
describe('MyList Tests', () => {
setBaseTestProviders(TEST_BROWSER_PLATFORM_PROVIDERS, TEST_BROWSER_APPLICATION_PROVIDERS);
beforeEachProviders(() => {
return [
HTTP_PROVIDERS,
provide(XHRBackend, {useClass: MockBackend}),
EmployeeListServiceComponent
]
});
it('Should create a component MyList',
injectAsync([XHRBackend, EmployeeListServiceComponent, TestComponentBuilder], (backend, service, tcb) => {
backend.connections.subscribe(
(connection:MockConnection) => {
var options = new ResponseOptions({
body: [
{
"name": "Abhinav Mishra",
"id": 1
},
{
"name": "Abhinav Mishra",
"id": 2
}
]
});
var response = new Response(options);
connection.mockRespond(response);
}
);
return tcb
.overrideProviders(EmployeeListComponent,
[
ROUTER_PROVIDERS,
EmployeeListServiceComponent
]
)
.createAsync(EmployeeListComponent)
.then((fixture) => {
fixture.detectChanges();
expect(true).toBe(true);
});
})
);
});
```
However I keep getting the routing error:
```
Chrome 49.0.2623 (Linux 0.0.0) ERROR
Error: XHR error (404 Not Found) loading http://localhost:9876/angular2/router
at error (/home/abhi/Desktop/angular2-testing/node_modules/systemjs/dist/system.src.js:1026:16)
at XMLHttpRequest.xhr.onreadystatechange (/home/abhi/Desktop/angular2-testing/node_modules/systemjs/dist/system.src.js:1047:13)
at XMLHttpRequest.wrapFn [as _onreadystatechange] (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:771:30)
at ZoneDelegate.invokeTask (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:365:38)
at Zone.runTask (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:263:48)
at XMLHttpRequest.ZoneTask.invoke (/home/abhi/Desktop/angular2-testing/node_modules/angular2/bundles/angular2-polyfills.js:431:34)
```
Where have I messed up?
|
test
|
xhr error while writing unit tests for components i have following component typescript import component oninit from core import router directives from router import employee from models employee import employeelistservicecomponent from services employee list service component component selector employee list templateurl src pages employee list component html directives providers export class employeelistcomponent implements oninit public employees employee public errormessage string constructor private listingservice employeelistservicecomponent ngoninit this listingservice getemployees subscribe employees this employees employees error this errormessage error now i want to test it so i did the following typescript import it describe expect testcomponentbuilder injectasync setbasetestproviders beforeeachproviders from testing import provide from core import test browser platform providers test browser application providers from platform testing browser import http providers xhrbackend responseoptions response from http import mockbackend mockconnection from src http backends mock backend import router providers from router import rxjs rx import employeelistcomponent from list component import employeelistservicecomponent from services employee list service component describe mylist tests setbasetestproviders test browser platform providers test browser application providers beforeeachproviders return http providers provide xhrbackend useclass mockbackend employeelistservicecomponent it should create a component mylist injectasync backend service tcb backend connections subscribe connection mockconnection var options new responseoptions body name abhinav mishra id name abhinav mishra id var response new response options connection mockrespond response return tcb overrideproviders employeelistcomponent router providers employeelistservicecomponent createasync employeelistcomponent then fixture fixture detectchanges expect true tobe true however i keep getting the 
routing error chrome linux error error xhr error not found loading at error home abhi desktop testing node modules systemjs dist system src js at xmlhttprequest xhr onreadystatechange home abhi desktop testing node modules systemjs dist system src js at xmlhttprequest wrapfn home abhi desktop testing node modules bundles polyfills js at zonedelegate invoketask home abhi desktop testing node modules bundles polyfills js at zone runtask home abhi desktop testing node modules bundles polyfills js at xmlhttprequest zonetask invoke home abhi desktop testing node modules bundles polyfills js where have i messed up
| 1
|
240,132
| 20,013,536,781
|
IssuesEvent
|
2022-02-01 09:41:41
|
wazuh/wazuh-qa
|
https://api.github.com/repos/wazuh/wazuh-qa
|
opened
|
Vulnerability detector test module refactor: test_alert_vulnerability_removal
|
team/qa type/rework test/integration feature/vuln-detector subteam/qa-thunder
|
It is asked to refactor the test module named `test_alert_vulnerability_removal.py`.
It is disabled for now, as it was failing or unstable, causing false positives.
## Tasks
- [ ] Make a study of the objectives of the test, and what is being tested.
- [ ] Refactor the test. Clean and modularizable code.
- [ ] Check that the test always starts from the same state and restores it completely at the end of the test (independent of the tests previously executed in that environment).
- [ ] Review test documentation, and modify if necessary.
- [ ] Proven that tests **pass** when they have to pass.
- [ ] Proven that tests **fail** when they have to fail.
- [ ] Test in 5-10 rounds of execution that the test always shows the same result.
- [ ] Run all Vulnerability detector integration tests and check the "full green".
## Checks
- [ ] The code complies with the standard PEP-8 format.
- [ ] Python codebase is documented following the Google Style for Python docstrings.
- [ ] The test is processed by `qa-docs` tool without errors.
|
1.0
|
Vulnerability detector test module refactor: test_alert_vulnerability_removal - It is asked to refactor the test module named `test_alert_vulnerability_removal.py`.
It is disabled for now, as it was failing or unstable, causing false positives.
## Tasks
- [ ] Make a study of the objectives of the test, and what is being tested.
- [ ] Refactor the test. Clean and modularizable code.
- [ ] Check that the test always starts from the same state and restores it completely at the end of the test (independent of the tests previously executed in that environment).
- [ ] Review test documentation, and modify if necessary.
- [ ] Proven that tests **pass** when they have to pass.
- [ ] Proven that tests **fail** when they have to fail.
- [ ] Test in 5-10 rounds of execution that the test always shows the same result.
- [ ] Run all Vulnerability detector integration tests and check the "full green".
## Checks
- [ ] The code complies with the standard PEP-8 format.
- [ ] Python codebase is documented following the Google Style for Python docstrings.
- [ ] The test is processed by `qa-docs` tool without errors.
|
test
|
vulnerability detector test module refactor test alert vulnerability removal it is asked to refactor the test module named test alert vulnerability removal py it is disabled for now as it was failing or unstable causing false positives tasks make a study of the objectives of the test and what is being tested refactor the test clean and modularizable code check that the test always starts from the same state and restores it completely at the end of the test independent of the tests previously executed in that environment review test documentation and modify if necessary proven that tests pass when they have to pass proven that tests fail when they have to fail test in rounds of execution that the test always shows the same result run all vulnerability detector integration tests and check the full green checks the code complies with the standard pep format python codebase is documented following the google style for python docstrings the test is processed by qa docs tool without errors
| 1
|
382,677
| 26,510,038,069
|
IssuesEvent
|
2023-01-18 16:25:44
|
graasp/graasp-app-excalidraw
|
https://api.github.com/repos/graasp/graasp-app-excalidraw
|
closed
|
Update README to describe the app
|
documentation
|
- [ ] description of the app
- [ ] screenshot
- [ ] explanations of a few features that are working
- [ ] list of some features that are planned
_Originally posted by @swouf in https://github.com/graasp/graasp-app-excalidraw/pull/3#discussion_r1045592131_
|
1.0
|
Update README to describe the app - - [ ] description of the app
- [ ] screenshot
- [ ] explanations of a few features that are working
- [ ] list of some features that are planned
_Originally posted by @swouf in https://github.com/graasp/graasp-app-excalidraw/pull/3#discussion_r1045592131_
|
non_test
|
update readme to describe the app description of the app screenshort explanations of a few features that are working list of some features that are planned originally posted by swouf in
| 0
|
219,344
| 16,827,747,174
|
IssuesEvent
|
2021-06-17 21:08:05
|
vmware-tanzu/velero
|
https://api.github.com/repos/vmware-tanzu/velero
|
opened
|
velero.io "Latest Release Information" points to the 1.5 blog post
|
Area/Documentation
|
Please re-route the link to the 1.6 blog post.
|
1.0
|
velero.io "Latest Release Information" points to the 1.5 blog post - Please re-route the link to the 1.6 blog post.
|
non_test
|
velero io latest release information points to the blog post please re route the link to the blog post
| 0
|
276,435
| 8,598,460,927
|
IssuesEvent
|
2018-11-15 21:51:57
|
mlr-org/mlr3featsel
|
https://api.github.com/repos/mlr-org/mlr3featsel
|
opened
|
Structure of this pkg
|
Priority: Critical Status: In Progress Type: Enhancement
|
Should/could follow _mlr3tuning_ [structure](https://github.com/mlr-org/mlr3tuning/blob/master/R/Tuner.R).
- Two base classes (`Filter` and `Featsel`) or one class with respective functions?
Arguments:
- `id`
- `settings` (at least for `Filter`)
- `result` -> returns the full `Filter Values` (needed for eventual caching) and the subsetted task in case of `Featsel`
- [...] more?
For filters a subclass per pkg that inherits from the main class?
How should we do the structuring for featsel methods?
@ja-thomas
|
1.0
|
Structure of this pkg - Should/could follow _mlr3tuning_ [structure](https://github.com/mlr-org/mlr3tuning/blob/master/R/Tuner.R).
- Two base classes (`Filter` and `Featsel`) or one class with respective functions?
Arguments:
- `id`
- `settings` (at least for `Filter`)
- `result` -> returns the full `Filter Values` (needed for eventual caching) and the subsetted task in case of `Featsel`
- [...] more?
For filters a subclass per pkg that inherits from the main class?
How should we do the structuring for featsel methods?
@ja-thomas
|
non_test
|
structure of this pkg should could follow two base classes filter and featsel or one class with respective functions arguments id settings at least for filter result returns the full filter values needed for eventual caching and the subsetted task in case of featsel more for filters a subclass per pkg that inherits from the main class how should we do the structuring for featsel methods ja thomas
| 0
|
568,251
| 16,962,845,727
|
IssuesEvent
|
2021-06-29 07:18:22
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.pornhub.com - video or audio doesn't play
|
browser-firefox-ios device-tablet os-ios priority-critical
|
<!-- @browser: Safari 13.1 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/78224 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5f3ada5b92d4b
**Browser / Version**: Safari 13.1
**Operating System**: Mac OS X 10.15.4
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Site doesn’t open on firefox. It keeps on hanging
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.pornhub.com - video or audio doesn't play - <!-- @browser: Safari 13.1 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/78224 -->
<!-- @extra_labels: browser-firefox-ios -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5f3ada5b92d4b
**Browser / Version**: Safari 13.1
**Operating System**: Mac OS X 10.15.4
**Tested Another Browser**: Yes Other
**Problem type**: Video or audio doesn't play
**Description**: The video or audio does not play
**Steps to Reproduce**:
Site doesn’t open on firefox. It keeps on hanging
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
video or audio doesn t play url browser version safari operating system mac os x tested another browser yes other problem type video or audio doesn t play description the video or audio does not play steps to reproduce site doesn’t open on firefox it keeps on hanging browser configuration none from with ❤️
| 0
|
42,391
| 5,435,586,639
|
IssuesEvent
|
2017-03-05 18:14:07
|
Octanis1/Octanis1-Mainboard-Firmware_MSP_EXP432P401RLP
|
https://api.github.com/repos/Octanis1/Octanis1-Mainboard-Firmware_MSP_EXP432P401RLP
|
closed
|
Rockblock module driver
|
help wanted testing
|
A driver for Satellite comms via Rockblock module is required. It encapsulates the AT commands into driver functions to send a message buffer, check for new messages and download if available.
http://www.rock7mobile.com/products-rockblock
AT command reference:
https://www.rock7.com/downloads/IRDM_ISU_ATCommandReferenceMAN0009_Rev2.0_ATCOMM_Oct2012.pdf
Existing Arduino Library: http://arduiniana.org/libraries/iridiumsbd/
|
1.0
|
Rockblock module driver - A driver for Satellite comms via Rockblock module is required. It encapsulates the AT commands into driver functions to send a message buffer, check for new messages and download if available.
http://www.rock7mobile.com/products-rockblock
AT command reference:
https://www.rock7.com/downloads/IRDM_ISU_ATCommandReferenceMAN0009_Rev2.0_ATCOMM_Oct2012.pdf
Existing Arduino Library: http://arduiniana.org/libraries/iridiumsbd/
|
test
|
rockblock module driver a driver for satellite comms via rockblock module is required it encapsulates the at commands into driver functions to send a message buffer check for new messages and download if available at command reference existing arduino library
| 1
|
205,908
| 15,698,934,576
|
IssuesEvent
|
2021-03-26 07:42:40
|
AdoptOpenJDK/openjdk-infrastructure
|
https://api.github.com/repos/AdoptOpenJDK/openjdk-infrastructure
|
closed
|
VPC: Debian10 VM disk too small to build a JDK
|
pbTests
|
Ref: https://ci.adoptopenjdk.net/job/VagrantPlaybookCheck/OS=Debian10,label=vagrant/1104/console
https://ci.adoptopenjdk.net/job/VagrantPlaybookCheck/OS=Debian10,label=vagrant/1103/console
These are both building JDK8 with 'fastmode' on, which installs even less for building the JDK.
By default the Vagrant VM has only 20GB on it, upon boot.
|
1.0
|
VPC: Debian10 VM disk too small to build a JDK - Ref: https://ci.adoptopenjdk.net/job/VagrantPlaybookCheck/OS=Debian10,label=vagrant/1104/console
https://ci.adoptopenjdk.net/job/VagrantPlaybookCheck/OS=Debian10,label=vagrant/1103/console
These are both building JDK8 with 'fastmode' on, which installs even less for building the JDK.
By default the Vagrant VM has only 20GB on it, upon boot.
|
test
|
vpc vm disk too small to build a jdk ref these are both building with fastmode on which installs even less for building the jdk by default the vagrant vm has only on it upon boot
| 1
|
39,626
| 20,113,694,336
|
IssuesEvent
|
2022-02-07 17:17:22
|
facebook/rocksdb
|
https://api.github.com/repos/facebook/rocksdb
|
closed
|
fillseq throughput is 13% slower from PR 6862
|
regression performance
|
fillseq throughput is 13% slower after [PR 6862](https://github.com/facebook/rocksdb/pull/6862) with [git hash e3f953a](https://github.com/facebook/rocksdb/commit/e3f953a). The problem is new CPU overhead (user, not system). The diff landed in v6.11.
The test server is a spare UDB host (many core, fast SSD) and fillseq throughput drops from ~800k/s to ~700k/s with this diff. One example of throughput by version (6.0 to 6.22) [is here](https://gist.github.com/mdcallag/18a03fb56ea5ff640cc7e368b9031bc9). There is also a regression in 6.14 for which I am still searching.
The problem is new CPU overhead that shows up as more user CPU time as measured via /bin/time db_bench ...
The test takes ~1000 seconds and prior to this diff uses ~3000 seconds of user CPU time vs ~3500 seconds of user CPU time at this diff.
I am not sure whether this depends more on the number of files or concurrency because it doesn't show up as a problem on IO-bound or CPU-bound configs on a small server, nor does it show up on a CPU-bound config on this server. The repro here is what I call an IO-bound config and has ~20X more data (and files) than the CPU-bound config.
I don't see an increase in the context switch rate, so mutex contention does not appear to be a problem.
The command line is:
`
/usr/bin/time -f '%e %U %S' -o bm.lc.nt16.cm1.d0/1550.e3f953a/benchmark_fillseq.wal_disabled.v400.log.time numactl --interleave=all ./db_bench --benchmarks=fillseq --allow_concurrent_memtable_write=false --level0_file_num_compaction_trigger=4 --level0_slowdown_writes_trigger=20 --level0_stop_writes_trigger=30 --max_background_jobs=8 --max_write_buffer_number=8 --db=/data/m/rx --wal_dir=/data/m/rx --num=800000000 --num_levels=8 --key_size=20 --value_size=400 --block_size=8192 --cache_size=51539607552 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=lz4 --bytes_per_sync=8388608 --cache_index_and_filter_blocks=1 --cache_high_pri_pool_ratio=0.5 --benchmark_write_rate_limit=0 --write_buffer_size=16777216 --target_file_size_base=16777216 --max_bytes_for_level_base=67108864 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=20 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --subcompactions=1 --compaction_style=0 --min_level_to_compress=3 --level_compaction_dynamic_level_bytes=true --pin_l0_filter_and_index_blocks_in_cache=1 --soft_pending_compaction_bytes_limit=167503724544 --hard_pending_compaction_bytes_limit=335007449088 --min_level_to_compress=0 --use_existing_db=0 --sync=0 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --seed=1641213884
`
|
True
|
fillseq throughput is 13% slower from PR 6862 - fillseq throughput is 13% slower after [PR 6862](https://github.com/facebook/rocksdb/pull/6862) with [git hash e3f953a](https://github.com/facebook/rocksdb/commit/e3f953a). The problem is new CPU overhead (user, not system). The diff landed in v6.11.
The test server is a spare UDB host (many core, fast SSD) and fillseq throughput drops from ~800k/s to ~700k/s with this diff. One example of throughput by version (6.0 to 6.22) [is here](https://gist.github.com/mdcallag/18a03fb56ea5ff640cc7e368b9031bc9). There is also a regression in 6.14 for which I am still searching.
The problem is new CPU overhead that shows up as more user CPU time as measured via /bin/time db_bench ...
The test takes ~1000 seconds and prior to this diff uses ~3000 seconds of user CPU time vs ~3500 seconds of user CPU time at this diff.
I am not sure whether this depends more on the number of files or concurrency because it doesn't show up as a problem on IO-bound or CPU-bound configs on a small server, nor does it show up on a CPU-bound config on this server. The repro here is what I call an IO-bound config and has ~20X more data (and files) than the CPU-bound config.
I don't see an increase in the context switch rate, so mutex contention does not appear to be a problem.
The command line is:
`
/usr/bin/time -f '%e %U %S' -o bm.lc.nt16.cm1.d0/1550.e3f953a/benchmark_fillseq.wal_disabled.v400.log.time numactl --interleave=all ./db_bench --benchmarks=fillseq --allow_concurrent_memtable_write=false --level0_file_num_compaction_trigger=4 --level0_slowdown_writes_trigger=20 --level0_stop_writes_trigger=30 --max_background_jobs=8 --max_write_buffer_number=8 --db=/data/m/rx --wal_dir=/data/m/rx --num=800000000 --num_levels=8 --key_size=20 --value_size=400 --block_size=8192 --cache_size=51539607552 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=lz4 --bytes_per_sync=8388608 --cache_index_and_filter_blocks=1 --cache_high_pri_pool_ratio=0.5 --benchmark_write_rate_limit=0 --write_buffer_size=16777216 --target_file_size_base=16777216 --max_bytes_for_level_base=67108864 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=20 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --subcompactions=1 --compaction_style=0 --min_level_to_compress=3 --level_compaction_dynamic_level_bytes=true --pin_l0_filter_and_index_blocks_in_cache=1 --soft_pending_compaction_bytes_limit=167503724544 --hard_pending_compaction_bytes_limit=335007449088 --min_level_to_compress=0 --use_existing_db=0 --sync=0 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --seed=1641213884
`
|
non_test
|
fillseq throughput is slower from pr fillseq throughput is slower after with the problem is new cpu overhead user not system the diff landed in the test server is a spare udb host many core fast ssd and fillseq throughput drops from s to s with this diff one example of throughput by version to there is also a regression in for which i am still searching the problem is new cpu overhead that shows up as more user cpu time as measured via bin time db bench the test takes seconds and prior to this diff uses seconds of user cpu time vs seconds of user cpu time at this diff i am not sure whether this depends more on the number of files or concurrency because it doesn t show up as a problem on io bound or cpu bound configs on a small server nor does it show up on a cpu bound config on this server the repro here is what i call an io bound config and has more data and files than the cpu bound config i don t see an increase in the context switch rate so mutex contention does not appear to be a problem the command line is usr bin time f e u s o bm lc benchmark fillseq wal disabled log time numactl interleave all db bench benchmarks fillseq allow concurrent memtable write false file num compaction trigger slowdown writes trigger stop writes trigger max background jobs max write buffer number db data m rx wal dir data m rx num num levels key size value size block size cache size cache numshardbits compression max dict bytes compression ratio compression type bytes per sync cache index and filter blocks cache high pri pool ratio benchmark write rate limit write buffer size target file size base max bytes for level base verify checksum delete obsolete files period micros max bytes for level multiplier statistics stats per interval stats interval seconds histogram memtablerep skip list bloom bits open files subcompactions compaction style min level to compress level compaction dynamic level bytes true pin filter and index blocks in cache soft pending compaction bytes limit hard 
pending compaction bytes limit min level to compress use existing db sync threads memtablerep vector allow concurrent memtable write false disable wal seed
| 0
|
210,791
| 16,116,312,956
|
IssuesEvent
|
2021-04-28 07:52:19
|
WoWManiaUK/Redemption
|
https://api.github.com/repos/WoWManiaUK/Redemption
|
closed
|
[Instances/TOC] Fixed Earth Shield spell of Faction Champions
|
Fixed on PTR - Tester Confirmed
|
**What is Happening:**
Spell was not scripted, so it dont give any healing.
And causes runtime error: AuraEffect::HandleProcTriggerSpellAuraProc: Could not trigger spell 0 from aura 66063 proc, because the spell does not have an entry in Spell.dbc.
**What Should happen:**
Spell should give heal when Faction Champion get damage ✌️
And not causes any runtime error.
|
1.0
|
[Instances/TOC] Fixed Earth Shield spell of Faction Champions - **What is Happening:**
Spell was not scripted, so it dont give any healing.
And causes runtime error: AuraEffect::HandleProcTriggerSpellAuraProc: Could not trigger spell 0 from aura 66063 proc, because the spell does not have an entry in Spell.dbc.
**What Should happen:**
Spell should give heal when Faction Champion get damage ✌️
And not causes any runtime error.
|
test
|
fixed earth shield spell of faction champions what is happening spell was not scripted so it dont give any healing and causes runtime error auraeffect handleproctriggerspellauraproc could not trigger spell from aura proc because the spell does not have an entry in spell dbc what should happen spell should give heal when faction champion get damage ✌️ and not causes any runtime error
| 1
|
116,914
| 9,888,126,835
|
IssuesEvent
|
2019-06-25 10:46:49
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Firefox UI Functional Tests.test/functional/apps/dashboard/panel_controls·js - dashboard app using legacy data dashboard panel controls "after all" hook
|
failed-test
|
A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.2/JOB=kibana-ciGroup4,node=immutable/7/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox UI Functional Tests.test/functional/apps/dashboard/panel_controls·js","test.name":"dashboard app using legacy data dashboard panel controls \"after all\" hook","test.failCount":1}} -->
|
1.0
|
Failing test: Firefox UI Functional Tests.test/functional/apps/dashboard/panel_controls·js - dashboard app using legacy data dashboard panel controls "after all" hook - A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.2/JOB=kibana-ciGroup4,node=immutable/7/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox UI Functional Tests.test/functional/apps/dashboard/panel_controls·js","test.name":"dashboard app using legacy data dashboard panel controls \"after all\" hook","test.failCount":1}} -->
|
test
|
failing test firefox ui functional tests test functional apps dashboard panel controls·js dashboard app using legacy data dashboard panel controls after all hook a test failed on a tracked branch nosuchsessionerror this driver instance does not have a valid session id did you call webdriver quit and may no longer be used at promise finally node modules selenium webdriver lib webdriver js at object thenfinally node modules selenium webdriver lib promise js at process tickcallback internal process next tick js name nosuchsessionerror remotestacktrace first failure
| 1
|
284,190
| 24,582,458,478
|
IssuesEvent
|
2022-10-13 16:41:09
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[Jepsen][YCQL] Consistency check failure for improved bank workload
|
kind/bug kind/failing-test area/docdb priority/medium area/ycql
|
Jira Link: [DB-923](https://yugabyte.atlassian.net/browse/DB-923)
I've extended coverage for YCQL bank workload, and now it also trigger inserts with same invariant.
Pull request with improved bank workload https://github.com/yugabyte/jepsen/pull/55
Here is source code for inserts. On each dice == "insert" there is automatically incremented counter, so there is no concurrent runs here and each insert statement gets unique value, even when operation is failed.
May occur seven w/o any nemesis running
```Clojure
(let [{:keys [from to amount]} (:value op)
dice (rand-nth ["insert" "update"])]
(cond
(= dice "insert")
(let [insert-key (swap! counter-end inc)]
(do
(cassandra/execute
conn
(str "BEGIN TRANSACTION "
"INSERT INTO " keyspace "." table-name
" (id, balance) values (" insert-key "," amount ");"
"UPDATE " keyspace "." table-name
" SET balance = balance - " amount " WHERE id = " from ";"
"END TRANSACTION;"))
(assoc op :type :ok :value {:from from, :to insert-key, :amount amount})))
(= dice "update")
(do
(cassandra/execute
...
```

Jepsen validation log cut:
```
{:SI
{:valid? false,
:read-count 12015,
:error-count 5,
:first-error
{:type :wrong-total,
:total 101,
:op
{:type :ok,
:f :read,
:process 266,
:time 571865348465,
:value
{0 -116,
1 -333,
2 -334,
```
|
1.0
|
[Jepsen][YCQL] Consistency check failure for improved bank workload - Jira Link: [DB-923](https://yugabyte.atlassian.net/browse/DB-923)
I've extended coverage for YCQL bank workload, and now it also trigger inserts with same invariant.
Pull request with improved bank workload https://github.com/yugabyte/jepsen/pull/55
Here is source code for inserts. On each dice == "insert" there is automatically incremented counter, so there is no concurrent runs here and each insert statement gets unique value, even when operation is failed.
May occur seven w/o any nemesis running
```Clojure
(let [{:keys [from to amount]} (:value op)
dice (rand-nth ["insert" "update"])]
(cond
(= dice "insert")
(let [insert-key (swap! counter-end inc)]
(do
(cassandra/execute
conn
(str "BEGIN TRANSACTION "
"INSERT INTO " keyspace "." table-name
" (id, balance) values (" insert-key "," amount ");"
"UPDATE " keyspace "." table-name
" SET balance = balance - " amount " WHERE id = " from ";"
"END TRANSACTION;"))
(assoc op :type :ok :value {:from from, :to insert-key, :amount amount})))
(= dice "update")
(do
(cassandra/execute
...
```

Jepsen validation log cut:
```
{:SI
{:valid? false,
:read-count 12015,
:error-count 5,
:first-error
{:type :wrong-total,
:total 101,
:op
{:type :ok,
:f :read,
:process 266,
:time 571865348465,
:value
{0 -116,
1 -333,
2 -334,
```
|
test
|
consistency check failure for improved bank workload jira link i ve extended coverage for ycql bank workload and now it also trigger inserts with same invariant pull request with improved bank workload here is source code for inserts on each dice insert there is automatically incremented counter so there is no concurrent runs here and each insert statement gets unique value even when operation is failed may occur seven w o any nemesis running clojure let value op dice rand nth cond dice insert let do cassandra execute conn str begin transaction insert into keyspace table name id balance values insert key amount update keyspace table name set balance balance amount where id from end transaction assoc op type ok value from from to insert key amount amount dice update do cassandra execute jepsen validation log cut si valid false read count error count first error type wrong total total op type ok f read process time value
| 1
|
226,394
| 17,349,706,681
|
IssuesEvent
|
2021-07-29 07:06:01
|
InfyOmLabs/laravel-generator
|
https://api.github.com/repos/InfyOmLabs/laravel-generator
|
closed
|
can we override generators like we can override templates?
|
documentation
|
I find it very useful that you allow us to publish the templates and then override the ones we want with our own templates.
But is it possible to do the same thing with the generators? Right now I am editing the ones inside the vendor package but this is not ideal as my changes are lost everytime I run composer update.
For example I would like to override /vendor/infyomlabs/laravel-generator/src/Generators/Scaffold because I want to generate a DataTable.blade.php as well as a ModelDataTable.php class.
Is there a better way? Like adding my own service provider.. well to be honest it would be much easier if you just let us do it like you do with the templates. Is that possible?
Thanks!
|
1.0
|
can we override generators like we can override templates? - I find it very useful that you allow us to publish the templates and then override the ones we want with our own templates.
But is it possible to do the same thing with the generators? Right now I am editing the ones inside the vendor package but this is not ideal as my changes are lost everytime I run composer update.
For example I would like to override /vendor/infyomlabs/laravel-generator/src/Generators/Scaffold because I want to generate a DataTable.blade.php as well as a ModelDataTable.php class.
Is there a better way? Like adding my own service provider.. well to be honest it would be much easier if you just let us do it like you do with the templates. Is that possible?
Thanks!
|
non_test
|
can we override generators like we can override templates i find it very useful that you allow us to publish the templates and then override the ones we want with our own templates but is it possible to do the same thing with the generators right now i am editing the ones inside the vendor package but this is not ideal as my changes are lost everytime i run composer update for example i would like to override vendor infyomlabs laravel generator src generators scaffold because i want to generate a datatable blade php as well as a modeldatatable php class is there a better way like adding my own service provider well to be honest it would be much easier if you just let us do it like you do with the templates is that possible thanks
| 0
|
25,628
| 4,163,909,418
|
IssuesEvent
|
2016-06-18 12:22:27
|
futurice/android-best-practices
|
https://api.github.com/repos/futurice/android-best-practices
|
closed
|
Espresso
|
enhancement testing todo
|
Hello
Espresso (https://code.google.com/p/android-test-kit/) is not mentioned at all. Did you have any bad experience with it?
|
1.0
|
Espresso - Hello
Espresso (https://code.google.com/p/android-test-kit/) is not mentioned at all. Did you have any bad experience with it?
|
test
|
espresso hello espresso is not mentioned at all did you have any bad experience with it
| 1
|
791,747
| 27,874,427,766
|
IssuesEvent
|
2023-03-21 15:15:18
|
zowe/zowe-explorer-intellij
|
https://api.github.com/repos/zowe/zowe-explorer-intellij
|
closed
|
When dataset member is moved from one DS to another, load more appears instead of it
|
bug priority-high severity-medium
|
Find some DS with one member (ZOSMFAD.TEST.TEST)
Allocate some other DS with Allocate like
Move the member from the source to the new DS
When the member is moved, it appears hidden with "load more" instead of it
Copy/paste - the same

|
1.0
|
When dataset member is moved from one DS to another, load more appears instead of it - Find some DS with one member (ZOSMFAD.TEST.TEST)
Allocate some other DS with Allocate like
Move the member from the source to the new DS
When the member is moved, it appears hidden with "load more" instead of it
Copy/paste - the same

|
non_test
|
when dataset member is moved from one ds to another load more appears instead of it find some ds with one member zosmfad test test allocate some other ds with allocate like move the member from the source to the new ds when the member is moved it appears hidden with load more instead of it copy paste the same
| 0
|
490,213
| 14,116,787,705
|
IssuesEvent
|
2020-11-08 05:28:09
|
AY2021S1-CS2113T-T12-1/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-T12-1/tp
|
closed
|
Name your PPP according to your github username
|
priority.High
|
In order for the dashboard to detect your ppp, change your ppp's file name to GITUSERNAME.md (i.e. slightlyharp.md)

|
1.0
|
Name your PPP according to your github username - In order for the dashboard to detect your ppp, change your ppp's file name to GITUSERNAME.md (i.e. slightlyharp.md)

|
non_test
|
name your ppp according to your github username in order for the dashboard to detect your ppp change your ppp s file name to gitusername md i e slightlyharp md
| 0
|
162,833
| 12,692,674,859
|
IssuesEvent
|
2020-06-22 00:00:58
|
Thy-Vipe/BeastsOfBermuda-issues
|
https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues
|
closed
|
[Bug] Ichthy Swipe Bug
|
Animation Fixed! Potential fix bug tester-team
|
_Originally written by **TheRiversEdge | 76561198107871153**_
Game Version: 1.1.947
*===== System Specs =====
CPU Brand: AMD FX(tm)-8320 Eight-Core Processor
Vendor: AuthenticAMD
GPU Brand: NVIDIA GeForce GTX 1650
GPU Driver Info: Unknown
Num CPU Cores: 4
===================*
Context: **Ichthyovenator**
Map: Rival_Shores
*Expected Results:* Ichthy swipe attack animates every time when used in succession (spammed)
*Actual Results:* Ichthy swipe attack will sometimes consume AP without animating
*Replication:* Spam the ichthy swipe attack (RMB)
|
1.0
|
[Bug] Ichthy Swipe Bug - _Originally written by **TheRiversEdge | 76561198107871153**_
Game Version: 1.1.947
*===== System Specs =====
CPU Brand: AMD FX(tm)-8320 Eight-Core Processor
Vendor: AuthenticAMD
GPU Brand: NVIDIA GeForce GTX 1650
GPU Driver Info: Unknown
Num CPU Cores: 4
===================*
Context: **Ichthyovenator**
Map: Rival_Shores
*Expected Results:* Ichthy swipe attack animates every time when used in succession (spammed)
*Actual Results:* Ichthy swipe attack will sometimes consume AP without animating
*Replication:* Spam the ichthy swipe attack (RMB)
|
test
|
ichthy swipe bug originally written by theriversedge game version system specs cpu brand amd fx tm eight core processor vendor authenticamd gpu brand nvidia geforce gtx gpu driver info unknown num cpu cores context ichthyovenator map rival shores expected results ichthy swipe attack animates every time when used in succession spammed actual results ichthy swipe attack will sometimes consume ap without animating replication spam the ichthy swipe attack rmb
| 1
|
6,883
| 2,867,620,779
|
IssuesEvent
|
2015-06-05 14:22:31
|
psu-stewardship/archivesphere
|
https://api.github.com/repos/psu-stewardship/archivesphere
|
closed
|
Use Poltergeist instead of Selenium as a JS driver
|
testing
|
This should remove the local dependency on the firefox binary.
|
1.0
|
Use Poltergeist instead of Selenium as a JS driver - This should remove the local dependency on the firefox binary.
|
test
|
use poltergeist instead of selenium as a js driver this should remove the local dependency on the firefox binary
| 1
|
9,773
| 3,070,126,810
|
IssuesEvent
|
2015-08-19 00:58:31
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
parallel/test-tick-processor.js fails on ARM
|
arm test
|
Just got [this failure](https://jenkins-iojs.nodesource.com/job/node-test-commit-arm/196/nodes=armv7-wheezy/tapTestReport/test.tap-672/) on an [unrelated test change PR](https://github.com/nodejs/node/pull/2429):
```
not ok 672 - test-tick-processor.js
# nm: /lib/arm-linux-gnueabihf/ld-2.13.so: no symbols
# nm: '[sigpage]': No such file
# nm: /lib/arm-linux-gnueabihf/libdl-2.13.so: no symbols
# nm: /lib/arm-linux-gnueabihf/librt-2.13.so: no symbols
# nm: /usr/lib/arm-linux-gnueabihf/libstdc++.so.6.0.19: no symbols
# nm: /lib/arm-linux-gnueabihf/libm-2.13.so: no symbols
# nm: /lib/arm-linux-gnueabihf/libgcc_s.so.1: no symbols
# nm: /lib/arm-linux-gnueabihf/libc-2.13.so: no symbols
# nm: '[vectors]': No such file
#
# assert.js:89
# throw new assert.AssertionError({
# ^
# AssertionError: null == true
# at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-arm/nodes/armv7-wheezy/test/parallel/test-tick-processor.js:27:1)
# at Module._compile (module.js:430:26)
# at Object.Module._extensions..js (module.js:448:10)
# at Module.load (module.js:355:32)
# at Function.Module._load (module.js:310:12)
# at Function.Module.runMain (module.js:471:10)
# at startup (node.js:117:18)
# at node.js:952:3
```
It passed in the [original PR CI run](https://jenkins-iojs.nodesource.com/job/node-test-commit/117/), so it may be flaky.
cc @matthewloring
|
1.0
|
parallel/test-tick-processor.js fails on ARM - Just got [this failure](https://jenkins-iojs.nodesource.com/job/node-test-commit-arm/196/nodes=armv7-wheezy/tapTestReport/test.tap-672/) on an [unrelated test change PR](https://github.com/nodejs/node/pull/2429):
```
not ok 672 - test-tick-processor.js
# nm: /lib/arm-linux-gnueabihf/ld-2.13.so: no symbols
# nm: '[sigpage]': No such file
# nm: /lib/arm-linux-gnueabihf/libdl-2.13.so: no symbols
# nm: /lib/arm-linux-gnueabihf/librt-2.13.so: no symbols
# nm: /usr/lib/arm-linux-gnueabihf/libstdc++.so.6.0.19: no symbols
# nm: /lib/arm-linux-gnueabihf/libm-2.13.so: no symbols
# nm: /lib/arm-linux-gnueabihf/libgcc_s.so.1: no symbols
# nm: /lib/arm-linux-gnueabihf/libc-2.13.so: no symbols
# nm: '[vectors]': No such file
#
# assert.js:89
# throw new assert.AssertionError({
# ^
# AssertionError: null == true
# at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-arm/nodes/armv7-wheezy/test/parallel/test-tick-processor.js:27:1)
# at Module._compile (module.js:430:26)
# at Object.Module._extensions..js (module.js:448:10)
# at Module.load (module.js:355:32)
# at Function.Module._load (module.js:310:12)
# at Function.Module.runMain (module.js:471:10)
# at startup (node.js:117:18)
# at node.js:952:3
```
It passed in the [original PR CI run](https://jenkins-iojs.nodesource.com/job/node-test-commit/117/), so it may be flaky.
cc @matthewloring
|
test
|
parallel test tick processor js fails on arm just got on an not ok test tick processor js nm lib arm linux gnueabihf ld so no symbols nm no such file nm lib arm linux gnueabihf libdl so no symbols nm lib arm linux gnueabihf librt so no symbols nm usr lib arm linux gnueabihf libstdc so no symbols nm lib arm linux gnueabihf libm so no symbols nm lib arm linux gnueabihf libgcc s so no symbols nm lib arm linux gnueabihf libc so no symbols nm no such file assert js throw new assert assertionerror assertionerror null true at object home iojs build workspace node test commit arm nodes wheezy test parallel test tick processor js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js at startup node js at node js it passed in the so it may be flaky cc matthewloring
| 1
|
140,302
| 31,884,810,044
|
IssuesEvent
|
2023-09-16 20:27:45
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
numpy 1.26.0 has 18 GuardDog issues
|
guarddog code-execution exec-base64
| ERROR: type should be string, got "https://pypi.org/project/numpy\nhttps://inspector.pypi.io/project/numpy\n```{\n \"dependency\": \"numpy\",\n \"version\": \"1.26.0\",\n \"result\": {\n \"issues\": 18,\n \"errors\": {},\n \"results\": {\n \"exec-base64\": [\n {\n \"location\": \"numpy-1.26.0/numpy/distutils/exec_command.py:283\",\n \"code\": \" proc = subprocess.Popen(command, shell=use_shell, env=env, text=False,\\n stdout=subprocess.PIPE,\\n stderr=subprocess.STDOUT)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/auxfuncs.py:613\",\n \"code\": \" return eval('%s:%s' % (l1, ' and '.join(l2)))\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/auxfuncs.py:621\",\n \"code\": \" return eval('%s:%s' % (l1, ' or '.join(l2)))\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/capi_maps.py:314\",\n \"code\": \" ret['size'] = repr(eval(ret['size']))\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/crackfortran.py:1326\",\n \"code\": \" v = eval(initexpr, {}, params)\",\n \"message\": \"This package contains a call to 
the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/crackfortran.py:2535\",\n \"code\": \" params[n] = eval(v, g_params, params)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/crackfortran.py:2638\",\n \"code\": \" l = str(eval(l, {}, params))\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/crackfortran.py:2647\",\n \"code\": \" l = str(eval(l, {}, params))\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/numpy/f2py/crackfortran.py:2912\",\n \"code\": \" kindselect['kind'] = eval(\\n kindselect['kind'], {}, params)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/runtests.py:509\",\n \"code\": \" ret = subprocess.call(cmd, env=env, cwd=ROOT_DIR)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static 
analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/runtests.py:514\",\n \"code\": \" p = subprocess.Popen(cmd, env=env, stdout=log, stderr=log,\\n cwd=ROOT_DIR)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:34\",\n \"code\": \" return subprocess.run(['cscope', '-v', '-b', '-i-'], input=ls).returncode\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:39\",\n \"code\": \" return subprocess.run(['ctags', '-L-'], input=ls).returncode\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:44\",\n \"code\": \" return subprocess.run(['etags', '-'], input=ls).returncode\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/vendored-meson/meson/run_unittests.py:149\",\n \"code\": \" return subprocess.run(python_command + ['-m', 'pytest'] + pytest_args).returncode\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common 
method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n },\n {\n \"location\": \"numpy-1.26.0/vendored-meson/meson/unittests/linuxliketests.py:817\",\n \"code\": \" self.assertEqual(subprocess.call(installed_exe, env=env), 0)\",\n \"message\": \"This package contains a call to the `eval` function with a `base64` encoded string as argument.\\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\\nstring.\\n\"\n }\n ],\n \"code-execution\": [\n {\n \"location\": \"numpy-1.26.0/numpy/core/setup.py:87\",\n \"code\": \" binutils_ver = os.popen(\\\"ld -v\\\").readlines()[0].strip()\",\n \"message\": \"This package is executing OS commands in the setup.py file\"\n },\n {\n \"location\": \"numpy-1.26.0/setup.py:142\",\n \"code\": \" proc = subprocess.Popen(['git', 'submodule', 'status'],\\n stdout=subprocess.PIPE)\",\n \"message\": \"This package is executing OS commands in the setup.py file\"\n }\n ]\n },\n \"path\": \"/tmp/tmp1_iumoti/numpy\"\n }\n}```"
|
1.0
|
numpy 1.26.0 has 18 GuardDog issues - https://pypi.org/project/numpy
https://inspector.pypi.io/project/numpy
```{
"dependency": "numpy",
"version": "1.26.0",
"result": {
"issues": 18,
"errors": {},
"results": {
"exec-base64": [
{
"location": "numpy-1.26.0/numpy/distutils/exec_command.py:283",
"code": " proc = subprocess.Popen(command, shell=use_shell, env=env, text=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/auxfuncs.py:613",
"code": " return eval('%s:%s' % (l1, ' and '.join(l2)))",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/auxfuncs.py:621",
"code": " return eval('%s:%s' % (l1, ' or '.join(l2)))",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/capi_maps.py:314",
"code": " ret['size'] = repr(eval(ret['size']))",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/crackfortran.py:1326",
"code": " v = eval(initexpr, {}, params)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/crackfortran.py:2535",
"code": " params[n] = eval(v, g_params, params)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/crackfortran.py:2638",
"code": " l = str(eval(l, {}, params))",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/crackfortran.py:2647",
"code": " l = str(eval(l, {}, params))",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/numpy/f2py/crackfortran.py:2912",
"code": " kindselect['kind'] = eval(\n kindselect['kind'], {}, params)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/runtests.py:509",
"code": " ret = subprocess.call(cmd, env=env, cwd=ROOT_DIR)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/runtests.py:514",
"code": " p = subprocess.Popen(cmd, env=env, stdout=log, stderr=log,\n cwd=ROOT_DIR)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:34",
"code": " return subprocess.run(['cscope', '-v', '-b', '-i-'], input=ls).returncode",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:39",
"code": " return subprocess.run(['ctags', '-L-'], input=ls).returncode",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/vendored-meson/meson/mesonbuild/scripts/tags.py:44",
"code": " return subprocess.run(['etags', '-'], input=ls).returncode",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/vendored-meson/meson/run_unittests.py:149",
"code": " return subprocess.run(python_command + ['-m', 'pytest'] + pytest_args).returncode",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
},
{
"location": "numpy-1.26.0/vendored-meson/meson/unittests/linuxliketests.py:817",
"code": " self.assertEqual(subprocess.call(installed_exe, env=env), 0)",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"code-execution": [
{
"location": "numpy-1.26.0/numpy/core/setup.py:87",
"code": " binutils_ver = os.popen(\"ld -v\").readlines()[0].strip()",
"message": "This package is executing OS commands in the setup.py file"
},
{
"location": "numpy-1.26.0/setup.py:142",
"code": " proc = subprocess.Popen(['git', 'submodule', 'status'],\n stdout=subprocess.PIPE)",
"message": "This package is executing OS commands in the setup.py file"
}
]
},
"path": "/tmp/tmp1_iumoti/numpy"
}
}```
|
non_test
|
numpy has guarddog issues dependency numpy version result issues errors results exec location numpy numpy distutils exec command py code proc subprocess popen command shell use shell env env text false n stdout subprocess pipe n stderr subprocess stdout message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy auxfuncs py code return eval s s and join message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy auxfuncs py code return eval s s or join message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy capi maps py code ret repr eval ret message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy crackfortran py code v eval initexpr params message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy crackfortran py code params eval v g params params message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy crackfortran py code l str eval l params message this package contains a call to the eval function with a encoded string as argument nthis 
is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy crackfortran py code l str eval l params message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy numpy crackfortran py code kindselect eval n kindselect params message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy runtests py code ret subprocess call cmd env env cwd root dir message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy runtests py code p subprocess popen cmd env env stdout log stderr log n cwd root dir message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy vendored meson meson mesonbuild scripts tags py code return subprocess run input ls returncode message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy vendored meson meson mesonbuild scripts tags py code return subprocess run input ls returncode message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy vendored meson meson mesonbuild scripts tags py code return 
subprocess run input ls returncode message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy vendored meson meson run unittests py code return subprocess run python command pytest args returncode message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n location numpy vendored meson meson unittests linuxliketests py code self assertequal subprocess call installed exe env env message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n code execution location numpy numpy core setup py code binutils ver os popen ld v readlines strip message this package is executing os commands in the setup py file location numpy setup py code proc subprocess popen n stdout subprocess pipe message this package is executing os commands in the setup py file path tmp iumoti numpy
| 0
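Every guarddog finding in the record above carries the same "eval with a base64-encoded string" message, even for plain `subprocess` calls. A minimal sketch of the kind of AST heuristic such a rule actually encodes (the helper name and the narrow `b64decode` check are my own illustration, not guarddog's implementation):

```python
import ast

def calls_eval_on_b64(source: str) -> bool:
    """Return True if the source contains an eval(...) call whose
    argument tree includes a *.b64decode(...) call -- the pattern the
    scanner message above describes."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and getattr(node.func, "id", None) == "eval":
            # Look inside the eval() call for a base64 decode.
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and getattr(inner.func, "attr", None) == "b64decode"):
                    return True
    return False
```

On this rule, `eval(base64.b64decode(payload))` is flagged while an ordinary `subprocess.run([...])` is not, which suggests the subprocess hits in the report above are over-broad messaging rather than true matches of the stated pattern.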
|
80,481
| 7,748,601,379
|
IssuesEvent
|
2018-05-30 08:50:23
|
netzulo/qacode
|
https://api.github.com/repos/netzulo/qacode
|
opened
|
Handle get_log('client') when remote driver it's opened
|
Bug Testcase
|
# Documentation
- When you call `bot.navigation.get_log('client')` on a remote driver, it fails and adds messages to the `server` **log**
```
>>> bot.navigation.get_log('client')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/qacode-0.5.2-py3.5.egg/qacode/core/bots/modules/nav_base.py", line 238, in get_log
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/webdriver.py", line 1240, in get_log
return self.execute(Command.GET_LOG, {'type': log_type})['value']
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/webdriver.py", line 314, in execute
self.error_handler.check_response(response)
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: log type 'client' not found
(Session info: chrome=67.0.3396.62)
(Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)
>>> msgs = bot.navigation.get_log('client')
>>> msgs = bot.navigation.get_log('server')
>>> msgs
[{'level': 'INFO', 'timestamp': 1527669950611, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}, {'level': 'INFO', 'timestamp': 1527669950613, 'message': 'To upstream: {"sessionId": "9b1d6d7f841a55b6484c4bdc8a47162c", "type": "client"}'}, {'level': 'INFO', 'timestamp': 1527669950618, 'message': 'To downstream: {"sessionId":"9b1d6d7f841a55b6484c4bdc8a47162c","status":13,"value":{"message":"unknown error: log type \'client\' not found\\n (Session info: chrome=67.0.3396.62)\\n (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)"}}'}, {'level': 'INFO', 'timestamp': 1527669955170, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}]
>>> def serverp(msgs):
... for msg in msgs:
... print("SELENIUM-REMOTE: {}".format(msg))
...
>>> serverp(msgs)
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950611, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950613, 'message': 'To upstream: {"sessionId": "9b1d6d7f841a55b6484c4bdc8a47162c", "type": "client"}'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950618, 'message': 'To downstream: {"sessionId":"9b1d6d7f841a55b6484c4bdc8a47162c","status":13,"value":{"message":"unknown error: log type \'client\' not found\\n (Session info: chrome=67.0.3396.62)\\n (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)"}}'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669955170, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}
>>>
```
|
1.0
|
Handle get_log('client') when remote driver it's opened - # Documentation
- When you call `bot.navigation.get_log('client')` on a remote driver, it fails and adds messages to the `server` **log**
```
>>> bot.navigation.get_log('client')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/qacode-0.5.2-py3.5.egg/qacode/core/bots/modules/nav_base.py", line 238, in get_log
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/webdriver.py", line 1240, in get_log
return self.execute(Command.GET_LOG, {'type': log_type})['value']
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/webdriver.py", line 314, in execute
self.error_handler.check_response(response)
File "/home/ntz/.virtualenvs/qacp/lib/python3.5/site-packages/selenium-3.12.0-py3.5.egg/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: log type 'client' not found
(Session info: chrome=67.0.3396.62)
(Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)
>>> msgs = bot.navigation.get_log('client')
>>> msgs = bot.navigation.get_log('server')
>>> msgs
[{'level': 'INFO', 'timestamp': 1527669950611, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}, {'level': 'INFO', 'timestamp': 1527669950613, 'message': 'To upstream: {"sessionId": "9b1d6d7f841a55b6484c4bdc8a47162c", "type": "client"}'}, {'level': 'INFO', 'timestamp': 1527669950618, 'message': 'To downstream: {"sessionId":"9b1d6d7f841a55b6484c4bdc8a47162c","status":13,"value":{"message":"unknown error: log type \'client\' not found\\n (Session info: chrome=67.0.3396.62)\\n (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)"}}'}, {'level': 'INFO', 'timestamp': 1527669955170, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}]
>>> def serverp(msgs):
... for msg in msgs:
... print("SELENIUM-REMOTE: {}".format(msg))
...
>>> serverp(msgs)
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950611, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950613, 'message': 'To upstream: {"sessionId": "9b1d6d7f841a55b6484c4bdc8a47162c", "type": "client"}'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669950618, 'message': 'To downstream: {"sessionId":"9b1d6d7f841a55b6484c4bdc8a47162c","status":13,"value":{"message":"unknown error: log type \'client\' not found\\n (Session info: chrome=67.0.3396.62)\\n (Driver info: chromedriver=2.37.544315 (730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux 4.4.0-124-generic x86_64)"}}'}
SELENIUM-REMOTE: {'level': 'INFO', 'timestamp': 1527669955170, 'message': '/session/9b1d6d7f841a55b6484c4bdc8a47162c/log: Executing POST on /session/9b1d6d7f841a55b6484c4bdc8a47162c/log (handler: GetLogsOfType)'}
>>>
```
|
test
|
handle get log client when remote driver it s opened documentation when you call on remote driver to bot navigation get log client fails add messages to server log bot navigation get log client traceback most recent call last file line in file home ntz virtualenvs qacp lib site packages qacode egg qacode core bots modules nav base py line in get log file home ntz virtualenvs qacp lib site packages selenium egg selenium webdriver remote webdriver py line in get log return self execute command get log type log type file home ntz virtualenvs qacp lib site packages selenium egg selenium webdriver remote webdriver py line in execute self error handler check response response file home ntz virtualenvs qacp lib site packages selenium egg selenium webdriver remote errorhandler py line in check response raise exception class message screen stacktrace selenium common exceptions webdriverexception message unknown error log type client not found session info chrome driver info chromedriver platform linux generic msgs bot navigation get log client msgs bot navigation get log server msgs def serverp msgs for msg in msgs print selenium remote format msg serverp msgs selenium remote level info timestamp message session log executing post on session log handler getlogsoftype selenium remote level info timestamp message to upstream sessionid type client selenium remote level info timestamp message to downstream sessionid status value message unknown error log type client not found n session info chrome n driver info chromedriver platform linux generic selenium remote level info timestamp message session log executing post on session log handler getlogsoftype
| 1
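A defensive wrapper for the failure above can consult the driver's advertised log types first instead of letting the remote end raise `WebDriverException`. This is a sketch with a plain callable standing in for `bot.navigation`; the helper name is hypothetical (real Selenium remote drivers advertise their types via a `log_types` property):

```python
def safe_get_log(get_log, available_types, log_type):
    """Fetch entries for log_type, returning [] when the remote driver
    does not advertise that type (e.g. 'client' on a chromedriver
    session, which -- as in the traceback above -- only serves
    'server'-side logs)."""
    if log_type not in available_types:
        return []
    return get_log(log_type)

# Stand-in for the remote driver in the session above.
fake_logs = {"server": [{"level": "INFO", "message": "GetLogsOfType"}]}
entries = safe_get_log(fake_logs.get, fake_logs.keys(), "client")
```

With this guard, the `'client'` request degrades to an empty list instead of an unknown-error round trip that also pollutes the `server` log.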
|
104,234
| 22,611,093,515
|
IssuesEvent
|
2022-06-29 17:16:43
|
phetsims/mean-share-and-balance
|
https://api.github.com/repos/phetsims/mean-share-and-balance
|
closed
|
Unnecessary use of `merge`
|
dev:code-review
|
For code review #41 ...
There's an occurrence of `merge` in MeanShareAndBalanceScreenView.ts:
```typescript
this.questionBar = new QuestionBar( this.layoutBounds, this.visibleBoundsProperty, merge( {
tandem: options.tandem.createTandem( 'questionBar' )
}, { labelText: meanShareAndBalanceStrings.levelingOutQuestion, barFill: '#2496D6' } ) );
```
I was going to suggest converting to `optionize`. But both of the arguments are object literals, so can't they be combined, like this?
```typescript
this.questionBar = new QuestionBar( this.layoutBounds, this.visibleBoundsProperty, {
labelText: meanShareAndBalanceStrings.levelingOutQuestion,
barFill: '#2496D6',
tandem: options.tandem.createTandem( 'questionBar' )
} );
```
|
1.0
|
Unnecessary use of `merge` - For code review #41 ...
There's an occurrence of `merge` in MeanShareAndBalanceScreenView.ts:
```typescript
this.questionBar = new QuestionBar( this.layoutBounds, this.visibleBoundsProperty, merge( {
tandem: options.tandem.createTandem( 'questionBar' )
}, { labelText: meanShareAndBalanceStrings.levelingOutQuestion, barFill: '#2496D6' } ) );
```
I was going to suggest converting to `optionize`. But both of the arguments are object literals, so can't they be combined, like this?
```typescript
this.questionBar = new QuestionBar( this.layoutBounds, this.visibleBoundsProperty, {
labelText: meanShareAndBalanceStrings.levelingOutQuestion,
barFill: '#2496D6',
tandem: options.tandem.createTandem( 'questionBar' )
} );
```
|
non_test
|
unnecessary use of merge for code review there s an occurrence of merge in meanshareandbalancescreenview ts typescript this questionbar new questionbar this layoutbounds this visibleboundsproperty merge tandem options tandem createtandem questionbar labeltext meanshareandbalancestrings levelingoutquestion barfill i was going to suggest converting to optionize but both of the arguments are object literals so can t they be combined like this typescript this questionbar new questionbar this layoutbounds this visibleboundsproperty labeltext meanshareandbalancestrings levelingoutquestion barfill tandem options tandem createtandem questionbar
| 0
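The same simplification reads naturally in Python terms: when both arguments to a merge helper are literals, the merge step can be dropped and a single literal written instead. A small sketch (the keys mirror the `QuestionBar` options above):

```python
defaults = {"tandem": "questionBar"}
extras = {"labelText": "levelingOutQuestion", "barFill": "#2496D6"}

# Equivalent of merge(defaults, extras): later keys win on conflict.
merged = {**defaults, **extras}

# Since both sides are literals with no overlap, one literal suffices.
combined = {
    "tandem": "questionBar",
    "labelText": "levelingOutQuestion",
    "barFill": "#2496D6",
}
```

`merged == combined`, which is the point of the review comment: the helper adds nothing when both inputs are written out inline.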
|
235,479
| 19,348,778,897
|
IssuesEvent
|
2021-12-15 13:42:21
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: jepsen nightlies never run to completion
|
C-bug A-testing A-kv
|
While looking at old issues I got curious why we hadn't seen any additional failures for https://github.com/cockroachdb/cockroach/issues/49360, which used to fail at least every month or two.
Looking at the last nightly roachtest build at the time of writing [here](https://teamcity.cockroachdb.com/viewLog.html?buildId=3904068&buildTypeId=Cockroach_Nightlies_WorkloadNightly&tab=artifacts#%2Fjepsen%2Fmulti-register%2Fstrobe-skews%2Frun_1%2Fartifacts.zip) gives me the impression that due to some SSH misconfiguration, this test never succeeds:
```
ERROR [2021-12-14 11:00:38,609] clojure-agent-send-off-pool-1 - jepsen.control Error opening SSH session. Verify username, password, and node hostnames are correct.
SSH configuration is:
{:dir "/",
:private-key-path "/home/ubuntu/.ssh/id_rsa",
:password "root",
:username "ubuntu",
:port 22,
:strict-host-key-checking false,
:host "10.142.1.113",
:sudo nil,
:dummy nil,
:session nil}
ERROR [2021-12-14 11:00:38,624] main - jepsen.cli Oh jeez, I'm sorry, Jepsen broke. Here's why:
com.jcraft.jsch.JSchException: invalid privatekey: [B@44349e8c
```
but also does not report an error, likely as a result of this code:
https://github.com/cockroachdb/cockroach/blob/5d689c1b62d74689511e3e4efae4c56b00228fb6/pkg/cmd/roachtest/tests/jepsen.go#L252-L281
as we see this in the output:
```
11:00:19 jepsen.go:179: test status: running
11:00:38 jepsen.go:233: failed: output in run_110031.526429021_n6_bash: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-3904068-1639466723-44-n6cpu4:6 -- bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.142.0.110 -n 10.142.1.113 -n 10.142.1.14 -n 10.142.1.12 -n 10.142.0.135 \
--test multi-register --nemesis strobe-skews \
> invoke.log 2>&1 \
" returned: exit status 10
11:00:38 jepsen.go:253: grabbing artifacts from controller. Tail of controller log:
11:00:40 jepsen.go:277: Recognized BrokenBarrier or other known exceptions (see grep output above). Ignoring it and considering the test successful. See #30527 or #26082 for some of the ignored exceptions.
```
We need to fix the underlying SSH problem and reconsider the skip logic above.
|
1.0
|
roachtest: jepsen nightlies never run to completion - While looking at old issues I got curious why we hadn't seen any additional failures for https://github.com/cockroachdb/cockroach/issues/49360, which used to fail at least every month or two.
Looking at the last nightly roachtest build at the time of writing [here](https://teamcity.cockroachdb.com/viewLog.html?buildId=3904068&buildTypeId=Cockroach_Nightlies_WorkloadNightly&tab=artifacts#%2Fjepsen%2Fmulti-register%2Fstrobe-skews%2Frun_1%2Fartifacts.zip) gives me the impression that due to some SSH misconfiguration, this test never succeeds:
```
ERROR [2021-12-14 11:00:38,609] clojure-agent-send-off-pool-1 - jepsen.control Error opening SSH session. Verify username, password, and node hostnames are correct.
SSH configuration is:
{:dir "/",
:private-key-path "/home/ubuntu/.ssh/id_rsa",
:password "root",
:username "ubuntu",
:port 22,
:strict-host-key-checking false,
:host "10.142.1.113",
:sudo nil,
:dummy nil,
:session nil}
ERROR [2021-12-14 11:00:38,624] main - jepsen.cli Oh jeez, I'm sorry, Jepsen broke. Here's why:
com.jcraft.jsch.JSchException: invalid privatekey: [B@44349e8c
```
but also does not report an error, likely as a result of this code:
https://github.com/cockroachdb/cockroach/blob/5d689c1b62d74689511e3e4efae4c56b00228fb6/pkg/cmd/roachtest/tests/jepsen.go#L252-L281
as we see this in the output:
```
11:00:19 jepsen.go:179: test status: running
11:00:38 jepsen.go:233: failed: output in run_110031.526429021_n6_bash: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-3904068-1639466723-44-n6cpu4:6 -- bash -e -c "\
cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
~/lein run test \
--tarball file://${PWD}/cockroach.tgz \
--username ${USER} \
--ssh-private-key ~/.ssh/id_rsa \
--os ubuntu \
--time-limit 300 \
--concurrency 30 \
--recovery-time 25 \
--test-count 1 \
-n 10.142.0.110 -n 10.142.1.113 -n 10.142.1.14 -n 10.142.1.12 -n 10.142.0.135 \
--test multi-register --nemesis strobe-skews \
> invoke.log 2>&1 \
" returned: exit status 10
11:00:38 jepsen.go:253: grabbing artifacts from controller. Tail of controller log:
11:00:40 jepsen.go:277: Recognized BrokenBarrier or other known exceptions (see grep output above). Ignoring it and considering the test successful. See #30527 or #26082 for some of the ignored exceptions.
```
We need to fix the underlying SSH problem and reconsider the skip logic above.
|
test
|
roachtest jepsen nightlies never run to completion while looking at old issues i got curious why we hadn t seen any additional failures for which used to fail at least every month or two looking at the last nightly roachtest build at the time of writing here gives me the impression that due to some ssh misconfiguration this test never succeeds error clojure agent send off pool jepsen control error opening ssh session verify username password and node hostnames are correct ssh configuration is dir private key path home ubuntu ssh id rsa password root username ubuntu port strict host key checking false host sudo nil dummy nil session nil error main jepsen cli oh jeez i m sorry jepsen broke here s why com jcraft jsch jschexception invalid privatekey b but also does not report an error likely as a result of this code as we see this in the output jepsen go test status running jepsen go failed output in run bash home agent work go src github com cockroachdb cockroach bin roachprod run teamcity bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test multi register nemesis strobe skews invoke log returned exit status jepsen go grabbing artifacts from controller tail of controller log jepsen go recognized brokenbarrier or other known exceptions see grep output above ignoring it and considering the test successful see or for some of the ignored exceptions we need to fix the underlying ssh problem and reconsider the skip logic above
| 1
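One way to "reconsider the skip logic" above is to classify infrastructure errors before the known-benign grep runs, so a broken SSH setup can never read as a passing test. A hypothetical sketch (the real logic lives in Go in `jepsen.go`; the marker strings come from the logs quoted above):

```python
def classify_failure(log_tail: str) -> str:
    """Decide how a jepsen run that exited non-zero should be reported."""
    # Infrastructure problems must win over the benign-exception grep,
    # otherwise an SSH misconfiguration is swallowed on every run.
    if "JSchException" in log_tail or "Error opening SSH session" in log_tail:
        return "infrastructure-error"
    if "BrokenBarrierException" in log_tail:
        return "known-benign"  # the class of exceptions #30527/#26082 ignore
    return "test-failure"
```

Ordering the checks this way preserves the existing ignore list while making the `invalid privatekey` case a hard failure instead of a silent success.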
|
303,508
| 26,214,442,222
|
IssuesEvent
|
2023-01-04 09:47:11
|
Mbed-TLS/mbedtls
|
https://api.github.com/repos/Mbed-TLS/mbedtls
|
closed
|
Coding style change: gather and evaluate feedback
|
enhancement size-m component-test
|
Look at what the [coding style change](https://github.com/Mbed-TLS/mbedtls/issues/6346) does on new work in progress. Is the result mergeable? Is it acceptable?
This is a time-bounded task. We'll probably keep the feedback period open for a week or two.
This may lead to further issues if we discover problems at this stage.
|
1.0
|
Coding style change: gather and evaluate feedback - Look at what the [coding style change](https://github.com/Mbed-TLS/mbedtls/issues/6346) does on new work in progress. Is the result mergeable? Is it acceptable?
This is a time-bounded task. We'll probably keep the feedback period open for a week or two.
This may lead to further issues if we discover problems at this stage.
|
test
|
coding style change gather and evaluate feedback look at what the does on new work in progress is the result mergeable is it acceptable this is a time bounded task we ll probably keep the feedback period open for a week or two this may lead to further issues if we discover problems at this stage
| 1
|
380,936
| 26,435,405,093
|
IssuesEvent
|
2023-01-15 10:23:15
|
wagtail/wagtail
|
https://api.github.com/repos/wagtail/wagtail
|
closed
|
Search autocomplete
|
Documentation
|
Upon getting stuck on search autocomplete that acts a bit weird in my opinion. For example I got a list of names in snippets and have index.autocomplete in the search I still have to write full words to get some results. This is for people not the expected behavior. Researching the subject I notice the following ticket: https://github.com/wagtail/wagtail/issues/7720
There they mention the following hack:
from wagtail.search.backends.database.postgres.postgres import PostgresSearchQueryCompiler
PostgresSearchQueryCompiler.LAST_TERM_IS_PREFIX = True
Adding this to your code will result in the expected behavior of your autocomplete search. However, this is nowhere mentioned in the documentation, nor is it explained why the autocomplete function acts so strangely.
### Pertinent section of the Wagtail docs
https://docs.wagtail.org/en/latest/topics/search/indexing.html#index-autocompletefield
The above link does not seem true.
### Details
I would like the search documentation explain why index.autocomplete works the way it does now. Why adding the code snippet makes it works as expected?
|
1.0
|
Search autocomplete - Upon getting stuck on search autocomplete that acts a bit weird in my opinion. For example I got a list of names in snippets and have index.autocomplete in the search I still have to write full words to get some results. This is for people not the expected behavior. Researching the subject I notice the following ticket: https://github.com/wagtail/wagtail/issues/7720
There they mention the following hack:
from wagtail.search.backends.database.postgres.postgres import PostgresSearchQueryCompiler
PostgresSearchQueryCompiler.LAST_TERM_IS_PREFIX = True
Adding this to your code will result in the expected behavior of your autocomplete search. However, this is nowhere mentioned in the documentation, nor is it explained why the autocomplete function acts so strangely.
### Pertinent section of the Wagtail docs
https://docs.wagtail.org/en/latest/topics/search/indexing.html#index-autocompletefield
The above link does not seem true.
### Details
I would like the search documentation explain why index.autocomplete works the way it does now. Why adding the code snippet makes it works as expected?
|
non_test
|
search autocomplete upon getting stuck on search autocomplete that acts a bit weird in my opionion for example i got a list of names in snippets and have index autocomplete in the search i still have to write full words to get some results this is for people not the expected behavior researching the subject i notice the following ticket there they mention the following hack from wagtail search backends database postgres postgres import postgressearchquerycompiler postgressearchquerycompiler last term is prefix true adding this to you code will result in expected behavior of your autocomplete search however this is no where mentioned in the documentation nor is it explained why the autocomplete function acts so strange pertinent section of the wagtail docs the above link does not seem true details i would like the search documentation explain why index autocomplete works the way it does now why adding the code snippet makes it works as expected
| 0
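What the `LAST_TERM_IS_PREFIX` flag changes can be shown without Wagtail: with it off, every query word must match a full indexed word; with it on, the final word is treated as a prefix. A self-contained sketch (the function is illustrative, not Wagtail's implementation):

```python
def matches(indexed_words, query, last_term_is_prefix=False):
    """True if every query term hits an indexed word; with
    last_term_is_prefix, the final term may match as a prefix."""
    terms = query.lower().split()
    if not terms:
        return False
    for i, term in enumerate(terms):
        is_last = i == len(terms) - 1
        if last_term_is_prefix and is_last:
            hit = any(w.startswith(term) for w in indexed_words)
        else:
            hit = term in indexed_words
        if not hit:
            return False
    return True

# A list of names, as in the snippet use case above.
names = {"alexandra", "alessandro", "benjamin"}
```

Without the flag, a partial name finds nothing until the full word is typed, which is exactly the behavior the issue describes.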
|
90,486
| 26,114,386,906
|
IssuesEvent
|
2022-12-28 02:50:19
|
dotnet/arcade
|
https://api.github.com/repos/dotnet/arcade
|
closed
|
Build failed: dotnet-helix-service-weekly/main #2022122601
|
Build Failed FROps
|
Build [#2022122601](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2074113) failed
## :x: : internal / dotnet-helix-service-weekly failed
### Summary
**Finished** - Mon, 26 Dec 2022 15:43:27 GMT
**Duration** - 33 minutes
**Requested for** - Microsoft.VisualStudio.Services.TFS
**Reason** - schedule
### Details
#### SynchronizeSecrets
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret '28ec6507-2167-4eaa-a294-34408cf5dd0e' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'AccessToken-dotnet-build-bot-public-repo' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'akams-client-id' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'akams-client-secret' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'apiscan-service-principal-app-id' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'apiscan-service-principal-app-secret' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'App-DotNetCoreESRPRelease-Secret' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'App-PipeBuild-Client-Secret' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'aspnetmaven-gpg-key-id' consider deleting it.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Extra secret 'aspnetmaven-gpg-old-private-key' consider deleting it.
- :x: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2074113/logs/23) - Unhandled Exception: Password validation failed. The password does not meet policy requirements because it is not complex enough.
### Changes
|
1.0
|
Build failed: dotnet-helix-service-weekly/main #2022122601 - Build [#2022122601](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2074113) failed
|
non_test
|
| 0
|
45,888
| 5,757,214,436
|
IssuesEvent
|
2017-04-26 02:58:21
|
nskins/goby
|
https://api.github.com/repos/nskins/goby
|
closed
|
Refactor Shop Class
|
better test suite enhancement
|
Rewrite the main loop of running a `Shop` event. Include a suite of tests that covers all those lines and any other functions of the class.
|
1.0
|
|
test
|
| 1
|
289,932
| 25,024,832,376
|
IssuesEvent
|
2022-11-04 06:40:50
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: cdc/mixed-versions failed
|
C-test-failure O-robot O-roachtest release-blocker branch-release-22.2
|
roachtest.cdc/mixed-versions [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7309203?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/7309203?buildTab=artifacts#/cdc/mixed-versions) on release-22.2 @ [3acf4ebe119d402fa52688b3c31e2f16be0a6140](https://github.com/cockroachdb/cockroach/commits/3acf4ebe119d402fa52688b3c31e2f16be0a6140):
```
test artifacts and logs in: /artifacts/cdc/mixed-versions/run_1
test_runner.go:1061,test_runner.go:960: test timed out (30m0s)
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #91046 roachtest: cdc/mixed-versions failed [C-test-failure O-roachtest O-robot T-testeng branch-master]
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/mixed-versions.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
|
test
|
| 1
|
322,122
| 27,583,718,226
|
IssuesEvent
|
2023-03-08 18:01:32
|
facebook/react-native
|
https://api.github.com/repos/facebook/react-native
|
closed
|
Crash on email detection in TextInput on Xiaomi devices running android 10
|
Stale Component: TextInput Platform: Android Bug Needs: Author Feedback Needs: Repro Needs: Verify on Latest Version
|
Xiaomi devices running android 10 show a little popup with the text `Frequent email` when they detect a valid email address in the text input currently selected. This feature causes some crashes in my react native app. Sadly I can't reproduce it on an empty project, but it always happens on some inputs in my app. I have not been able to determine the difference between the text inputs that cause the problem and the ones that do not crash.
React Native version: 0.61.2
## Steps To Reproduce
1. Select a textinput
2. Type a valid email address (such as `test@test.com`). As soon as I type the last `c` (creating a valid email format), the app crashes.
The error:
> java.lang.NullPointerException · Attempt to invoke direct method 'void android.widget.Editor$SelectionModifierCursorController.initDrawables()' on a null object reference
The full stack trace:
```
java.lang.NullPointerException: Attempt to invoke direct method 'void android.widget.Editor$SelectionModifierCursorController.initDrawables()' on a null object reference
at android.widget.Editor$SelectionModifierCursorController.access$300(Editor.java:6696)
at android.widget.Editor.getEmailPopupWindow(Editor.java:1469)
at android.widget.Editor.showEmailPopupWindow(Editor.java:1477)
at android.widget.Editor.handleEmailPopup(Editor.java:1456)
at android.widget.Editor.updateCursorPosition(Editor.java:2099)
at android.widget.TextView.getUpdatedHighlightPath(TextView.java:7813)
at android.widget.TextView.onDraw(TextView.java:7998)
at android.view.View.draw(View.java:21472)
at android.view.View.updateDisplayListIfDirty(View.java:20349)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ViewGroup.recreateChildDisplayList(ViewGroup.java:4396)
2019-11-12 17:19:57.876 20111-20111/com.yourvoice.ccApp.dev E/AndroidRuntime: at android.view.ViewGroup.dispatchGetDisplayList(ViewGroup.java:4369)
at android.view.View.updateDisplayListIfDirty(View.java:20309)
at android.view.ThreadedRenderer.updateViewTreeDisplayList(ThreadedRenderer.java:575)
at android.view.ThreadedRenderer.updateRootDisplayList(ThreadedRenderer.java:581)
at android.view.ThreadedRenderer.draw(ThreadedRenderer.java:654)
at android.view.ViewRootImpl.draw(ViewRootImpl.java:3687)
at android.view.ViewRootImpl.performDraw(ViewRootImpl.java:3482)
at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:2819)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1782)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:7785)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:1031)
at android.view.Choreographer.doCallbacks(Choreographer.java:854)
at android.view.Choreographer.doFrame(Choreographer.java:789)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:1016)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:221)
at android.app.ActivityThread.main(ActivityThread.java:7520)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:539)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:950)
```
|
1.0
|
|
test
|
| 1
|
310,803
| 26,745,743,597
|
IssuesEvent
|
2023-01-30 15:52:49
|
scylladb/scylladb
|
https://api.github.com/repos/scylladb/scylladb
|
closed
|
large allocation during double_node_failure_during_mv_insert_3_nodes_test
|
materialized-views dtest
|
Seen in [dtest-release/100/artifact/logs-release.2/1555735815550_materialized_views_test.TestMaterializedViews.double_node_failure_during_mv_insert_3_nodes_test/node1.log](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/100/artifact/logs-release.2/1555735815550_materialized_views_test.TestMaterializedViews.double_node_failure_during_mv_insert_3_nodes_test/node1.log):
```
WARN 2019-04-20 04:49:49,020 [shard 0] seastar_memory - oversized allocation: 1310720 bytes. This is non-fatal, but could lead to latency and/or fragmentation issues. Please report: at 0x438610b
0x3ed8213
0x3edcb1b
0x24a62f5
0x2498414
0x2499ab8
0x249b93e
0x1dd4e11
0x1ddc0a0
0x1ebe22c
0x1de8124
0x1e2c17e
0x1e2b979
0x1e2e3a0
0x3e9946c
0x3f21ff1
0x3f221ee
0x3ff68d5
0x3e94651
0x8b66f4
/jenkins/workspace/scylla-master/dtest-release@2/scylla-dtest/../scylla/dynamic_libs/libc.so.6+0x24412
0x9166ed
$ addr2line -Cfpi -e logs-release.2/scylla
seastar::current_backtrace() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../include/seastar/util/backtrace.hh:55
(inlined by) seastar::current_backtrace() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/util/backtrace.cc:84
seastar::memory::cpu_pages::warn_large_allocation(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:624
operator new(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:632
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:638
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1209
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1259
(inlined by) operator new(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1662
seastar::circular_buffer<seastar::shared_mutex::waiter, std::allocator<seastar::shared_mutex::waiter> >::expand(unsigned long) at /usr/include/c++/8/ext/new_allocator.h:111
(inlined by) seastar::circular_buffer<seastar::shared_mutex::waiter, std::allocator<seastar::shared_mutex::waiter> >::expand(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:301
db::hints::manager::end_point_hints_manager::store_hint(seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr)::{lambda()#1}::operator()() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:295
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:331
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:391
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/shared_mutex.hh:73
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/shared_mutex.hh:137
(inlined by) operator() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/db/hints/manager.cc:166
db::hints::manager::end_point_hints_manager::store_hint(seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1482
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1554
```
|
1.0
|
large allocation during double_node_failure_during_mv_insert_3_nodes_test - Seen in [dtest-release/100/artifact/logs-release.2/1555735815550_materialized_views_test.TestMaterializedViews.double_node_failure_during_mv_insert_3_nodes_test/node1.log](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/100/artifact/logs-release.2/1555735815550_materialized_views_test.TestMaterializedViews.double_node_failure_during_mv_insert_3_nodes_test/node1.log):
```
WARN 2019-04-20 04:49:49,020 [shard 0] seastar_memory - oversized allocation: 1310720 bytes. This is non-fatal, but could lead to latency and/or fragmentation issues. Please report: at 0x438610b
0x3ed8213
0x3edcb1b
0x24a62f5
0x2498414
0x2499ab8
0x249b93e
0x1dd4e11
0x1ddc0a0
0x1ebe22c
0x1de8124
0x1e2c17e
0x1e2b979
0x1e2e3a0
0x3e9946c
0x3f21ff1
0x3f221ee
0x3ff68d5
0x3e94651
0x8b66f4
/jenkins/workspace/scylla-master/dtest-release@2/scylla-dtest/../scylla/dynamic_libs/libc.so.6+0x24412
0x9166ed
$ addr2line -Cfpi -e logs-release.2/scylla
seastar::current_backtrace() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../include/seastar/util/backtrace.hh:55
(inlined by) seastar::current_backtrace() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/util/backtrace.cc:84
seastar::memory::cpu_pages::warn_large_allocation(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:624
operator new(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:632
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:638
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1209
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1259
(inlined by) operator new(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/memory.cc:1662
seastar::circular_buffer<seastar::shared_mutex::waiter, std::allocator<seastar::shared_mutex::waiter> >::expand(unsigned long) at /usr/include/c++/8/ext/new_allocator.h:111
(inlined by) seastar::circular_buffer<seastar::shared_mutex::waiter, std::allocator<seastar::shared_mutex::waiter> >::expand(unsigned long) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:301
db::hints::manager::end_point_hints_manager::store_hint(seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr)::{lambda()#1}::operator()() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:295
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:331
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/circular_buffer.hh:391
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/shared_mutex.hh:73
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/shared_mutex.hh:137
(inlined by) operator() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/db/hints/manager.cc:166
db::hints::manager::end_point_hints_manager::store_hint(seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1482
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1554
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/gate.hh:123
(inlined by) db::hints::manager::end_point_hints_manager::store_hint(seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/db/hints/manager.cc:160
db::hints::manager::store_hint(gms::inet_address, seastar::lw_shared_ptr<schema const>, seastar::lw_shared_ptr<frozen_mutation const>, tracing::trace_state_ptr) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/db/hints/manager.cc:299
service::storage_proxy::hint_to_dead_endpoints<std::vector<gms::inet_address, std::allocator<gms::inet_address> > >(std::unique_ptr<service::mutation_holder, std::default_delete<service::mutation_holder> >&, std::vector<gms::inet_address, std::allocator<gms::inet_address> > const&, db::write_type, tracing::trace_state_ptr)::{lambda(gms::inet_address)#1}::operator()(gms::inet_address) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1625
(inlined by) ?? at /usr/include/c++/8/bits/predefined_ops.h:283
(inlined by) ?? at /usr/include/c++/8/bits/stl_algo.h:3194
(inlined by) ?? at /usr/include/c++/8/bits/stl_algo.h:4105
(inlined by) ?? at /usr/include/boost/range/algorithm/count_if.hpp:44
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1620
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1616
(inlined by) service::storage_proxy::hint_to_dead_endpoints(unsigned long, db::consistency_level) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1068
operator() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1104
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1482
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1554
(inlined by) parallel_for_each<__gnu_cxx::__normal_iterator<service::storage_proxy::unique_response_handler*, std::vector<service::storage_proxy::unique_response_handler> >, service::storage_proxy::mutate_begin(std::vector<service::storage_proxy::unique_response_handler>, db::consistency_level, std::optional<std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long int, std::ratio<1, 1000> > > >)::<lambda(service::storage_proxy::unique_response_handler&)> > at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future-util.hh:129
seastar::future<> service::storage_proxy::mutate_internal<boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > > >(boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > >, db::consistency_level, bool, tracing::trace_state_ptr, std::optional<std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > > >) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future-util.hh:180
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1111
(inlined by) service::storage_proxy::mutate_internal<boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > > >(boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > >, db::consistency_level, bool, tracing::trace_state_ptr, std::optional<std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > > >)::{lambda(std::vector<service::storage_proxy::unique_response_handler, std::allocator<service::storage_proxy::unique_response_handler> >)#1}::operator()(std::vector<service::storage_proxy::unique_response_handler, std::allocator<service::storage_proxy::unique_response_handler> >) const at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1288
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/apply.hh:35
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/apply.hh:43
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1472
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1002
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:983
(inlined by) seastar::future<> service::storage_proxy::mutate_internal<boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > > >(boost::iterator_range<__gnu_cxx::__normal_iterator<mutation*, std::vector<mutation, std::allocator<mutation> > > >, db::consistency_level, bool, tracing::trace_state_ptr, std::optional<std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > > >) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1291
service::storage_proxy::do_mutate(std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/service/storage_proxy.cc:1252
seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)>::direct_vtable_for<std::_Mem_fn<seastar::future<> (service::storage_proxy::*)(std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> >::call(seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> const*, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) at /usr/include/c++/8/bits/invoke.h:73
(inlined by) ?? at /usr/include/c++/8/bits/invoke.h:96
(inlined by) ?? at /usr/include/c++/8/functional:114
(inlined by) seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)>::direct_vtable_for<std::_Mem_fn<seastar::future<> (service::storage_proxy::*)(std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> >::call(seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> const*, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/util/noncopyable_function.hh:71
seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)>::direct_vtable_for<seastar::inheriting_concrete_execution_stage<seastar::future<>, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool>::make_stage_for_group(seastar::scheduling_group)::{lambda(service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)#1}>::call(seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> const*, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/util/noncopyable_function.hh:145
(inlined by) seastar::inheriting_concrete_execution_stage<seastar::future<>, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool>::make_stage_for_group(seastar::scheduling_group)::{lambda(service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)#1}::operator()(service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) const at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/execution_stage.hh:323
(inlined by) seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)>::direct_vtable_for<seastar::inheriting_concrete_execution_stage<seastar::future<>, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool>::make_stage_for_group(seastar::scheduling_group)::{lambda(service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)#1}>::call(seastar::noncopyable_function<seastar::future<> (service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool)> const*, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/util/noncopyable_function.hh:71
seastar::concrete_execution_stage<seastar::future<>, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool>::do_flush() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/util/noncopyable_function.hh:145
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/apply.hh:35
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/apply.hh:43
(inlined by) ?? at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/future.hh:1472
(inlined by) seastar::concrete_execution_stage<seastar::future<>, service::storage_proxy*, std::vector<mutation, std::allocator<mutation> >, db::consistency_level, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000l> > >, tracing::trace_state_ptr, bool>::do_flush() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/include/seastar/core/execution_stage.hh:243
operator() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/execution_stage.cc:140
(inlined by) run_and_dispose at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../include/seastar/core/task.hh:48
seastar::reactor::run_tasks(seastar::reactor::task_queue&) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/reactor.cc:3652
seastar::reactor::run_some_tasks() [clone .part.2880] at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/reactor.cc:4077
seastar::reactor::run() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/reactor.cc:4060
(inlined by) seastar::reactor::run() at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/reactor.cc:4220
seastar::app_template::run_deprecated(int, char**, std::function<void ()>&&) at /jenkins/workspace/scylla-master/dtest-release@2/scylla/seastar/build/release/../../src/core/app-template.cc:186
main at /jenkins/workspace/scylla-master/dtest-release@2/scylla/main.cc:350
```
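The symbolized trace above was produced by resolving the raw frame addresses from the `seastar_memory` warning with `addr2line -Cfpi -e logs-release.2/scylla` (`-C` demangle, `-f` function names, `-p` pretty-print, `-i` show inlined frames), as the `$ addr2line` line in the log shows. A minimal sketch of the address-extraction step that precedes that command (sample log lines are inlined here for illustration; the binary path is the one from the log above):

```shell
# Pull the raw 0x... frame addresses out of an oversized-allocation warning,
# so they can then be passed to: addr2line -Cfpi -e logs-release.2/scylla <addrs...>
printf '%s\n' \
  'WARN ... seastar_memory - oversized allocation: 1310720 bytes ... at 0x438610b' \
  '0x3ed8213' \
  '0x3edcb1b' \
  '0x24a62f5' |
  grep -oE '0x[0-9a-f]+' | tr '\n' ' '
# → 0x438610b 0x3ed8213 0x3edcb1b 0x24a62f5
```

Note that `grep -oE` skips decimal values such as the `1310720 bytes` size, so only the hexadecimal frame addresses reach `addr2line`.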
|
test
|
scylla master dtest release scylla service storage proxy cc seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool direct vtable for service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool call seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool const service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool at usr include c bits invoke h inlined by at usr include c bits invoke h inlined by at usr include c functional inlined by seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool direct vtable for service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool call seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool const service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool at jenkins workspace scylla master dtest release scylla seastar include seastar util noncopyable function hh seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool direct vtable for service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool make stage for group seastar scheduling group lambda service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool call seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool const service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool at jenkins workspace scylla 
master dtest release scylla seastar include seastar util noncopyable function hh inlined by seastar inheriting concrete execution stage service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool make stage for group seastar scheduling group lambda service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool operator service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool const at jenkins workspace scylla master dtest release scylla seastar include seastar core execution stage hh inlined by seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool direct vtable for service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool make stage for group seastar scheduling group lambda service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool call seastar noncopyable function service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool const service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool at jenkins workspace scylla master dtest release scylla seastar include seastar util noncopyable function hh seastar concrete execution stage service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool do flush at jenkins workspace scylla master dtest release scylla seastar include seastar util noncopyable function hh inlined by at jenkins workspace scylla master dtest release scylla seastar include seastar core apply hh inlined by at jenkins workspace scylla master dtest release scylla seastar include seastar core apply hh inlined by at jenkins workspace scylla master dtest release scylla seastar include seastar core future hh inlined by seastar 
concrete execution stage service storage proxy std vector db consistency level std chrono time point tracing trace state ptr bool do flush at jenkins workspace scylla master dtest release scylla seastar include seastar core execution stage hh operator at jenkins workspace scylla master dtest release scylla seastar build release src core execution stage cc inlined by run and dispose at jenkins workspace scylla master dtest release scylla seastar build release include seastar core task hh seastar reactor run tasks seastar reactor task queue at jenkins workspace scylla master dtest release scylla seastar build release src core reactor cc seastar reactor run some tasks at jenkins workspace scylla master dtest release scylla seastar build release src core reactor cc seastar reactor run at jenkins workspace scylla master dtest release scylla seastar build release src core reactor cc inlined by seastar reactor run at jenkins workspace scylla master dtest release scylla seastar build release src core reactor cc seastar app template run deprecated int char std function at jenkins workspace scylla master dtest release scylla seastar build release src core app template cc main at jenkins workspace scylla master dtest release scylla main cc
| 1
|
400,721
| 11,779,727,462
|
IssuesEvent
|
2020-03-16 18:35:00
|
AbsaOSS/enceladus
|
https://api.github.com/repos/AbsaOSS/enceladus
|
opened
|
Default values for Array and Struct
|
Standardization feature priority: medium
|
## Background
When a non-null default (fallback) value for complex types (array, struct) is requested during **Standardization**, the process stops with an exception because the value is not defined.
## Feature
Add a default (fallback) value for Array and Struct types
## Proposed Solution [Optional]
Array: empty array
Struct: each field is filled with the default value of the field's type
|
1.0
|
Default values for Array and Struct - ## Background
When a non-null default (fallback) value for complex types (array, struct) is requested during **Standardization**, the process stops with an exception because the value is not defined.
## Feature
Add a default (fallback) value for Array and Struct types
## Proposed Solution [Optional]
Array: empty array
Struct: each field is filled with the default value of the field's type
|
non_test
|
default values for array and struct background when a non null default fallback value for complex types array struct would be requested during the standardization it will stop the process with exception because the value is not defined feature add default fallback value for array and struct types proposed solution array empty array struct each field is filled with default value of the fields type
| 0
|
325,804
| 27,963,485,992
|
IssuesEvent
|
2023-03-24 17:24:35
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: quit-all-nodes failed
|
C-test-failure O-robot O-roachtest T-sql-sessions branch-release-22.1
|
roachtest.quit-all-nodes [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9088349?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9088349?buildTab=artifacts#/quit-all-nodes) on release-22.1 @ [53e8c66a87edc7eff417e0836390c14ec903ce76](https://github.com/cockroachdb/cockroach/commits/53e8c66a87edc7eff417e0836390c14ec903ce76):
```
test artifacts and logs in: /artifacts/quit-all-nodes/run_1
(quit.go:454).runQuit: context deadline exceeded
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*quit-all-nodes.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-25788
|
2.0
|
roachtest: quit-all-nodes failed - roachtest.quit-all-nodes [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9088349?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9088349?buildTab=artifacts#/quit-all-nodes) on release-22.1 @ [53e8c66a87edc7eff417e0836390c14ec903ce76](https://github.com/cockroachdb/cockroach/commits/53e8c66a87edc7eff417e0836390c14ec903ce76):
```
test artifacts and logs in: /artifacts/quit-all-nodes/run_1
(quit.go:454).runQuit: context deadline exceeded
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*quit-all-nodes.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-25788
|
test
|
roachtest quit all nodes failed roachtest quit all nodes with on release test artifacts and logs in artifacts quit all nodes run quit go runquit context deadline exceeded parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb server jira issue crdb
| 1
|
241,827
| 20,164,629,241
|
IssuesEvent
|
2022-02-10 02:08:50
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
jobs: package timed out
|
C-test-failure O-robot A-jobs branch-master T-jobs
|
unknown.(unknown) [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2971093&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=2971093&tab=artifacts#/) on master @ [7f257d42c7b24ba7ef6be9b5d16e9bcd1d37f098](https://github.com/cockroachdb/cockroach/commits/7f257d42c7b24ba7ef6be9b5d16e9bcd1d37f098):
```
Slow failing tests:
TestTransientTxnErrors - 604.14s
Slow passing tests:
TestTenantLogic - 1157.35s
TestLogic - 844.35s
TestAlterTableLocalityRegionalByRowError - 178.01s
TestRegionChangeRacingRegionalByRowChange - 143.87s
TestAllRegisteredImportFixture - 116.67s
TestCCLLogic - 111.46s
TestTypeChangeJobCancelSemantics - 92.82s
TestRestoreMidSchemaChange - 84.11s
TestRollbackSyncRangedIntentResolution - 83.69s
TestIndexCleanupAfterAlterFromRegionalByRow - 73.89s
TestTelemetry - 65.49s
TestBTreeDeleteInsertCloneEachTime - 61.65s
TestExecBuild - 60.42s
TestRemoveDeadReplicas - 57.59s
TestImportIntoCSV - 57.49s
TestRingBuffer - 55.76s
TestImportData - 54.40s
TestConcurrentAddDropRegions - 53.80s
TestRaceWithBackfill - 49.70s
Example_demo - 49.01s
```
<details><summary>Reproduce</summary>
<p>
<p>To reproduce, try:
```bash
make stressrace TESTS=(unknown) PKG=./pkg/unknown TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
</p>
<p>Parameters in this failure:
- GOFLAGS=-json
</p>
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*\(unknown\).*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
jobs: package timed out - unknown.(unknown) [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2971093&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=2971093&tab=artifacts#/) on master @ [7f257d42c7b24ba7ef6be9b5d16e9bcd1d37f098](https://github.com/cockroachdb/cockroach/commits/7f257d42c7b24ba7ef6be9b5d16e9bcd1d37f098):
```
Slow failing tests:
TestTransientTxnErrors - 604.14s
Slow passing tests:
TestTenantLogic - 1157.35s
TestLogic - 844.35s
TestAlterTableLocalityRegionalByRowError - 178.01s
TestRegionChangeRacingRegionalByRowChange - 143.87s
TestAllRegisteredImportFixture - 116.67s
TestCCLLogic - 111.46s
TestTypeChangeJobCancelSemantics - 92.82s
TestRestoreMidSchemaChange - 84.11s
TestRollbackSyncRangedIntentResolution - 83.69s
TestIndexCleanupAfterAlterFromRegionalByRow - 73.89s
TestTelemetry - 65.49s
TestBTreeDeleteInsertCloneEachTime - 61.65s
TestExecBuild - 60.42s
TestRemoveDeadReplicas - 57.59s
TestImportIntoCSV - 57.49s
TestRingBuffer - 55.76s
TestImportData - 54.40s
TestConcurrentAddDropRegions - 53.80s
TestRaceWithBackfill - 49.70s
Example_demo - 49.01s
```
<details><summary>Reproduce</summary>
<p>
<p>To reproduce, try:
```bash
make stressrace TESTS=(unknown) PKG=./pkg/unknown TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
</p>
<p>Parameters in this failure:
- GOFLAGS=-json
</p>
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*\(unknown\).*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
jobs package timed out unknown unknown with on master slow failing tests testtransienttxnerrors slow passing tests testtenantlogic testlogic testaltertablelocalityregionalbyrowerror testregionchangeracingregionalbyrowchange testallregisteredimportfixture testccllogic testtypechangejobcancelsemantics testrestoremidschemachange testrollbacksyncrangedintentresolution testindexcleanupafteralterfromregionalbyrow testtelemetry testbtreedeleteinsertcloneeachtime testexecbuild testremovedeadreplicas testimportintocsv testringbuffer testimportdata testconcurrentadddropregions testracewithbackfill example demo reproduce to reproduce try bash make stressrace tests unknown pkg pkg unknown testtimeout stressflags timeout parameters in this failure goflags json
| 1
|
311,029
| 23,367,190,477
|
IssuesEvent
|
2022-08-10 16:20:39
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
closed
|
Add handwritten datasources section
|
documentation size/s docs-fixit
|
Add a section on how to add/ modify handwritten datasources (native datasources and resource-based datasources) in handwritten readme.
https://github.com/GoogleCloudPlatform/magic-modules/tree/main/mmv1/third_party/terraform#datasource
|
1.0
|
Add handwritten datasources section - Add a section on how to add/ modify handwritten datasources (native datasources and resource-based datasources) in handwritten readme.
https://github.com/GoogleCloudPlatform/magic-modules/tree/main/mmv1/third_party/terraform#datasource
|
non_test
|
add handwritten datasources section add a section on how to add modify handwritten datasources native datasources and resource based datasources in handwritten readme
| 0
|
306,369
| 26,462,374,251
|
IssuesEvent
|
2023-01-16 19:02:06
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
BUG: DataFrame.replace fails to replace value when column contains pd.NA
|
good first issue Needs Tests replace NA - MaskedArrays
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({'A': [0, 1, 2]})
print(df)
# A
# 0 0
# 1 1
# 2 2
df['A'].replace(to_replace=2, value=99, inplace=True)
print(df)
# A
# 0 0
# 1 1
# 2 99
df.at[0, 'A'] = pd.NA
df['A'].replace(to_replace=1, value=100, inplace=True)
print(df)
# A
# 0 <NA>
# 1 1 <-- should be 100
# 2 99
```
### Issue Description
Pandas replace function does not seem to work on a column if the column contains at least one pd.NA value
### Expected Behavior
replace function should work even if pd.NA values are in the column
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 66e3805b8cabe977f40c05259cc3fcf7ead5687d
python : 3.10.0.final.0
python-bits : 64
OS : Linux
OS-release : 5.16.19-76051619-generic
Version : #202204081339~1649696161~20.04~091f44b~dev-Ubuntu SMP PREEMPT Tu
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.5
numpy : 1.21.2
pytz : 2021.3
dateutil : 2.8.2
pip : 21.2.4
setuptools : 58.0.4
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 3.0.3
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.2
IPython : 7.29.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.5.1
numexpr : None
odfpy : None
openpyxl : 3.0.9
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.8.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
|
1.0
|
BUG: DataFrame.replace fails to replace value when column contains pd.NA - ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the main branch of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({'A': [0, 1, 2]})
print(df)
# A
# 0 0
# 1 1
# 2 2
df['A'].replace(to_replace=2, value=99, inplace=True)
print(df)
# A
# 0 0
# 1 1
# 2 99
df.at[0, 'A'] = pd.NA
df['A'].replace(to_replace=1, value=100, inplace=True)
print(df)
# A
# 0 <NA>
# 1 1 <-- should be 100
# 2 99
```
### Issue Description
Pandas replace function does not seem to work on a column if the column contains at least one pd.NA value
### Expected Behavior
replace function should work even if pd.NA values are in the column
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 66e3805b8cabe977f40c05259cc3fcf7ead5687d
python : 3.10.0.final.0
python-bits : 64
OS : Linux
OS-release : 5.16.19-76051619-generic
Version : #202204081339~1649696161~20.04~091f44b~dev-Ubuntu SMP PREEMPT Tu
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.5
numpy : 1.21.2
pytz : 2021.3
dateutil : 2.8.2
pip : 21.2.4
setuptools : 58.0.4
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : 3.0.3
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.2
IPython : 7.29.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.5.1
numexpr : None
odfpy : None
openpyxl : 3.0.9
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.8.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
|
test
|
bug dataframe replace fails to replace value when column contains pd na pandas version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pandas i have confirmed this bug exists on the main branch of pandas reproducible example python import pandas as pd df pd dataframe a print df a df replace to replace value inplace true print df a df at pd na df replace to replace value inplace true print df a should be issue description pandas replace function does not seem to work on a column if the column contains at least one pd na value expected behavior replace function should work even if pd na values are in the column installed versions installed versions commit python final python bits os linux os release generic version dev ubuntu smp preempt tu machine processor byteorder little lc all none lang en us utf locale en us utf pandas numpy pytz dateutil pip setuptools cython none pytest none hypothesis none sphinx none blosc none feather none xlsxwriter lxml etree none none pymysql none none ipython pandas datareader none none bottleneck none fsspec none fastparquet none gcsfs none matplotlib numexpr none odfpy none openpyxl pandas gbq none pyarrow none pyxlsb none none scipy sqlalchemy none tables none tabulate none xarray none xlrd none xlwt none numba none
| 1
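The pandas record above describes `Series.replace` silently skipping matches once a column contains `pd.NA`. As an illustrative aside (not part of the dataset), the sketch below runs the same kind of replacement on the nullable `Int64` dtype, where recent pandas versions handle it correctly; the values are invented for demonstration.

```python
import pandas as pd

# Minimal sketch of the operation from the record above, using the
# nullable "Int64" dtype. On pandas 1.3.x the presence of pd.NA could
# cause replace() to miss matches; on a fixed pandas the value 1 is
# replaced while the NA entry is left untouched.
s = pd.Series([pd.NA, 1, 2], dtype="Int64")
out = s.replace(to_replace=1, value=100)
print(out.tolist())
```

On a pandas version where the issue is fixed, this prints `[<NA>, 100, 2]`.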
|
621,501
| 19,588,926,065
|
IssuesEvent
|
2022-01-05 10:34:27
|
apache/dolphinscheduler
|
https://api.github.com/repos/apache/dolphinscheduler
|
closed
|
[Bug] [menu] The content of the menu on the left is not fully displayed.
|
bug priority-middle
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The Workflow Monitor menu cannot fully display the content when there is data due to its width. When other menu texts are too long, they cannot be displayed completely.
### What you expected to happen
A combination of the width of the menu and the length of the text content. It can be improved by the `tooltip` component.
### How to reproduce
The content of Workflow Monitor is too much.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
1.0
|
[Bug] [menu] The content of the menu on the left is not fully displayed. - ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The Workflow Monitor menu cannot fully display the content when there is data due to its width. When other menu texts are too long, they cannot be displayed completely.
### What you expected to happen
A combination of the width of the menu and the length of the text content. It can be improved by the `tooltip` component.
### How to reproduce
The content of Workflow Monitor is too much.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
non_test
|
the content of the menu on the left is not fully displayed search before asking i had searched in the and found no similar issues what happened the workflow monitor menu cannot fully display the content when there is data due to its width when other menu texts are too long they cannot be displayed completely what you expected to happen a combination of the width of the menu and the length of the text content it can be improved by the tooltip component how to reproduce the content of workflow monitor is too much anything else no response version dev are you willing to submit pr yes i am willing to submit a pr code of conduct i agree to follow this project s
| 0
|
52,292
| 13,218,646,288
|
IssuesEvent
|
2020-08-17 09:06:04
|
hikaya-io/activity
|
https://api.github.com/repos/hikaya-io/activity
|
reopened
|
IPTT: For Target column in the various periods, use the Target Period value and not summation
|
defect priority
|
**Current behavior**
When we have multiple entries for a certain Period Target, the target values are summed and the total is displayed in the IPTT report. For example, if we have two entries each with a Target of 2, the target is displayed as 4 instead of 2.
See screenshot:
<img width="1374" alt="image" src="https://user-images.githubusercontent.com/16039248/82843054-95223300-9ee4-11ea-8928-afc8db17e514.png">
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Reports > Indicator Tracking Table
2. Generate a report by completing all the dropdowns
3. Look at your period targets specifically the `Target` column and see multiple if you have multiple collection entries, the calculation is a summation of all collection entries.
4. See error
**Expected behavior**
The Target Value displayed should only show the Period Target value, not a summation of the entries against this Target period.
|
1.0
|
IPTT: For Target column in the various periods, use the Target Period value and not summation - **Current behavior**
When we have multiple entries for a certain Period Target, the target values are summed and the total is displayed in the IPTT report. For example, if we have two entries each with a Target of 2, the target is displayed as 4 instead of 2.
See screenshot:
<img width="1374" alt="image" src="https://user-images.githubusercontent.com/16039248/82843054-95223300-9ee4-11ea-8928-afc8db17e514.png">
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Reports > Indicator Tracking Table
2. Generate a report by completing all the dropdowns
3. Look at your period targets specifically the `Target` column and see multiple if you have multiple collection entries, the calculation is a summation of all collection entries.
4. See error
**Expected behavior**
The Target Value displayed should only show the Period Target value, not a summation of the entries against this Target period.
|
non_test
|
iptt for target column in the various periods use the target period value and not summation current behavior when we have multiple entries for a certain period target the target values are summed for each entry and then displayed in the iptt report for example if we have two entries for the target of say for each entry the target is displayed as instead of see screenshot img width alt image src to reproduce steps to reproduce the behavior go to reports indicator tracking table generate a report by completing all the dropdowns look at your period targets specifically the target column and see multiple if you have multiple collection entries the calculation is a summation of all collection entries see error expected behavior the target value displayed should only show the period target value not a summation of the entries against this target period
| 0
|
183,177
| 6,677,978,734
|
IssuesEvent
|
2017-10-05 12:44:43
|
resin-io/pensieve
|
https://api.github.com/repos/resin-io/pensieve
|
closed
|
Add a feature for handling merge conflicts on save
|
priority
|
Before saving we should check to see if there have been changes to the source document. If there is a conflict we should have a way for users to resolve the conflict.
As a bare minimum, we could have a function where a PR can be opened from the pensieve and the user can merge in using the GitHub UI.
|
1.0
|
Add a feature for handling merge conflicts on save - Before saving we should check to see if there have been changes to the source document. If there is a conflict we should have a way for users to resolve the conflict.
As a bare minimum, we could have a function where a PR can be opened from the pensieve and the user can merge in using the GitHub UI.
|
non_test
|
add a feature for handling merge conflicts on save before saving we should check to see if there have been changes to the source document if there is a conflict we should have a way for users to resolve the conflict as a bare minimum we could have a function where a pr can be opened from the pensieve and the user can merge in using the github ui
| 0
|
321,607
| 27,542,590,503
|
IssuesEvent
|
2023-03-07 09:35:00
|
finos/waltz
|
https://api.github.com/repos/finos/waltz
|
closed
|
Indicator section name is confusing, rename to Statistics (Indicators) - will drop the Indicators bit in 1.48
|
small change fixed (test & close)
|
### Description
The underlying table is `entity_statistic` and everyone calls them statistics. The gui has them labelled as Indicators. Suggest we rename them.
### Resourcing
We would like to add this request to the Waltz team's feature backlog
|
1.0
|
Indicator section name is confusing, rename to Statistics (Indicators) - will drop the Indicators bit in 1.48 - ### Description
The underlying tables is `entity_statistic` and everyone calls them statistics. The gui has them labelled as Indicators. Suggest we rename them.
### Resourcing
We would like to add this request to the Waltz team's feature backlog
|
test
|
indicator section name is confusing rename to statistics indicators will drop the indicators bit in description the underlying tables is entity statistic and everyone calls them statistics the gui has them labelled as indicators suggest we rename them resourcing we would like to add this request to the waltz team s feature backlog
| 1
|
6,457
| 7,624,386,455
|
IssuesEvent
|
2018-05-03 17:51:07
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
How to enable / disable for SDK | Local Development
|
cxp in-progress product-question service-fabric triaged
|
When I'm running Service Fabric locally, for development, how do I enable/disable this service?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d273a891-b7f9-81db-bfc5-044e6755541f
* Version Independent ID: dc9f17bb-64af-d4b5-186f-7647c0a0c511
* Content: [Azure Service Fabric DNS service](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-dnsservice)
* Content Source: [articles/service-fabric/service-fabric-dnsservice.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-dnsservice.md)
* Service: **service-fabric**
* GitHub Login: @msfussell
* Microsoft Alias: **msfussell**
|
1.0
|
How to enable / disable for SDK | Local Development - When I'm running Service Fabric locally, for development, how do I enable/disable this service?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d273a891-b7f9-81db-bfc5-044e6755541f
* Version Independent ID: dc9f17bb-64af-d4b5-186f-7647c0a0c511
* Content: [Azure Service Fabric DNS service](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-dnsservice)
* Content Source: [articles/service-fabric/service-fabric-dnsservice.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-dnsservice.md)
* Service: **service-fabric**
* GitHub Login: @msfussell
* Microsoft Alias: **msfussell**
|
non_test
|
how to enable disable for sdk local development when i m running service fabric locally for development how do i enable disable this service document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service fabric github login msfussell microsoft alias msfussell
| 0
|
110,346
| 23,916,247,231
|
IssuesEvent
|
2022-09-09 12:58:32
|
SneaksAndData/anti-clustering
|
https://api.github.com/repos/SneaksAndData/anti-clustering
|
closed
|
[BUG] Update dependencies
|
code/bug
|
**Describe the bug**
Some dependencies uses outdated versions. They should be upgraded.
|
1.0
|
[BUG] Update dependencies - **Describe the bug**
Some dependencies uses outdated versions. They should be upgraded.
|
non_test
|
update dependencies describe the bug some dependencies uses outdated versions they should be upgraded
| 0
|
84,611
| 7,928,757,368
|
IssuesEvent
|
2018-07-06 12:54:23
|
ValveSoftware/steam-for-linux
|
https://api.github.com/repos/ValveSoftware/steam-for-linux
|
closed
|
Screenshots Break After Use
|
Need Retest overlay
|
#### Your system information
* Steam client version (build number or date): Built: Mar 30 2017, at 22:20:57; Steam package versions: 1490914880
* Distribution (e.g. Ubuntu): Manjaro 64 bit (Arch based)
* Opted into Steam client beta?: Yes
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large code pastes as a [Github Gist](https://gist.github.com/)
While running games I would expect that pressing the screenshot button (F12) would result in a screenshot being taken which would then later prompt me to upload them to the Steam community.
However, when I run a game it seems I can only take 2-3 screenshots before it stops working. The games will stutter like I'm taking a screenshot, but no notification appears, and no screenshot is taken. The first 2-3 appear in my respective screenshot folders, the others do not.
I've tried a few random games and have had the same results, such as Saints Row the Third, some Source based game, and Duke3D Megaton Edition.
I opted into the Steam beta (the version I'm using now) with the same result.
I should also note that in my /tmp/dumps folder I seem to get some assert dumps.
If you need any additional information such as information from Steam if ran from the terminal, please let me know.
#### Steps for reproducing this issue:
1. Open a game.
2. Take some screenshots.
3. Screenshots no longer work.
|
1.0
|
Screenshots Break After Use - #### Your system information
* Steam client version (build number or date): Built: Mar 30 2017, at 22:20:57; Steam package versions: 1490914880
* Distribution (e.g. Ubuntu): Manjaro 64 bit (Arch based)
* Opted into Steam client beta?: Yes
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
Describe what you _expected_ should happen and what _did_ happen. Please link any large code pastes as a [Github Gist](https://gist.github.com/)
While running games I would expect that pressing the screenshot button (F12) would result in a screenshot being taken which would then later prompt me to upload them to the Steam community.
However, when I run a game it seems I can only take 2-3 screenshots before it stops working. The games will stutter like I'm taking a screenshot, but no notification appears, and no screenshot is taken. The first 2-3 appear in my respective screenshot folders, the others do not.
I've tried a few random games and have had the same results, such as Saints Row the Third, some Source based game, and Duke3D Megaton Edition.
I opted into the Steam beta (the version I'm using now) with the same result.
I should also note that in my /tmp/dumps folder I seem to get some assert dumps.
If you need any additional information such as information from Steam if ran from the terminal, please let me know.
#### Steps for reproducing this issue:
1. Open a game.
2. Take some screenshots.
3. Screenshots no longer work.
|
test
|
screenshots break after use your system information steam client version build number or date built mar at steam package versions distribution e g ubuntu manjaro bit arch based opted into steam client beta yes have you checked for system updates yes please describe your issue in as much detail as possible describe what you expected should happen and what did happen please link any large code pastes as a while running games i would expect that pressing the screenshot button would result in a screenshot being taken which would then later prompt me to upload them to the steam community however when i run a game it seems i can only take screenshots before it stops working the games will stutter like i m taking a screenshot but no notification appears and no screenshot is taken the first appear in my respective screenshot folders the others do not i ve tried a few random games and have had the same results such as saints row the third some source based game and megaton edition i opted into the steam beta the version i m using now with the same result i should also note that in my tmp dumps folder i seem to get some assert dumps if you need any additional information such as information from steam if ran from the terminal please let me know steps for reproducing this issue open a game take some screenshots screenshots no longer work
| 1
|
329,520
| 10,021,041,262
|
IssuesEvent
|
2019-07-16 13:53:38
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
guanjia.qq.com - see bug description
|
browser-firefox engine-gecko priority-critical
|
<!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://guanjia.qq.com/
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Prelozit
**Steps to Reproduce**:
Nejde preložiť.
[](https://webcompat.com/uploads/2019/7/9d897793-30a6-4d31-89ca-e72493258133.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190712011116</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.warn(emonitor) https://guanjia.qq.com/:130:13]', u'[JavaScript Warning: "The script from https://tag.baidu.com/vcard/v.js?siteid=6470019&url=https%3A%2F%2Fguanjia.qq.com%2F&source=&rnd=9796067&hm=1 was loaded even though its MIME type (text/html) is not a valid JavaScript MIME type." {file: "https://guanjia.qq.com/" line: 0}]', u'[JavaScript Warning: "The script from https://p.guanjia.qq.com/bin/other/date.php?jsonp=1&callback=jQuery17202367511014679402_1563231660523&_=1563231691171 was loaded even though its MIME type (application/json) is not a valid JavaScript MIME type." {file: "https://guanjia.qq.com/" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
guanjia.qq.com - see bug description - <!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://guanjia.qq.com/
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Prelozit
**Steps to Reproduce**:
Nejde preložiť.
[](https://webcompat.com/uploads/2019/7/9d897793-30a6-4d31-89ca-e72493258133.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190712011116</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.warn(emonitor) https://guanjia.qq.com/:130:13]', u'[JavaScript Warning: "The script from https://tag.baidu.com/vcard/v.js?siteid=6470019&url=https%3A%2F%2Fguanjia.qq.com%2F&source=&rnd=9796067&hm=1 was loaded even though its MIME type (text/html) is not a valid JavaScript MIME type." {file: "https://guanjia.qq.com/" line: 0}]', u'[JavaScript Warning: "The script from https://p.guanjia.qq.com/bin/other/date.php?jsonp=1&callback=jQuery17202367511014679402_1563231660523&_=1563231691171 was loaded even though its MIME type (application/json) is not a valid JavaScript MIME type." {file: "https://guanjia.qq.com/" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
guanjia qq com see bug description url browser version firefox operating system windows tested another browser yes problem type something else description prelozit steps to reproduce nejde preložiť browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u from with ❤️
| 0
|
288,105
| 24,882,768,426
|
IssuesEvent
|
2022-10-28 03:47:09
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Orçamento - Execução - Fortuna de Minas
|
generalization test development template - Memory (66) tag - Orçamento subtag - Execução
|
DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Fortuna de Minas.
|
1.0
|
Teste de generalizacao para a tag Orçamento - Execução - Fortuna de Minas - DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Fortuna de Minas.
|
test
|
teste de generalizacao para a tag orçamento execução fortuna de minas dod realizar o teste de generalização do validador da tag orçamento execução para o município de fortuna de minas
| 1
|
84,922
| 3,681,738,721
|
IssuesEvent
|
2016-02-24 05:34:11
|
codenameone/CodenameOne
|
https://api.github.com/repos/codenameone/CodenameOne
|
opened
|
JSObject & JSContext need method documentation and better Java 8/5 support
|
Priority-Low
|
Currently many methods in the javascript package lack javadoc comments and while they are mostly intuitive this would probably make the resulting javadoc look better.
Methods are also missing usage of varargs (the ...) for arguments and aren't generified. Both probably because the code is pretty old.
Also Callback is used and we should switch to SuccessCallback and possibly FailureCallback both of which will allow us to write lambda code for the callback logic.
|
1.0
|
JSObject & JSContext need method documentation and better Java 8/5 support - Currently many methods in the javascript package lack javadoc comments and while they are mostly intuitive this would probably make the resulting javadoc look better.
Methods are also missing usage of varargs (the ...) for arguments and aren't generified. Both probably because the code is pretty old.
Also Callback is used and we should switch to SuccessCallback and possibly FailureCallback both of which will allow us to write lambda code for the callback logic.
|
non_test
|
jsobject jscontext need method documentation and better java support currently many methods in the javascript package lack javadoc comments and while they are mostly intuitive this would probably make the resulting javadoc look better methods are also missing usage of varargs the for arguments and aren t generified both probably because the code is pretty old also callback is used and we should switch to successcallback and possibly failurecallback both of which will allow us to write lambda code for the callback logic
| 0
|
137,712
| 18,762,552,559
|
IssuesEvent
|
2021-11-05 18:18:37
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Contradiction in statements
|
security/svc triaged cxp in-progress doc-enhancement Pri1 security-fundamentals/subsvc
|
hello,
In the article we say at one point :
"_Zero Trust networks eliminate the concept of trust based on network location within a perimeter._"
In the best practices part we have :
"Best practice: Give Conditional Access to resources based on device, identity, assurance, **network location**, and more."
For me it is confusing to mention network location in the best practices part.
I think we should remove network location from the statement.
Let me know what you think,
Razvan
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a1856224-8709-d6db-c533-50e5e05f2091
* Version Independent ID: f4f18c8f-c849-6da6-e0ab-a52c6eeec07a
* Content: [Best practices for network security - Microsoft Azure](https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices)
* Content Source: [articles/security/fundamentals/network-best-practices.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/security/fundamentals/network-best-practices.md)
* Service: **security**
* Sub-service: **security-fundamentals**
* GitHub Login: @TerryLanfear
* Microsoft Alias: **TomSh**
|
True
|
Contradiction in statements -
hello,
In the article we say at one point :
"_Zero Trust networks eliminate the concept of trust based on network location within a perimeter._"
In the best practices part we have :
"Best practice: Give Conditional Access to resources based on device, identity, assurance, **network location**, and more."
For me it is confusing to mention network location in the best practices part.
I think we should remove network location from the statement.
Let me know what you think,
Razvan
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a1856224-8709-d6db-c533-50e5e05f2091
* Version Independent ID: f4f18c8f-c849-6da6-e0ab-a52c6eeec07a
* Content: [Best practices for network security - Microsoft Azure](https://docs.microsoft.com/en-us/azure/security/fundamentals/network-best-practices)
* Content Source: [articles/security/fundamentals/network-best-practices.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/security/fundamentals/network-best-practices.md)
* Service: **security**
* Sub-service: **security-fundamentals**
* GitHub Login: @TerryLanfear
* Microsoft Alias: **TomSh**
|
non_test
|
contradiction in statements hello in the article we say at one point zero trust networks eliminate the concept of trust based on network location within a perimeter in the best practices part we have best practice give conditional access to resources based on device identity assurance network location and more for me it is confusing to mention network location in the best practices part i think we should remove network location from the statement let me know what you think razvan document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service security sub service security fundamentals github login terrylanfear microsoft alias tomsh
| 0
|
135,807
| 11,018,131,464
|
IssuesEvent
|
2019-12-05 09:55:02
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
opened
|
The tooltip of the localized 'Activities' isn't localized
|
🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.11.1
**Build:** [20191205.1](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3295843)
**Branch:** master
**Language:** Chinese(zh-TW)/Chinese(zh-CN)
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '繁體中文' -> Restart Storage Explorer.
3. Hover the localized 'Activities'.
4. Check the tooltip.
**Expect Experience:**
The tooltip of the localized 'Activities' is localized.
**Actual Experience:**
The tooltip of the localized 'Activities' isn't localized.

|
1.0
|
The tooltip of the localized 'Activities' isn't localized - **Storage Explorer Version:** 1.11.1
**Build:** [20191205.1](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3295843)
**Branch:** master
**Language:** Chinese(zh-TW)/Chinese(zh-CN)
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '繁體中文' -> Restart Storage Explorer.
3. Hover the localized 'Activities'.
4. Check the tooltip.
**Expect Experience:**
The tooltip of the localized 'Activities' is localized.
**Actual Experience:**
The tooltip of the localized 'Activities' isn't localized.

|
test
|
the tooltip of the localized activities isn t localized storage explorer version build branch master language chinese zh tw chinese zh cn platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select 繁體中文 restart storage explorer hover the localized activities check the tooltip expect experience the tooltip of the localized activities is localized actual experience the tooltip of the localized activities isn t localized
| 1
|
298,365
| 25,820,504,058
|
IssuesEvent
|
2022-12-12 09:14:07
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
closed
|
Release 4.4.0 - Alpha 1 - E2E UX tests - Slack + Pagerduty + Shuffle
|
type/test/manual team/frontend release test/4.4.0
|
The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Test information
| | |
|-------------------------|--------------------------------------------|
| **Test name** | Slack + Pagerduty + Shuffle |
| **Category** | Integrations |
| **Deployment option** | [Step-by-Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html) |
| **Main release issue** | https://github.com/wazuh/wazuh/issues/15505 |
| **Main E2E UX test issue** | https://github.com/wazuh/wazuh/issues/15519 |
| **Release candidate #** | Alpha 1 |
## Test description
Deploy Wazuh with the following design
| Component | Guide | Cluster / Single | OS |
|-------------|------|-----------------|----|
| indexer | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)| Single | Ubuntu 22.04 |
| server | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-server/step-by-step.html)| Cluster | Ubuntu 18.04 |
| dashboard | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-dashboard/step-by-step.html)| Single | Ubuntu 22.04 |
| agent | [ Dashboard command ](https://documentation.wazuh.com/current/installation-guide/wazuh-agent/index.html) | Single | Fedora 37 |
Following the documentation, do your best effort to test Slack, Pagerduty, and Shuffle integrations, including:
- Using a wazuh cluster of at least 2 worker nodes, use one for Slack and another for Pagerduty
- Test Slack integration [POC](https://documentation-dev.wazuh.com/current/proof-of-concept-guide/poc-integrate-slack.html#slack-integration)
- Test integration with external APIs section from [documentation](https://documentation-dev.wazuh.com/current/user-manual/manager/manual-integration.html) for Slack, Pagerduty, and Shuffle
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause.
A comprehensive report of the test results must be attached as a ZIP or TXT file. Please attach any documents, screenshots, or tables to the issue update with the results. The auditors can use this report to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found [here]().
| | | | |
|----------------|-------------|---------------------|----------------|
| **Status** | **Test** | **Failure type** | **Notes** |
| 🟢 | Indexer installation step-by-step | | |
| 🟢 | Configuring the Wazuh indexer | | |
| 🟢 | Cluster initialization | | |
| 🟢 | Deploying certificates | | |
| 🟢 | Installing the Wazuh manager | | |
| 🟢 | Installing Filebeat | | |
| 🟢 | Wazuh dashboard installation | | |
| 🟢 | Configuring the wazuh dashboard | | |
| 🔴 | Accessing dashboard | | https://github.com/wazuh/wazuh-kibana-app/issues/4938 |
| 🔴 | Installing the agent using WU | | https://github.com/wazuh/wazuh-kibana-app/issues/4809 |
| 🟢 | Configure Slack integration | | |
| 🟡 | Configure PgerDuty integration | | https://github.com/wazuh/wazuh/issues/4264 |
| 🔴 | Configure shuffler integration | No documentation | https://github.com/wazuh/wazuh/issues/15034 |
All tests have passed and the fails have been reported or justified. Therefore, I conclude that this issue is finished and OK for this release candidate.
## Auditors' validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] https://github.com/orgs/wazuh/teams/watchdogs
- [x] @davidjiglesias
|
2.0
|
Release 4.4.0 - Alpha 1 - E2E UX tests - Slack + Pagerduty + Shuffle - The following issue aims to run the specified test for the current release candidate, report the results, and open new issues for any encountered errors.
## Test information
| | |
|-------------------------|--------------------------------------------|
| **Test name** | Slack + Pagerduty + Shuffle |
| **Category** | Integrations |
| **Deployment option** | [Step-by-Step](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html) |
| **Main release issue** | https://github.com/wazuh/wazuh/issues/15505 |
| **Main E2E UX test issue** | https://github.com/wazuh/wazuh/issues/15519 |
| **Release candidate #** | Alpha 1 |
## Test description
Deploy Wazuh with the following design
| Component | Guide | Cluster / Single | OS |
|-------------|------|-----------------|----|
| indexer | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-indexer/step-by-step.html)| Single | Ubuntu 22.04 |
| server | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-server/step-by-step.html)| Cluster | Ubuntu 18.04 |
| dashboard | [ Step-by-step ](https://documentation.wazuh.com/current/installation-guide/wazuh-dashboard/step-by-step.html)| Single | Ubuntu 22.04 |
| agent | [ Dashboard command ](https://documentation.wazuh.com/current/installation-guide/wazuh-agent/index.html) | Single | Fedora 37 |
Following the documentation, do your best effort to test Slack, Pagerduty, and Shuffle integrations, including:
- Using a wazuh cluster of at least 2 worker nodes, use one for Slack and another for Pagerduty
- Test Slack integration [POC](https://documentation-dev.wazuh.com/current/proof-of-concept-guide/poc-integrate-slack.html#slack-integration)
- Test integration with external APIs section from [documentation](https://documentation-dev.wazuh.com/current/user-manual/manager/manual-integration.html) for Slack, Pagerduty, and Shuffle
## Test report procedure
All test results must have one of the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed result. |
| :yellow_circle: | There is at least one expected failure or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause.
A comprehensive report of the test results must be attached as a ZIP or TXT file. Please attach any documents, screenshots, or tables to the issue update with the results. The auditors can use this report to dig deeper into any possible failures and details.
## Conclusions
All tests have been executed and the results can be found [here]().
| | | | |
|----------------|-------------|---------------------|----------------|
| **Status** | **Test** | **Failure type** | **Notes** |
| 🟢 | Indexer installation step-by-step | | |
| 🟢 | Configuring the Wazuh indexer | | |
| 🟢 | Cluster initialization | | |
| 🟢 | Deploying certificates | | |
| 🟢 | Installing the Wazuh manager | | |
| 🟢 | Installing Filebeat | | |
| 🟢 | Wazuh dashboard installation | | |
| 🟢 | Configuring the wazuh dashboard | | |
| 🔴 | Accessing dashboard | | https://github.com/wazuh/wazuh-kibana-app/issues/4938 |
| 🔴 | Installing the agent using WU | | https://github.com/wazuh/wazuh-kibana-app/issues/4809 |
| 🟢 | Configure Slack integration | | |
| 🟡 | Configure PgerDuty integration | | https://github.com/wazuh/wazuh/issues/4264 |
| 🔴 | Configure shuffler integration | No documentation | https://github.com/wazuh/wazuh/issues/15034 |
All tests have passed and the fails have been reported or justified. Therefore, I conclude that this issue is finished and OK for this release candidate.
## Auditors' validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [x] https://github.com/orgs/wazuh/teams/watchdogs
- [x] @davidjiglesias
|
test
|
release alpha ux tests slack pagerduty shuffle the following issue aims to run the specified test for the current release candidate report the results and open new issues for any encountered errors test information test name slack pagerduty shuffle category integrations deployment option main release issue main ux test issue release candidate alpha test description deploy wazuh with the following design component guide cluster single os indexer single ubuntu server cluster ubuntu dashboard single ubuntu agent single fedora following the documentation do your best effort to test slack pagerduty and shuffle integrations including using a wazuh cluster of at least worker nodes use one for slack and another for pagerduty test slack integration test integration with external apis section from for slack pagerduty and shuffle test report procedure all test results must have one of the following statuses green circle all checks passed red circle there is at least one failed result yellow circle there is at least one expected failure or skipped test and no failures any failing test must be properly addressed with a new issue detailing the error and the possible cause a comprehensive report of the test results must be attached as a zip or txt file please attach any documents screenshots or tables to the issue update with the results the auditors can use this report to dig deeper into any possible failures and details conclusions all tests have been executed and the results can be found status test failure type notes 🟢 indexer installation step by step 🟢 configuring the wazuh indexer 🟢 cluster initialization 🟢 deploying certificates 🟢 installing the wazuh manager 🟢 installing filebeat 🟢 wazuh dashboard installation 🟢 configuring the wazuh dashboard 🔴 accessing dashboard 🔴 installing the agent using wu 🟢 configure slack integration 🟡 configure pgerduty integration 🔴 configure shuffler integration no documentation all tests have passed and the fails have been reported or justified therefore i conclude that this issue is finished and ok for this release candidate auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue davidjiglesias
| 1
|
123,460
| 16,497,343,685
|
IssuesEvent
|
2021-05-25 11:44:10
|
model-checking/rmc
|
https://api.github.com/repos/model-checking/rmc
|
opened
|
Mimick crate compilation process as in cargo/rustc
|
Area: compilation Type: design
|
#105 revealed some problems with the way we compile upstream/source crates. In particular, some functions may be missing in source crate output because they are included in the upstream crate output. In this case, it is possible we need to generate a single output, but further research is needed.
The suggested steps to follow are:
1. Find out how cargo/rustc deals with upstream crates
2. Review the existing code base for crate handling
3. Generate output files mimicking the process as in cargo/rustc (e.g., generate one output file if that is what cargo/rustc does)
|
1.0
|
Mimick crate compilation process as in cargo/rustc - #105 revealed some problems with the way we compile upstream/source crates. In particular, some functions may be missing in source crate output because they are included in the upstream crate output. In this case, it is possible we need to generate a single output, but further research is needed.
The suggested steps to follow are:
1. Find out how cargo/rustc deals with upstream crates
2. Review the existing code base for crate handling
3. Generate output files mimicking the process as in cargo/rustc (e.g., generate one output file if that is what cargo/rustc does)
|
non_test
|
mimick crate compilation process as in cargo rustc revealed some problems with the way we compile upstream source crates in particular some functions may be missing in source crate output because they are included in the upstream crate output in this case it is possible we need to generate a single output but further research is needed the suggested steps to follow are find out how cargo rustc deals with upstream crates review the existing code base for crate handling generate output files mimicking the process as in cargo rustc e g generate one output file if that is what cargo rustc does
| 0
|
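The compilation problem described in the record above (functions missing from the source crate's output because codegen already placed them in an upstream crate's output) can be sketched abstractly. The crate and symbol names below are hypothetical, not taken from rmc or cargo; the sketch only shows why a per-crate lookup fails where a single merged output succeeds:

```python
# Hypothetical per-crate codegen outputs: crate name -> set of generated symbols.
# "dep" is an upstream crate; "app" is the source crate under analysis.
outputs = {
    "dep": {"dep::helper", "dep::generic_fn::<i32>"},
    "app": {"app::main"},
}

def find_symbol_per_crate(crate, symbol):
    """Naive lookup: consults only one crate's output file."""
    return symbol in outputs[crate]

def find_symbol_merged(symbol):
    """Merged lookup: consults a single combined output, as the issue suggests."""
    merged = set().union(*outputs.values())
    return symbol in merged

# app calls dep::generic_fn::<i32>, but that symbol lives in dep's output,
# so the per-crate view reports it missing while the merged view finds it.
assert not find_symbol_per_crate("app", "dep::generic_fn::<i32>")
assert find_symbol_merged("dep::generic_fn::<i32>")
```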
52,747
| 6,275,587,673
|
IssuesEvent
|
2017-07-18 07:22:32
|
doctrine/doctrine2
|
https://api.github.com/repos/doctrine/doctrine2
|
closed
|
Multiple level discriminator mapping does not work in combination with a self-referencing one-to-many relation.
|
Missing Tests Question
|
Given the following structure of entities, we are experiencing fatal errors the moment we try to make use of the self-referencing relation.

The multiple level discrimination on itself works fine. We can query for the object directly or get it from a related entity. But the moment we insert data in the parent_id column in the database and try to call the ‘getParent()’ method we get the following fatal error:
_PHP Fatal error: Cannot instantiate abstract class FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct in ..vendor/doctrine/orm/lib/Doctrine/ORM/Mapping/ClassMetadataInfo.php on line 872_
An important detail is that we CAN use the self-referencing relation when querying for the ‘AddonMedicine’ entity. So it looks like that when we use the combination of the self-referencing relation AND the multiple level discrimination something is not interpreted right and that Doctrine is trying to instantiate the ‘middle layer’, being the AbstractProduct.
These are the mapping files in use:
**AbstractLine**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\AbstractLine"
table="invoiceline"
inheritance-type="JOINED">
<discriminator-column name="type" type="string" />
<discriminator-map>
<discriminator-mapping value="socialsupportproduct_effort" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct" />
<discriminator-mapping value="socialsupportproduct_output" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct" />
<discriminator-mapping value="addonmedicine" class="FinancialBundle\Entity\Invoice\Line\AddonMedicine" />
</discriminator-map>
<id name="id" column="id" type="integer">
<generator strategy="AUTO" />
</id>
<field name="quantity" column="quantity" type="float" nullable="false" />
<field name="price" column="price" type="float" nullable="false" />
<field name="subtotal" column="subtotal" type="integer" nullable="false" />
<field name="parentId" column="parent_id" type="integer" nullable="true" />
<many-to-one field="invoice" target-entity="FinancialBundle\Entity\Invoice\AbstractInvoice" inversed-by="invoiceLines">
<join-column name="invoice_id" referenced-column-name="id" nullable="false" />
</many-to-one>
<many-to-one field="invoiceLineVat" target-entity="FinancialBundle\Entity\Invoice\Line\Vat">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelinevat_id" referenced-column-name="id" nullable="true" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**AbstractProduct**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct"
table="invoicelinesocialsupportproduct"
repository-class="FinancialBundle\Repository\Invoice\Line\SocialSupport\AbstractProductRepository"
inheritance-type="JOINED">
<discriminator-column name="type" type="string" />
<discriminator-map>
<discriminator-mapping value="socialsupportproduct_effort" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\EffortOrientedProduct" />
<discriminator-mapping value="socialsupportproduct_output" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\OutputOrientedProduct" />
</discriminator-map>
<field name="startDate" column="startdate" type="date" nullable="false" />
<field name="endDate" column="enddate" type="date" nullable="true" />
<many-to-one field="product" target-entity="FinancialBundle\Entity\Invoice\Line\GenericSocialSupportProduct">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelinegenericsocialsupportproduct_id" referenced-column-name="id" />
</many-to-one>
<many-to-one field="contract" target-entity="FinancialBundle\Entity\Contract\AbstractContract">
<join-column name="fin_contract_id" referenced-column-name="id" nullable="false" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**OutputOrientedProduct**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\SocialSupport\OutputOrientedProduct"
table="invoicelinesocialsupportproductoutput"
repository-class="FinancialBundle\Repository\Invoice\Line\SocialSupport\Product\OutputOrientedRepository">
<many-to-one field="treatmentActivity" target-entity="Medical\SocialSupport\AbstractTreatmentActivity">
<join-column name="behandeling_socialsupportactivity_id" referenced-column-name="id" />
</many-to-one>
<many-to-one field="allocatedProduct" target-entity="FinancialBundle\Entity\Invoice\Line\GenericDispositionProductAllocated">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelineallocatedproduct_id" referenced-column-name="id" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**AddonMedicine**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\AddonMedicine"
table="invoicelineaddonmedicine"
repository-class="FinancialBundle\Repository\Invoice\Line\AddonMedicineRepository">
<one-to-one field="addon" target-entity="FinancialBundle\Entity\Invoice\Line\AddonMedicine\Addon">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelineaddonmedicineaddon_id" referenced-column-name="id" nullable="false" />
</one-to-one>
<many-to-one field="contract" target-entity="FinancialBundle\Entity\Contract\AbstractContract">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="fin_contract_id" referenced-column-name="id" nullable="false" />
</many-to-one>
</entity>
</doctrine-mapping>
```
|
1.0
|
Multiple level discriminator mapping does not work in combination with a self-referencing one-to-many relation. - Given the following structure of entities, we are experiencing fatal errors the moment we try to make use of the self-referencing relation.

The multiple level discrimination on itself works fine. We can query for the object directly or get it from a related entity. But the moment we insert data in the parent_id column in the database and try to call the ‘getParent()’ method we get the following fatal error:
_PHP Fatal error: Cannot instantiate abstract class FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct in ..vendor/doctrine/orm/lib/Doctrine/ORM/Mapping/ClassMetadataInfo.php on line 872_
An important detail is that we CAN use the self-referencing relation when querying for the ‘AddonMedicine’ entity. So it looks like that when we use the combination of the self-referencing relation AND the multiple level discrimination something is not interpreted right and that Doctrine is trying to instantiate the ‘middle layer’, being the AbstractProduct.
These are the mapping files in use:
**AbstractLine**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\AbstractLine"
table="invoiceline"
inheritance-type="JOINED">
<discriminator-column name="type" type="string" />
<discriminator-map>
<discriminator-mapping value="socialsupportproduct_effort" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct" />
<discriminator-mapping value="socialsupportproduct_output" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct" />
<discriminator-mapping value="addonmedicine" class="FinancialBundle\Entity\Invoice\Line\AddonMedicine" />
</discriminator-map>
<id name="id" column="id" type="integer">
<generator strategy="AUTO" />
</id>
<field name="quantity" column="quantity" type="float" nullable="false" />
<field name="price" column="price" type="float" nullable="false" />
<field name="subtotal" column="subtotal" type="integer" nullable="false" />
<field name="parentId" column="parent_id" type="integer" nullable="true" />
<many-to-one field="invoice" target-entity="FinancialBundle\Entity\Invoice\AbstractInvoice" inversed-by="invoiceLines">
<join-column name="invoice_id" referenced-column-name="id" nullable="false" />
</many-to-one>
<many-to-one field="invoiceLineVat" target-entity="FinancialBundle\Entity\Invoice\Line\Vat">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelinevat_id" referenced-column-name="id" nullable="true" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**AbstractProduct**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\SocialSupport\AbstractProduct"
table="invoicelinesocialsupportproduct"
repository-class="FinancialBundle\Repository\Invoice\Line\SocialSupport\AbstractProductRepository"
inheritance-type="JOINED">
<discriminator-column name="type" type="string" />
<discriminator-map>
<discriminator-mapping value="socialsupportproduct_effort" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\EffortOrientedProduct" />
<discriminator-mapping value="socialsupportproduct_output" class="FinancialBundle\Entity\Invoice\Line\SocialSupport\OutputOrientedProduct" />
</discriminator-map>
<field name="startDate" column="startdate" type="date" nullable="false" />
<field name="endDate" column="enddate" type="date" nullable="true" />
<many-to-one field="product" target-entity="FinancialBundle\Entity\Invoice\Line\GenericSocialSupportProduct">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelinegenericsocialsupportproduct_id" referenced-column-name="id" />
</many-to-one>
<many-to-one field="contract" target-entity="FinancialBundle\Entity\Contract\AbstractContract">
<join-column name="fin_contract_id" referenced-column-name="id" nullable="false" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**OutputOrientedProduct**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\SocialSupport\OutputOrientedProduct"
table="invoicelinesocialsupportproductoutput"
repository-class="FinancialBundle\Repository\Invoice\Line\SocialSupport\Product\OutputOrientedRepository">
<many-to-one field="treatmentActivity" target-entity="Medical\SocialSupport\AbstractTreatmentActivity">
<join-column name="behandeling_socialsupportactivity_id" referenced-column-name="id" />
</many-to-one>
<many-to-one field="allocatedProduct" target-entity="FinancialBundle\Entity\Invoice\Line\GenericDispositionProductAllocated">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelineallocatedproduct_id" referenced-column-name="id" />
</many-to-one>
</entity>
</doctrine-mapping>
```
**AddonMedicine**
```
<?xml version="1.0" encoding="utf-8"?>
<doctrine-mapping
xmlns="http://doctrine-project.org/schemas/orm/doctrine-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://doctrine-project.org/schemas/orm/doctrine-mapping https://raw.githubusercontent.com/doctrine/doctrine2/2.4/doctrine-mapping.xsd">
<entity name="FinancialBundle\Entity\Invoice\Line\AddonMedicine"
table="invoicelineaddonmedicine"
repository-class="FinancialBundle\Repository\Invoice\Line\AddonMedicineRepository">
<one-to-one field="addon" target-entity="FinancialBundle\Entity\Invoice\Line\AddonMedicine\Addon">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="invoicelineaddonmedicineaddon_id" referenced-column-name="id" nullable="false" />
</one-to-one>
<many-to-one field="contract" target-entity="FinancialBundle\Entity\Contract\AbstractContract">
<cascade>
<cascade-persist/>
</cascade>
<join-column name="fin_contract_id" referenced-column-name="id" nullable="false" />
</many-to-one>
</entity>
</doctrine-mapping>
```
|
test
|
multiple level discriminator mapping does not work in combination with a self referencing one to many relation given the following structure of entities we are experiencing fatal errors the moment we try to make use of the self referencing relation the multiple level discrimination on itself works fine we can query for the object directly or get it from a related entity but the moment we insert data in the parent id column in the database and try to call the ‘getparent ’ method we get the following fatal error php fatal error cannot instantiate abstract class financialbundle entity invoice line socialsupport abstractproduct in vendor doctrine orm lib doctrine orm mapping classmetadatainfo php on line an important detail is that we can use the self referencing relation when querying for the ‘addonmedicine’ entity so it looks like that when we use the combination of the self referencing relation and the multiple level discrimination something is not interpreted right and that doctrine is trying to instantiate the ‘middle layer’ being the abstractproduct these are the mapping files in use abstractline doctrine mapping xmlns xmlns xsi xsi schemalocation entity name financialbundle entity invoice line abstractline table invoiceline inheritance type joined abstractproduct doctrine mapping xmlns xmlns xsi xsi schemalocation entity name financialbundle entity invoice line socialsupport abstractproduct table invoicelinesocialsupportproduct repository class financialbundle repository invoice line socialsupport abstractproductrepository inheritance type joined outputorientedproduct doctrine mapping xmlns xmlns xsi xsi schemalocation entity name financialbundle entity invoice line socialsupport outputorientedproduct table invoicelinesocialsupportproductoutput repository class financialbundle repository invoice line socialsupport product outputorientedrepository addonmedicine doctrine mapping xmlns xmlns xsi xsi schemalocation entity name financialbundle entity invoice line 
addonmedicine table invoicelineaddonmedicine repository class financialbundle repository invoice line addonmedicinerepository
| 1
|
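The failure mode in the Doctrine record above can be illustrated outside PHP: with a two-level discriminator hierarchy, resolving a row's type against only the root-level map yields the abstract intermediate class, while a correct resolver must follow the sub-map until it reaches a concrete leaf. This Python sketch mirrors the discriminator values from the XML in the record but is not Doctrine's actual resolution code:

```python
# Root-level discriminator map (from AbstractLine) and the sub-map
# (from AbstractProduct), as declared in the record's XML.
ROOT_MAP = {
    "socialsupportproduct_effort": "AbstractProduct",
    "socialsupportproduct_output": "AbstractProduct",
    "addonmedicine": "AddonMedicine",
}
PRODUCT_MAP = {
    "socialsupportproduct_effort": "EffortOrientedProduct",
    "socialsupportproduct_output": "OutputOrientedProduct",
}
ABSTRACT = {"AbstractProduct"}  # classes that must never be instantiated

def resolve_naive(discriminator):
    # Stops at the root map: may return an abstract class (the reported crash).
    return ROOT_MAP[discriminator]

def resolve_recursive(discriminator):
    # Follows the sub-map whenever the root map lands on an abstract class.
    cls = ROOT_MAP[discriminator]
    if cls in ABSTRACT:
        cls = PRODUCT_MAP[discriminator]
    return cls

assert resolve_naive("socialsupportproduct_output") in ABSTRACT  # would be fatal in PHP
assert resolve_recursive("socialsupportproduct_output") == "OutputOrientedProduct"
```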
416,924
| 28,106,176,033
|
IssuesEvent
|
2023-03-31 01:00:51
|
llyu2003/web_clipper
|
https://api.github.com/repos/llyu2003/web_clipper
|
opened
|
(5 messages) A powerful and super-practical paper-reading tool: ReadPaper_野指针小李的博客-CSDN博客
|
documentation
|
> I recently came across a super useful [paper reading](https://so.csdn.net/so/search?q=%E8%AE%BA%E6%96%87%E9%98%85%E8%AF%BB&spm=1001.2101.3001.7020) tool, **ReadPaper**. It is a real blessing for researchers, and I would like to recommend it to everyone here.
For every researcher, reading papers is as routine and essential as eating and drinking. As I see it, reading a paper consists of two steps: 1) reading the paper; 2) understanding the paper. Most people find merely reading papers hard enough, especially newcomers, let alone understanding them. A good paper-reading tool can therefore make your research twice as productive. In the past few days I discovered an excellent, powerful, and practical paper-reading tool, **ReadPaper**, and I hope this post brings the tool to your attention and speeds you along your research path. Of course, this post is not a manual; rather, I use the two steps just mentioned to show where the tool is excellent and practical.
* **Official site:** [https://readpaper.com/](https://readpaper.com/)
* **Download link:** [https://readpaper.com/download](https://readpaper.com/download)
* **Tablet:** search for **ReadPaper** in the app store.
* **Key point:** it is `free`, `free`, `free`
I group the pain points of reading papers into the following: `finding papers`, `reading papers`, `taking notes`, `looking up references`, and `organizing papers`, and below I go through how `ReadPaper` makes each of them convenient.
1.1 Finding papers
-------
For beginners, finding papers involves two key questions: 1) I do not know which papers to read; 2) I do not know whether a given paper is any good.
What **ReadPaper** does well here is its powerful search system, which supports many query conditions, such as author name, year, and citation count.

In addition, **ReadPaper** thoughtfully shows whether a `PDF` is available along with the `CCF` rating, so you can quickly tell whether a paper's PDF can be downloaded directly and what level the venue is.
Best of all, this search system is hosted in China, so retrieval is extremely fast.
1.2 Reading papers
-------
### 1.2.1 Translation
For most people, translation has always been a big problem. Many students (myself included) copy any unfamiliar word or sentence and paste it into another web page to translate it. I also use [Adobe](https://so.csdn.net/so/search?q=Adobe&spm=1001.2101.3001.7020)'s PDF reader, so after pasting I still have to remove the line breaks myself, which is quite a hassle.
**ReadPaper** is very considerate here: it supports select-to-translate and offers the `Youdao`, `Tencent`, and `Google` translation engines, which you can switch at any time. You also do not have to re-select your favorite engine every time; the system remembers the engine you used last.

### 1.2.2 Forgetting a paper's publication year and journal/conference while reading
Many students, especially when facing an unfamiliar paper, easily forget partway through which year the paper is from, where it was published, and how it is rated, and then have to flip back to the first page for that information, only to forget where they left off.
With **ReadPaper**, you can find all of this information at a glance in the `Info` panel on the right.

### 1.2.3 Paragraphs and figures/tables on different pages, requiring scrolling back and forth
I believe many students have run into this problem: a paper's figures/tables and the text describing them are not on the same page (the most extreme paper I have seen had the text and the figure about 10 pages apart). Whether on paper or on screen, you have to scroll up and down repeatedly, which is a real hassle and hurts retention.
None of this is a problem in **ReadPaper**: the `Info` panel lists every figure and table in the paper, and you can click one to place it on the page, then drag, zoom in, and zoom out. Very practical. You can also summon a figure by clicking an inline button such as `Figure 1`.

Furthermore, if you prefer not to view figures or tables this way, or a figure is too large and you must jump to it, you can double-click the figure/table caption on the right to jump there, and **ReadPaper** kindly provides a `back to previous position` button so you can quickly return to where you just were.

1.3 Taking notes
-------
Taking notes while reading is another key issue. **ReadPaper** provides `highlighting`, `text notes`, and `figure/table notes`.
* **Highlighting and text notes:** both are driven by text selection. When you want to highlight or take a text note, simply select the text, pick a color from the popup, and start writing the note on the right; if you do not write a note, the highlight remains.

* **Figure/table notes:** figures and tables can be annotated with **ReadPaper** too. There is a screenshot button in the top-left corner; click it, drag to select the region you want, and take your note.

1.4 Looking up references
----------
I believe many people have been put off by Adobe's reference jumping, especially citations that use author names rather than numbers: after jumping, you still have to look up who the author is before you can tell which reference it was.
I really like what **ReadPaper** does here: click a reference link and it shows the `authors`, `title`, `journal/conference`, `year`, `citation count`, `rating`, and `whether a PDF is available`.

1.5 Organizing papers
--------
I have seen many students' folders of collected papers and, to be honest, I find them quite messy, the kind where a few days later you can no longer tell which paper is which. **ReadPaper** has an excellent management interface.

Many other tools actually offer the features above as well; what truly convinces me that this is a top-tier paper-reading tool is that it helps you understand a paper.
I have seen many students read a paper by **translating the entire paper from English into Chinese and calling it done once they have read the Chinese**, as if completing a chore. But do you understand the paper that way? Most likely not. **ReadPaper** solves this problem well. Its solution is the `Ten Questions for a Paper` proposed by **沈向阳** and **华刚**, which are:
1. What problem does the paper try to solve?
2. Is this a new problem?
3. What scientific hypothesis does the paper aim to verify?
4. What related work exists, and how can it be categorized? Who are the researchers worth following on this topic?
5. What is the key to the solution proposed in the paper?
6. How are the experiments in the paper designed?
7. What datasets are used for quantitative evaluation? Is the code open source?
8. Do the experiments and results in the paper adequately support the scientific hypothesis being verified?
9. What exactly does the paper contribute?
10. What is next? What work could be pursued further?
In the **ReadPaper** app, there is a `Study tasks` button on the right where you can record these ten questions. If you cannot answer them yourself, you can view other users' answers to the ten questions for the paper on the **ReadPaper** website, which goes a long way toward helping students understand each paper. And when you have a unique insight, you can publish your own answer and collect likes from others.


\[1\] ReadPaper paper reading.
[ReadPaper nanny-level tutorial] A step-by-step guide to mastering the paper-reading tool ReadPaper | How to read papers efficiently? How to take paper notes? \[EB/OL\]. https://www.bilibili.com/video/BV13T411V7rN?share\_source=copy\_web&vd\_source=be19fd8057d8c92fcca5f91666f9b006 2022-06-20.
|
1.0
|
(5 messages) A powerful and super-practical paper-reading tool: ReadPaper_野指针小李的博客-CSDN博客 - > I recently came across a super useful [paper reading](https://so.csdn.net/so/search?q=%E8%AE%BA%E6%96%87%E9%98%85%E8%AF%BB&spm=1001.2101.3001.7020) tool, **ReadPaper**. It is a real blessing for researchers, and I would like to recommend it to everyone here.
For every researcher, reading papers is as routine and essential as eating and drinking. As I see it, reading a paper consists of two steps: 1) reading the paper; 2) understanding the paper. Most people find merely reading papers hard enough, especially newcomers, let alone understanding them. A good paper-reading tool can therefore make your research twice as productive. In the past few days I discovered an excellent, powerful, and practical paper-reading tool, **ReadPaper**, and I hope this post brings the tool to your attention and speeds you along your research path. Of course, this post is not a manual; rather, I use the two steps just mentioned to show where the tool is excellent and practical.
* **Official site:** [https://readpaper.com/](https://readpaper.com/)
* **Download link:** [https://readpaper.com/download](https://readpaper.com/download)
* **Tablet:** search for **ReadPaper** in the app store.
* **Key point:** it is `free`, `free`, `free`
I group the pain points of reading papers into the following: `finding papers`, `reading papers`, `taking notes`, `looking up references`, and `organizing papers`, and below I go through how `ReadPaper` makes each of them convenient.
1.1 Finding papers
-------
For beginners, finding papers involves two key questions: 1) I do not know which papers to read; 2) I do not know whether a given paper is any good.
What **ReadPaper** does well here is its powerful search system, which supports many query conditions, such as author name, year, and citation count.

In addition, **ReadPaper** thoughtfully shows whether a `PDF` is available along with the `CCF` rating, so you can quickly tell whether a paper's PDF can be downloaded directly and what level the venue is.
Best of all, this search system is hosted in China, so retrieval is extremely fast.
1.2 Reading papers
-------
### 1.2.1 Translation
For most people, translation has always been a big problem. Many students (myself included) copy any unfamiliar word or sentence and paste it into another web page to translate it. I also use [Adobe](https://so.csdn.net/so/search?q=Adobe&spm=1001.2101.3001.7020)'s PDF reader, so after pasting I still have to remove the line breaks myself, which is quite a hassle.
**ReadPaper** is very considerate here: it supports select-to-translate and offers the `Youdao`, `Tencent`, and `Google` translation engines, which you can switch at any time. You also do not have to re-select your favorite engine every time; the system remembers the engine you used last.

### 1.2.2 Forgetting a paper's publication year and journal/conference while reading
Many students, especially when facing an unfamiliar paper, easily forget partway through which year the paper is from, where it was published, and how it is rated, and then have to flip back to the first page for that information, only to forget where they left off.
With **ReadPaper**, you can find all of this information at a glance in the `Info` panel on the right.

### 1.2.3 Paragraphs and figures/tables on different pages, requiring scrolling back and forth
I believe many students have run into this problem: a paper's figures/tables and the text describing them are not on the same page (the most extreme paper I have seen had the text and the figure about 10 pages apart). Whether on paper or on screen, you have to scroll up and down repeatedly, which is a real hassle and hurts retention.
None of this is a problem in **ReadPaper**: the `Info` panel lists every figure and table in the paper, and you can click one to place it on the page, then drag, zoom in, and zoom out. Very practical. You can also summon a figure by clicking an inline button such as `Figure 1`.

Furthermore, if you prefer not to view figures or tables this way, or a figure is too large and you must jump to it, you can double-click the figure/table caption on the right to jump there, and **ReadPaper** kindly provides a `back to previous position` button so you can quickly return to where you just were.

1.3 Taking notes
-------
Taking notes while reading is another key issue. **ReadPaper** provides `highlighting`, `text notes`, and `figure/table notes`.
* **Highlighting and text notes:** both are driven by text selection. When you want to highlight or take a text note, simply select the text, pick a color from the popup, and start writing the note on the right; if you do not write a note, the highlight remains.

* **Figure/table notes:** figures and tables can be annotated with **ReadPaper** too. There is a screenshot button in the top-left corner; click it, drag to select the region you want, and take your note.

1.4 Looking up references
----------
I believe many people have been put off by Adobe's reference jumping, especially citations that use author names rather than numbers: after jumping, you still have to look up who the author is before you can tell which reference it was.
I really like what **ReadPaper** does here: click a reference link and it shows the `authors`, `title`, `journal/conference`, `year`, `citation count`, `rating`, and `whether a PDF is available`.

1.5 Organizing papers
--------
I have seen many students' folders of collected papers and, to be honest, I find them quite messy, the kind where a few days later you can no longer tell which paper is which. **ReadPaper** has an excellent management interface.

Many other tools actually offer the features above as well; what truly convinces me that this is a top-tier paper-reading tool is that it helps you understand a paper.
I have seen many students read a paper by **translating the entire paper from English into Chinese and calling it done once they have read the Chinese**, as if completing a chore. But do you understand the paper that way? Most likely not. **ReadPaper** solves this problem well. Its solution is the `Ten Questions for a Paper` proposed by **沈向阳** and **华刚**, which are:
1. What problem does the paper try to solve?
2. Is this a new problem?
3. What scientific hypothesis does the paper aim to verify?
4. What related work exists, and how can it be categorized? Who are the researchers worth following on this topic?
5. What is the key to the solution proposed in the paper?
6. How are the experiments in the paper designed?
7. What datasets are used for quantitative evaluation? Is the code open source?
8. Do the experiments and results in the paper adequately support the scientific hypothesis being verified?
9. What exactly does the paper contribute?
10. What is next? What work could be pursued further?
In the **ReadPaper** app, there is a `Study tasks` button on the right where you can record these ten questions. If you cannot answer them yourself, you can view other users' answers to the ten questions for the paper on the **ReadPaper** website, which goes a long way toward helping students understand each paper. And when you have a unique insight, you can publish your own answer and collect likes from others.


\[1\] ReadPaper paper reading.
[ReadPaper nanny-level tutorial] A step-by-step guide to mastering the paper-reading tool ReadPaper | How to read papers efficiently? How to take paper notes? \[EB/OL\]. https://www.bilibili.com/video/BV13T411V7rN?share\_source=copy\_web&vd\_source=be19fd8057d8c92fcca5f91666f9b006 2022-06-20.
|
non_test
|
强大且超实用的论文阅读工具——readpaper 野指针小李的博客 csdn博客 最近突然发现了一款超好用的 readpaper ,简直是科研人的福音,在这里推荐给大家。 对于每个科研工作者而言,阅读论文就像吃饭喝水一样同款重要的事情。在我看来,阅读论文是分为两个步骤: 看论文; 理解论文。而大部分的人看论文都十分吃力,尤其是刚刚尝试入门的同学,就更别提理解论文了。所以一个好的论文阅读工具能够让你在科研路上事半功倍。我这几天就发现了一款优秀的,强大的,实用的论文阅读工具, readpaper ,我希望这篇博客能够让你关注到这个工具,让你更快速的进入科研道路。当然,我这篇博客不是一篇说明文档,而是我从刚刚提到的两个步骤来说明该工具优秀且实用的地方。 官网: 下载链接: 平板: 可以在应用商城中搜索 readpaper 。 重点: 免费的 , 免费的 , 免费的 看论文过程中的难点我一共梳理为以下几个: 找论文 , 读论文 , 记笔记 , 搜参考文献 , 文献归档 ,并在下面的内容里面以此梳理 readpaper 在这几点上很方便的地方。 找论文 对于初学者而言,找论文会涉及到两个很关键的问题: )我不知道该读什么论文; 我不知道这个论文好不好。 而 readpaper 在这点上做得很好的就是检索系统很强大,并且可以有多种查询条件,比如检索作者名,年份,引用量等。 同时, readpaper 还很贴心的给了是否有 pdf ,以及 ccf 评分,可以快速了解这篇论文能否直接下载 pdf,以及会议级别。 最重要的是,这是我国的检索系统,所以检索速度巨快无比。 读论文 翻译 对于大部分人而言,翻译一直是一个很大的问题。许多同学(比如我)都是遇到不懂的词语或者句子,复制下来再粘贴到另一个网页中进行翻译。同时我用的是 的 pdf 阅读器,所以粘贴过去还要自己回车,就很麻烦。 而 readpaper 在这一点上做的就十分贴心,支持划词翻译,同时还有 有道 , 腾讯 和 谷歌 翻译引擎,可以随时切换。并且不用担心每次都要切换自己喜欢的翻译引擎,系统会自动记录你上次使用的翻译引擎。 看论文过程中忘记该论文发表的年份和期刊 会议 许多同学,尤其是面对一篇不熟悉的论文,很容易看着看着就忘记这篇论文是哪一年的,出自哪里,等级怎样,然后就要翻回首页找信息,看完后又忘记之前看到哪里了。 而 readpaper 可以直接在右侧 资料 栏中一下就找到所有的信息。 段落和图表不在一页,要来回上下翻 我相信许多同学都遇到过这个问题,就是论文的图表和描述文字不在同一页(我遇到过最夸张的一篇论文, ),无论是打印下来还是在电脑上看都要来回上下滑动,会特别麻烦而且影响记忆。 在 readpaper 中这些都不是问题,在 资料 一栏中可以翻阅到该论文中的所有图和表,并且可以点击将其放在页面上,还可以拖动、放大、缩小,特别实用。同时,也可以通过点击类似于 figure 等按钮来召唤出这篇论文。 同时,如果你不想用这种方式看图或者表,或者图表太大了必须跳转过去看,那么可以双击右侧图表的标题跳转,而且 readpaper 还很温馨的提供了一个 回到刚才位置 的按钮,方便你快速切回刚刚的位置。 记笔记 看论文记笔记也是一个很关键的问题, readpaper 提供了 高亮 , 记录文本笔记 和 记录图表笔记 的功能。 高亮和记录文本笔记: 高亮和记录文本笔记都是靠划词实现的。当你想要高亮或者记录文本笔记的时候,只需要选中文本,点击中间按钮选择颜色,在右侧就可以开始记录笔记了,如果不记录,也会有高亮。 记录图表笔记: 图表也可以用 readpaper 记录。在左上角有个截图按钮,点击按钮,框选你想要的区域,即可做笔记。 查阅参考文献 我相信许多人被 adobe 的参考文献跳转给恶心过,尤其是那种引用不是数字,而是作者名的那种引用,跳转过去还要再自己查作者是谁,才能找到这是哪一篇参考文献。 而 readpaper 在这点做的我很喜欢,点击参考文献链接,会出 作者 , 论文名 , 期刊 会议名 , 年份 , 引用量 , 评级 ,以及 是否有pdf 。 文献归档 我看过许多同学收藏论文的文件夹,有一说一,我个人觉得十分凌乱,就是那种可能你过个几天就会忘了哪一篇是哪一篇了。而 readpaper 有很好的管理界面。 上面的这些功能实际上许多软件也都有,而真正让我觉得这款软件是顶尖论文阅读工具原因在于它能够帮你理解这篇论文。 我看过许多同学在读论文的时候就是 将论文从英文全文翻译为中文,然后看完中文就完了 ,像极了在做任务,但是这样你就理解了这篇论文了么?我想大概率是没有的。而 readpaper 能够很好的帮你解决这个问题。 readpaper 的解决方案是由 沈向阳 老师和 华刚 
老师提出的 论文十问 ,这十个问题分别是: 论文试图解决什么问题? 这是否是一个新的问题? 这篇文章要验证一个什么科学假设? 有哪些相关研究?如何归类?谁是这一课题在领域内值得关注的研究员? 论文中提到的解决方案之关键是什么? 论文中的实验是如何设计的? 用于定量评估的数据集是什么?代码有没有开源? 论文中的实验及结果有没有很好地支持需要验证的科学假设? 这篇论文到底有什么贡献? 下一步呢?有什么工作可以继续深入? 在 readpaper 的软件中,右侧有个 学习任务 按钮,在里面可以记录这十个问题。同时如果自己回答不了,那么在 readpaper 官网,可以查看其它用户对这篇论文这十个问题的回答,能够很大程度上帮助到同学们理解每一篇论文。同时,当你有独到的看法时,你也可以发表你的看法,收获别人的点赞。 readpaper论文阅读 【readpaper保姆级教程】手把手教你搞定论文阅读神器readpaper 论文如何高效读懂?论文笔记要怎么做?
| 0
|
188,519
| 15,164,534,892
|
IssuesEvent
|
2021-02-12 13:52:46
|
arturo-lang/arturo
|
https://api.github.com/repos/arturo-lang/arturo
|
closed
|
[Iterators\loop] add documentation example for .forever
|
documentation easy library todo
|
[Iterators\loop] add documentation example for .forever
https://github.com/arturo-lang/arturo/blob/6c4cb5aae21476de35ac125a5b2763655d02514f/src/library/Iterators.nim#L248
```text
"action" : {Block}
},
# TODO(Iterators\loop) add documentation example for .forever
# labels: library,documentation,easy
attrs = {
"with" : ({Literal},"use given index"),
"forever" : ({Boolean},"cycle through collection infinitely")
},
returns = {Nothing},
example = """
```
64812005f18e259b833932d8d63ba24b44ac69ef
|
1.0
|
[Iterators\loop] add documentation example for .forever - [Iterators\loop] add documentation example for .forever
https://github.com/arturo-lang/arturo/blob/6c4cb5aae21476de35ac125a5b2763655d02514f/src/library/Iterators.nim#L248
```text
"action" : {Block}
},
# TODO(Iterators\loop) add documentation example for .forever
# labels: library,documentation,easy
attrs = {
"with" : ({Literal},"use given index"),
"forever" : ({Boolean},"cycle through collection infinitely")
},
returns = {Nothing},
example = """
```
64812005f18e259b833932d8d63ba24b44ac69ef
|
non_test
|
add documentation example for forever add documentation example for forever text action block todo iterators loop add documentation example for forever labels library documentation easy attrs with literal use given index forever boolean cycle through collection infinitely returns nothing example
| 0
|
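The `.forever` attribute documented in the record above ("cycle through collection infinitely") can be sketched with a bounded take limit so the example terminates; the `take` parameter is added here only for illustration and is not part of Arturo's API:

```python
from itertools import cycle, islice

# Cycle through a collection "infinitely", taking only the first `take`
# items so the sketch terminates; this mirrors the documented semantics.
def forever(collection, take):
    return list(islice(cycle(collection), take))

assert forever(["a", "b", "c"], 7) == ["a", "b", "c", "a", "b", "c", "a"]
```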
162,536
| 25,551,139,439
|
IssuesEvent
|
2022-11-29 23:55:10
|
usds/justice40-tool
|
https://api.github.com/repos/usds/justice40-tool
|
opened
|
Add the source for the census tract information and demographics aka the census
|
design-needed 1.1
|
**Describe the bug**
KK to propose content to Sharm and Natasha
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
|
1.0
|
Add the source for the census tract information and demographics aka the census - **Describe the bug**
KK to propose content to Sharm and Natasha
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
|
non_test
|
add the source for the census tract information and demographics aka the census describe the bug kk to propose content to sharm and natasha to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os browser version smartphone please complete the following information device os browser version additional context add any other context about the problem here
| 0
|
309,863
| 26,680,021,702
|
IssuesEvent
|
2023-01-26 16:57:03
|
EddieHubCommunity/LinkFree
|
https://api.github.com/repos/EddieHubCommunity/LinkFree
|
closed
|
New Testimonial for Francesco Ciulla
|
testimonial
|
### Name
FrancescoXX
### Title
Awesome Mentor
### Description
Francesco is an AMAZING mentor when it comes to community, DevOps, Devrel, and an overall friendly person. I love that he goes out of his way to support and help other developers. His way of constantly challenging himself to grow is very inspiring, and it is something I am constantly in awe of!
|
1.0
|
New Testimonial for Francesco Ciulla - ### Name
FrancescoXX
### Title
Awesome Mentor
### Description
Francesco is an AMAZING mentor when it comes to community, DevOps, Devrel, and an overall friendly person. I love that he goes out of his way to support and help other developers. His way of constantly challenging himself to grow is very inspiring, and it is something I am constantly in awe of!
|
test
|
new testimonial for francesco ciulla name francescoxx title awesome mentor description francesco is an amazing mentor when it comes to community devops devrel and an overall friendly person i love that he goes out of his way to support and help other developers his way of constantly challenging himself to grow is very inspiring and it is something i am constantly in awe of
| 1
|
341,569
| 30,592,894,225
|
IssuesEvent
|
2023-07-21 18:45:24
|
iotaledger/iota-sdk
|
https://api.github.com/repos/iotaledger/iota-sdk
|
closed
|
QA: reach 75% coverage
|
t-test m-all
|
1.3% each :)
- [x] Thibault https://coveralls.io/builds/61233706 (0.2%), https://coveralls.io/builds/61240890 (0.2%), https://coveralls.io/builds/61274869 (0.2%), https://coveralls.io/builds/61320781 (0.3%), https://coveralls.io/builds/61327310 (0,07%), https://coveralls.io/builds/61409496 (0.4%)
- [x] AlexC https://coveralls.io/builds/61223133 (1.2%), https://coveralls.io/builds/61351390 (0.1%)
- [ ] AlexS
- [x] Thoralf https://coveralls.io/builds/60917230 (1.0%), https://coveralls.io/builds/61019584 (0.2%), https://coveralls.io/builds/61247677 (0.02%) https://coveralls.io/builds/61277690 (0.1%)
- [ ] Pawel https://coveralls.io/builds/61490033 (0.4%) https://coveralls.io/builds/61494944 (0.02%)
- [ ] Brord https://coveralls.io/builds/61418775 (0.3%)
|
1.0
|
QA: reach 75% coverage - 1.3% each :)
- [x] Thibault https://coveralls.io/builds/61233706 (0.2%), https://coveralls.io/builds/61240890 (0.2%), https://coveralls.io/builds/61274869 (0.2%), https://coveralls.io/builds/61320781 (0.3%), https://coveralls.io/builds/61327310 (0.07%), https://coveralls.io/builds/61409496 (0.4%)
- [x] AlexC https://coveralls.io/builds/61223133 (1.2%), https://coveralls.io/builds/61351390 (0.1%)
- [ ] AlexS
- [x] Thoralf https://coveralls.io/builds/60917230 (1.0%), https://coveralls.io/builds/61019584 (0.2%), https://coveralls.io/builds/61247677 (0.02%) https://coveralls.io/builds/61277690 (0.1%)
- [ ] Pawel https://coveralls.io/builds/61490033 (0.4%) https://coveralls.io/builds/61494944 (0.02%)
- [ ] Brord https://coveralls.io/builds/61418775 (0.3%)
|
test
|
qa reach coverage each thibault alexc alexs thoralf pawel brord
| 1
|
191,584
| 14,594,787,243
|
IssuesEvent
|
2020-12-20 08:00:41
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
ZupIT/ritchie-server: server/fph/provider_handler_test.go; 3 LoC
|
fresh test tiny
|
Found a possible issue in [ZupIT/ritchie-server](https://www.github.com/ZupIT/ritchie-server) at [server/fph/provider_handler_test.go](https://github.com/ZupIT/ritchie-server/blob/b20ba9b2113f8aa43d315266ae475afd9e802e8a/server/fph/provider_handler_test.go#L162-L164)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to c is reassigned at line 163
[Click here to see the code in its original context.](https://github.com/ZupIT/ritchie-server/blob/b20ba9b2113f8aa43d315266ae475afd9e802e8a/server/fph/provider_handler_test.go#L162-L164)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, c := range got.Commands {
commands[c.Parent+c.Usage] = &c
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b20ba9b2113f8aa43d315266ae475afd9e802e8a
|
1.0
|
ZupIT/ritchie-server: server/fph/provider_handler_test.go; 3 LoC -
Found a possible issue in [ZupIT/ritchie-server](https://www.github.com/ZupIT/ritchie-server) at [server/fph/provider_handler_test.go](https://github.com/ZupIT/ritchie-server/blob/b20ba9b2113f8aa43d315266ae475afd9e802e8a/server/fph/provider_handler_test.go#L162-L164)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to c is reassigned at line 163
[Click here to see the code in its original context.](https://github.com/ZupIT/ritchie-server/blob/b20ba9b2113f8aa43d315266ae475afd9e802e8a/server/fph/provider_handler_test.go#L162-L164)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, c := range got.Commands {
commands[c.Parent+c.Usage] = &c
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b20ba9b2113f8aa43d315266ae475afd9e802e8a
|
test
|
zupit ritchie server server fph provider handler test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to c is reassigned at line click here to show the line s of go which triggered the analyzer go for c range got commands commands c leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 1
|
214,685
| 16,605,015,884
|
IssuesEvent
|
2021-06-02 01:57:28
|
bosagora/agora
|
https://api.github.com/repos/bosagora/agora
|
closed
|
Network tests: Remove override of `prepareNominatingSet`
|
difficulty-easy type-testing
|
In some of the network unit tests the test config field `txs_to_nominate` is used in an overridden `prepareNominatingSet`, which affects whether it is ready to nominate and how many transactions will be included in the nominating set. This behavior is too different from the production code, so the tests are less useful. Most tests no longer use this method but `setTimeFor(Height)`, which advances the current time to the network time for nomination at the given height.
The remaining tests should be ported to use this overridden clock method.
|
1.0
|
Network tests: Remove override of `prepareNominatingSet` - In some of the network unit tests the test config field `txs_to_nominate` is used in an overridden `prepareNominatingSet`, which affects whether it is ready to nominate and how many transactions will be included in the nominating set. This behavior is too different from the production code, so the tests are less useful. Most tests no longer use this method but `setTimeFor(Height)`, which advances the current time to the network time for nomination at the given height.
The remaining tests should be ported to use this overridden clock method.
|
test
|
network tests remove override of preparenominatingset in some of the network unit tests the test config field txs to nominate is used in an overridden preparenominatingset which effects if it is ready to nominate and how many transactions will be included in the nominating set this behavior is too different to the production code so the tests are less useful most tests no longer use this method but settimefor height which advances the current time to the network time for nomination at the given height the remaining tests should be ported to use this overridden clock method
| 1
|
67,097
| 7,035,061,000
|
IssuesEvent
|
2017-12-27 20:51:15
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed tests on master: Jepsen/Jepsen: JepsenBank: JepsenBank/majority-ring+subcritical-skews
|
Robot test-failure
|
The following tests appear to have failed:
[#455794](https://teamcity.cockroachdb.com/viewLog.html?buildId=455794):
```
--- FAIL: Jepsen/Jepsen: JepsenBank: JepsenBank/majority-ring+subcritical-skews (13.144s)
None
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed tests on master: Jepsen/Jepsen: JepsenBank: JepsenBank/majority-ring+subcritical-skews - The following tests appear to have failed:
[#455794](https://teamcity.cockroachdb.com/viewLog.html?buildId=455794):
```
--- FAIL: Jepsen/Jepsen: JepsenBank: JepsenBank/majority-ring+subcritical-skews (13.144s)
None
```
Please assign, take a look and update the issue accordingly.
|
test
|
teamcity failed tests on master jepsen jepsen jepsenbank jepsenbank majority ring subcritical skews the following tests appear to have failed fail jepsen jepsen jepsenbank jepsenbank majority ring subcritical skews none please assign take a look and update the issue accordingly
| 1
|
439,263
| 30,687,627,485
|
IssuesEvent
|
2023-07-26 13:20:41
|
IdEvEbI/IdEvEbI
|
https://api.github.com/repos/IdEvEbI/IdEvEbI
|
closed
|
Machine setup notes
|
documentation
|
# Machine setup notes
On July 26, 2023, I reinstalled macOS Ventura v13.5; these notes record the installation process for future reference.
## 1. Terminal setup
1. zsh colors
Add the following configuration to `~/.zshrc`:
```bash
# Tell ls to be colourful
export CLICOLOR=1
export LSCOLORS=Exfxcxdxbxegedabagacad
# Tell grep to highlight matches
export GREP_OPTIONS='--color=auto'
# alias
alias ll='ls -l'
```
2. [brew](https://brew.sh/index_zh-cn)
```bash
brew install wget
```
3. [git](https://git-scm.com/download)
- Global configuration
```bash
# configure the user name
git config --global user.name "username"
# configure the email address
git config --global user.email "username@163.com"
```
- `.gitconfig`
```ini
[alias]
# one-line log
l = log --pretty=format:"%C(yellow)%h\\ %ad%Cred%d\\ %Creset%s%Cblue\\ [%cn]" --decorate --date=short
a = add
ap = add -p
c = commit --verbose
ca = commit -a --verbose
cm = commit -m
cam = commit -a -m
m = commit --amend --verbose
d = diff
ds = diff --stat
dc = diff --cached
s = status -s
co = checkout
cob = checkout -b
# list branches sorted by last modified
b = branch
ba = "!git for-each-ref --sort='-authordate' --format='%(authordate)%09%(objectname:short)%09%(refname)' refs/heads | sed -e 's-refs/heads/--'"
# list aliases
la = "!git config -l | grep alias | cut -c 7-"
```
4. [Node.js](https://nodejs.org/)
## 2. Installing everyday tools and software
1. Office software
1. [ClashX](https://www.miyun.la/user)
2. [Google Chrome](https://www.google.com/intl/zh-CN/chrome/)
3. [Typora](https://typora.io/)
4. [XMind](https://xmind.cn/)
5. [Visual Studio Code](https://code.visualstudio.com/)
6. [Microsoft Office](https://account.microsoft.com/services/microsoft365/)
7. [OBS](https://obsproject.com/)
8. [IINA](https://iina.io/)
9. [WeCom (企业微信)](https://work.weixin.qq.com/)
10. [Baidu Netdisk (百度云盘)](https://pan.baidu.com/)
11. [Feishu (飞书)](https://www.feishu.cn/)
12. [BetterZip](https://macitbetter.com/)
13. [Bartender](https://www.macbartender.com/)
14. [VMWare Fusion](https://www.vmware.com/hk/products/fusion.html)
2. App Store apps
1. ScreenBrush + Xnip + iPic
2. Magnet
3. ASTimer
4. NetEase Mail Master (网易邮箱大师)
5. WeChat (微信)
6. Evernote (印象笔记)
7. Tencent Meeting (腾讯会议)
8. SketchbookPro
|
1.0
|
Machine setup notes - # Machine setup notes
On July 26, 2023, I reinstalled macOS Ventura v13.5; these notes record the installation process for future reference.
## 1. Terminal setup
1. zsh colors
Add the following configuration to `~/.zshrc`:
```bash
# Tell ls to be colourful
export CLICOLOR=1
export LSCOLORS=Exfxcxdxbxegedabagacad
# Tell grep to highlight matches
export GREP_OPTIONS='--color=auto'
# alias
alias ll='ls -l'
```
2. [brew](https://brew.sh/index_zh-cn)
```bash
brew install wget
```
3. [git](https://git-scm.com/download)
- Global configuration
```bash
# configure the user name
git config --global user.name "username"
# configure the email address
git config --global user.email "username@163.com"
```
- `.gitconfig`
```ini
[alias]
# one-line log
l = log --pretty=format:"%C(yellow)%h\\ %ad%Cred%d\\ %Creset%s%Cblue\\ [%cn]" --decorate --date=short
a = add
ap = add -p
c = commit --verbose
ca = commit -a --verbose
cm = commit -m
cam = commit -a -m
m = commit --amend --verbose
d = diff
ds = diff --stat
dc = diff --cached
s = status -s
co = checkout
cob = checkout -b
# list branches sorted by last modified
b = branch
ba = "!git for-each-ref --sort='-authordate' --format='%(authordate)%09%(objectname:short)%09%(refname)' refs/heads | sed -e 's-refs/heads/--'"
# list aliases
la = "!git config -l | grep alias | cut -c 7-"
```
4. [Node.js](https://nodejs.org/)
## 2. Installing everyday tools and software
1. Office software
1. [ClashX](https://www.miyun.la/user)
2. [Google Chrome](https://www.google.com/intl/zh-CN/chrome/)
3. [Typora](https://typora.io/)
4. [XMind](https://xmind.cn/)
5. [Visual Studio Code](https://code.visualstudio.com/)
6. [Microsoft Office](https://account.microsoft.com/services/microsoft365/)
7. [OBS](https://obsproject.com/)
8. [IINA](https://iina.io/)
9. [WeCom (企业微信)](https://work.weixin.qq.com/)
10. [Baidu Netdisk (百度云盘)](https://pan.baidu.com/)
11. [Feishu (飞书)](https://www.feishu.cn/)
12. [BetterZip](https://macitbetter.com/)
13. [Bartender](https://www.macbartender.com/)
14. [VMWare Fusion](https://www.vmware.com/hk/products/fusion.html)
2. App Store apps
1. ScreenBrush + Xnip + iPic
2. Magnet
3. ASTimer
4. NetEase Mail Master (网易邮箱大师)
5. WeChat (微信)
6. Evernote (印象笔记)
7. Tencent Meeting (腾讯会议)
8. SketchbookPro
|
non_test
|
装机记录 装机记录备忘 ,重新安装系统 macos ventura ,记录安装过程以留备忘。 一 终端设置 zsh 配色 在 zshrc 中增加以下配置内容: bash tell ls to be colourful export clicolor export lscolors exfxcxdxbxegedabagacad tell grep to highlight matches export grep options color auto alias alias ll ls l bash brew install wget 全局配置 bash 配置用户名 git config global user name username 配置邮箱 git config global user email username com gitconfig ini one line log l log pretty format c yellow h ad cred d creset s cblue decorate date short a add ap add p c commit verbose ca commit a verbose cm commit m cam commit a m m commit amend verbose d diff ds diff stat dc diff cached s status s co checkout cob checkout b list branches sorted by last modified b branch ba git for each ref sort authordate format authordate objectname short refname refs heads sed e s refs heads list aliases la git config l grep alias cut c 二 常规工具软件安装 办公软件 app store 软件 screenbrush xnip ipic magnet astimer 网易邮箱大师 微信 印象笔记 腾讯会议 sketchbookpro
| 0
|
67,159
| 7,036,999,612
|
IssuesEvent
|
2017-12-28 12:14:05
|
edenlabllc/ehealth.api
|
https://api.github.com/repos/edenlabllc/ehealth.api
|
closed
|
Change the name "Клініки" ("Clinics") to "Медичні заклади" ("Medical institutions") in the ehealth admin panel
|
FE status/test
|
The ehealth admin panel on the dev environment.
There is a list of sections on the left.
Change the name "Клініки" to "Медичні заклади".
|
1.0
|
Change the name "Клініки" ("Clinics") to "Медичні заклади" ("Medical institutions") in the ehealth admin panel - The ehealth admin panel on the dev environment.
There is a list of sections on the left.
Change the name "Клініки" to "Медичні заклади".
|
test
|
змінити назву клініки на медичні заклади в адмінці ehealth адмінка ehealth на dev середовищі зліва є перелік розділів змінити назву клініки на медичні заклади
| 1
|
25,778
| 11,217,505,947
|
IssuesEvent
|
2020-01-07 09:24:56
|
status-im/status-react
|
https://api.github.com/repos/status-im/status-react
|
closed
|
Timestamps for messages don't match real time if the user sends a message from a device with a custom time
|
bug chat medium-severity security
|
### Description
*Type*: Bug
*Summary*: when anybody sends a message from a device with a custom time (not another timezone, but a custom time, i.e. it is 10:00 UTC and the user's clock is set to 09:45), timestamps for such messages are shown with a difference of minutes from the nearest timezone, so in chat it looks like wrong message ordering (but it is not; the message timestamps are wrong)
#### Expected behavior
timestamps for messages are set according to real message ordering and receiver time settings;
#### Actual behavior
timestamps for messages sent from a device with a custom time don't match real time
<img width="568" alt="us_t" src="https://user-images.githubusercontent.com/4557972/47223441-9494f280-d3b9-11e8-88d6-98bb1bd9a4db.png">
### Reproduction
*Prerequisites:* for `User A` timezone and time is set automatically; for `User B` time is custom and set 10 minutes earlier)
- `User A`: open Status, join any public channel (i.e. #chuone), send message
- `User B`: open Status, join any public channel (i.e. #chuone), send message
- `User A`: check chat history
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: nightly 19/10/2018
* Operating System: Android, iOS, desktop
|
True
|
Timestamps for messages don't match real time if the user sends a message from a device with a custom time - ### Description
*Type*: Bug
*Summary*: when anybody sends a message from a device with a custom time (not another timezone, but a custom time, i.e. it is 10:00 UTC and the user's clock is set to 09:45), timestamps for such messages are shown with a difference of minutes from the nearest timezone, so in chat it looks like wrong message ordering (but it is not; the message timestamps are wrong)
#### Expected behavior
timestamps for messages are set according to real message ordering and receiver time settings;
#### Actual behavior
timestamps for messages sent from a device with a custom time don't match real time
<img width="568" alt="us_t" src="https://user-images.githubusercontent.com/4557972/47223441-9494f280-d3b9-11e8-88d6-98bb1bd9a4db.png">
### Reproduction
*Prerequisites:* for `User A` timezone and time is set automatically; for `User B` time is custom and set 10 minutes earlier)
- `User A`: open Status, join any public channel (i.e. #chuone), send message
- `User B`: open Status, join any public channel (i.e. #chuone), send message
- `User A`: check chat history
### Additional Information
[comment]: # (Please do your best to fill this out.)
* Status version: nightly 19/10/2018
* Operating System: Android, iOS, desktop
|
non_test
|
timestamps for messages doesn t match real time if user send message from device with custom time description type bug summary when anybody send a message from device with custom time not another timezone but custom time i e when there is utc and user set timestamps for such messages are shown with difference in minutes from nearest timezone so in chat it looks like wrong message ordering but it is not wrong are message timestamps expected behavior timestamps for messages are set according to real message ordering and receiver time settings actual behavior timestamps for messages sent from device with custom time don t match real time img width alt us t src reproduction prerequisites for user a timezone and time is set automatically for user b time is custom and set minutes earlier user a open status join any public channel i e chuone send message user b open status join any public channel i e chuone send message user a check chat history additional information please do your best to fill this out status version nightly operating system android ios desktop
| 0
|
111,292
| 9,525,441,396
|
IssuesEvent
|
2019-04-28 12:24:03
|
z80andrew/SerialDisk
|
https://api.github.com/repos/z80andrew/SerialDisk
|
closed
|
Virtual disk sizes above 31MiB do not work correctly
|
bug framework dependant ready for testing self contained
|
Specifying a disk size greater than 31 in the application parameters will cause the virtual disk drive to behave incorrectly on the Atari ST. Reading seems to work but writing causes various errors.
|
1.0
|
Virtual disk sizes above 31MiB do not work correctly - Specifying a disk size greater than 31 in the application parameters will cause the virtual disk drive to behave incorrectly on the Atari ST. Reading seems to work but writing causes various errors.
|
test
|
virtual disk sizes above do not work correctly specifying a disk size greater than in the application parameters will cause the virtual disk drive to behave incorrectly on the atari st reading seems to work but writing causes various errors
| 1
|
95,752
| 16,105,264,453
|
IssuesEvent
|
2021-04-27 14:16:25
|
crossplane/crossplane
|
https://api.github.com/repos/crossplane/crossplane
|
closed
|
docs: User guide for enabling multi-tenant support
|
credentials docs enhancement security
|
This question has come up a couple of times, so it would be great to write a user guide or doc about how to enable it.
Scenario: A platform team is running a single crossplane cluster. They want to enable multi-tenancy where each application team (or customer) that they service gets their own namespace and isolation from other teams/customers. An important part of this is that each namespace gets its own `ProviderConfig` so they are each using credentials/permissions in the cloud provider that are specific for them. Team A should NOT be able to use the credentials (`ProviderConfig`) for Team B.
@hasheddan had some great suggestions on this:
1. Hardcode the providerConfig in a composition. This is not optimal in some way because it means that anywhere that composition is used will use those same credentials. However, if you are thinking in an "object-based" auth model rather than a "user-based" auth model, it actually makes sense. Least privilege looks like making the provider use credentials scoped to the operations it needs for that resource or group of resources. Then you control who can create those resources with k8s RBAC. IMO this is a good model because you are not crossing the responsibilities of k8s user control and cloud provider user control. K8s is how you manage users RBAC, the cloud provider is how you handle provider RBAC. This is the area I really feel like we need to educate more folks on.
1. You can expose a providerConfig field on the XRD / XRC then write OPA (or other policy) around what values can be supplied in what namespace (i.e. the claim is created at the namespace scope, so you can write a policy that says for `MyCoolClusterClaim` in namespace `team-1` the only values acceptable for `spec.providerConfig.name` are `team-1-pc`)
1. We have also talked about adding support for "patching from data sources" (@negz is working on this area: https://github.com/crossplane/crossplane/issues/2099). This would mean you could create a composition that had a path that did something like interpolating the namespace into the providerConfig name. @muvaf describes how this is already possible with current mechanisms here: https://github.com/crossplane/crossplane/issues/2099#issuecomment-768301167
@muvaf suggestion in https://github.com/crossplane/crossplane/issues/2099#issuecomment-768301167 is pretty slick and works with current v1.0 functionality and without any additional tooling.
|
True
|
docs: User guide for enabling multi-tenant support - This question has come up a couple of times, so it would be great to write a user guide or doc about how to enable it.
Scenario: A platform team is running a single crossplane cluster. They want to enable multi-tenancy where each application team (or customer) that they service gets their own namespace and isolation from other teams/customers. An important part of this is that each namespace gets its own `ProviderConfig` so they are each using credentials/permissions in the cloud provider that are specific for them. Team A should NOT be able to use the credentials (`ProviderConfig`) for Team B.
@hasheddan had some great suggestions on this:
1. Hardcode the providerConfig in a composition. This is not optimal in some way because it means that anywhere that composition is used will use those same credentials. However, if you are thinking in an "object-based" auth model rather than a "user-based" auth model, it actually makes sense. Least privilege looks like making the provider use credentials scoped to the operations it needs for that resource or group of resources. Then you control who can create those resources with k8s RBAC. IMO this is a good model because you are not crossing the responsibilities of k8s user control and cloud provider user control. K8s is how you manage users RBAC, the cloud provider is how you handle provider RBAC. This is the area I really feel like we need to educate more folks on.
1. You can expose a providerConfig field on the XRD / XRC then write OPA (or other policy) around what values can be supplied in what namespace (i.e. the claim is created at the namespace scope, so you can write a policy that says for `MyCoolClusterClaim` in namespace `team-1` the only values acceptable for `spec.providerConfig.name` are `team-1-pc`)
1. We have also talked about adding support for "patching from data sources" (@negz is working on this area: https://github.com/crossplane/crossplane/issues/2099). This would mean you could create a composition that had a path that did something like interpolating the namespace into the providerConfig name. @muvaf describes how this is already possible with current mechanisms here: https://github.com/crossplane/crossplane/issues/2099#issuecomment-768301167
@muvaf suggestion in https://github.com/crossplane/crossplane/issues/2099#issuecomment-768301167 is pretty slick and works with current v1.0 functionality and without any additional tooling.
|
non_test
|
docs user guide for enabling multi tenant support this question has come up a couple of times so it would be great to write a user guide or doc about how to enable it scenario a platform team is running a single crossplane cluster they want to enable multi tenancy where each application team or customer that they service gets their own namespace and isolation from other teams customers an important part of this is that each namespace gets its own providerconfig so they are each using credentials permissions in the cloud provider that are specific for them team a should not be able to use the credentials providerconfig for team b hasheddan had some great suggestions on this hardcode the providerconfig in a composition this is not optimal in some way because it means that anywhere that composition is used will use those same credentials however if you are thinking in an object based auth model rather than a user based auth model it actually makes sense least privilege looks like making the provider use credentials scoped to the operations it needs for that resource or group of resources then you control who can create those resources with rbac imo this is a good model because you are not crossing the responsibilities of user control and cloud provider user control is how you manage users rbac the cloud provider is how you handle provider rbac this is the area i really feel like we need to educate more folks on you can expose a providerconfig field on the xrd xrc then write opa or other policy around what values can be supplied in what namespace i e the claim is created at the namespace scope so you can write a policy that says for mycoolclusterclaim in namespace team the only values acceptable for spec providerconfig name are team pc we have also talked about adding support for patching from data sources negz is working on this area this would mean you could create a composition that had a path that did something like interpolating the namespace into the 
providerconfig name muvaf describes how this is already possible with current mechanisms here muvaf suggestion in is pretty slick and works with current functionality and without any additional tooling
| 0
|
342,897
| 24,760,931,195
|
IssuesEvent
|
2022-10-22 00:06:12
|
pygamelib/pygamelib
|
https://api.github.com/repos/pygamelib/pygamelib
|
opened
|
Create a new particle emitter that uses a pygamelib.gfx.core.Sprite to initialize the particles.
|
enhancement Hacktoberfest New feature documentation
|
**Problem summary/missing feature:**
One of the cool things that can be done with regular particle systems (like the one in Godot) is sprite explosion. We cannot do that yet.
We definitely want to be able to do that!
**Expected behavior:**
First, have a look at that [video](https://www.youtube.com/watch?v=D7XSL0zBOwI) for inspiration.
The emitter should work similarly but since we do not have shaders (yet?), we can use a new particle emitter that is initialized with the sprixels of the sprite.
**Work to do:**
Without putting too much thought into this, I would suggest to:
- create a new particle emitter (SpriteModifierEmitter?). It ignores the particle type of the `EmitterProperties` object since it's going to use the base `Particle` and use the sprixel of the sprite.
- inherit from `pygamelib.gfx.particles.ParticleEmitter`.
- initialize the particles with the sprixels of the sprite (maybe use `Sprixel.copy()` to make a copy of the sprixel). The particles need to be positioned correctly depending on their position in the sprite. The emitter's position (on screen) should probably be at the top left corner of the sprite (its own (0,0) internal coordinate). Remember that emitters are not actually drawn on screen, only the particles.
- overload the `update()` method if needed (to move the particles).
An item that is up for discussion is whether the current randomness, variance, and different vectors are enough to do what we want. This way we could attain different types of effects without too much hassle (like the same emitter class being used for an explosion effect or a fire effect - see the [particle benchmark](https://github.com/pygamelib/pygamelib/tree/master/examples/benchmark-particle-system) for examples).
|
1.0
|
Create a new particle emitter that uses a pygamelib.gfx.core.Sprite to initialize the particles. - **Problem summary/missing feature:**
One of the cool things that can be done with regular particle systems (like the one in Godot) is sprite explosion. We cannot do that yet.
We definitely want to be able to do that!
**Expected behavior:**
First, have a look at that [video](https://www.youtube.com/watch?v=D7XSL0zBOwI) for inspiration.
The emitter should work similarly but since we do not have shaders (yet?), we can use a new particle emitter that is initialized with the sprixels of the sprite.
**Work to do:**
Without putting too much thought into this, I would suggest to:
- create a new particle emitter (SpriteModifierEmitter?). It ignores the particle type of the `EmitterProperties` object since it's going to use the base `Particle` and use the sprixel of the sprite.
- inherit from `pygamelib.gfx.particles.ParticleEmitter`.
- initialize the particles with the sprixels of the sprite (maybe use `Sprixel.copy()` to make a copy of the sprixel). The particles need to be positioned correctly depending on their position in the sprite. The emitter's position (on screen) should probably be at the top left corner of the sprite (its own (0,0) internal coordinate). Remember that emitters are not actually drawn on screen, only the particles.
- overload the `update()` method if needed (to move the particles).
An item that is up for discussion is whether the current randomness, variance, and different vectors are enough to do what we want. This way we could attain different types of effects without too much hassle (like the same emitter class being used for an explosion effect or a fire effect - see the [particle benchmark](https://github.com/pygamelib/pygamelib/tree/master/examples/benchmark-particle-system) for examples).
|
non_test
|
create a new particle emitter that uses a pygamelib gfx core sprite to initialize the particles problem summary missing feature one of the cool things that can be done with regular particle systems like the one in godot is sprite explosion we cannot do that yet we definitely want to be able to do that expected behavior first have a look at that for inspiration the emitter should work similarly but since we do not have shaders yet we can use a new particle emitter that is initialized with the sprixels of the sprite work to do without putting too much thought into this i would suggest to create a new particle emitter spritemodifieremitter it ignores the particle type of the emitterproperties object since its going to use the base particle and use the sprixel of the sprite inherit from pygamelib gfx particles particleemitter initialize the particles with the sprixels of the sprite maybe use sprixel copy to make a copy of the sprixel the particles need to be positioned correctly depending on their position in the sprite the emitter s position on screen should probably be at the top left corner of the sprite its own internal coordinate remember that emitters are not actually drawn on screen only the particles overload the update method if needed to move the particles an item that is up for discussion is considering if the current randomness variance and different vectors are enough to do what we want this way we could attain different types of effects without too much hassle like the same emitter class being used for the an explosion effect or a fire effect see the for examples
| 0
|
335,277
| 30,020,771,276
|
IssuesEvent
|
2023-06-26 23:05:52
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix comparison_ops.test_torch_less
|
PyTorch Frontend Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix comparison_ops.test_torch_less - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5383687338/jobs/9770742045"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix comparison ops test torch less jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 1
|
436,901
| 30,575,152,301
|
IssuesEvent
|
2023-07-21 04:20:03
|
galasa-dev/galasa.dev
|
https://api.github.com/repos/galasa-dev/galasa.dev
|
opened
|
Provide additional information about OBRs
|
documentation
|
Include a more detailed description of what an OBR actually is, perhaps referring to the OSGi docs here: https://felix.apache.org/documentation/subprojects/apache-felix-osgi-bundle-repository.html
|
1.0
|
Provide additional information about OBRs - Include a more detailed description of what an OBR actually is, perhaps referring to the OSGi docs here: https://felix.apache.org/documentation/subprojects/apache-felix-osgi-bundle-repository.html
|
non_test
|
provide additional information about obrs include a more detailed description of what an obr actually is perhaps referring to the osgi docs here
| 0
|
72,660
| 13,902,718,487
|
IssuesEvent
|
2020-10-20 05:57:07
|
quarkusio/quarkus
|
https://api.github.com/repos/quarkusio/quarkus
|
closed
|
Dockerfile codestart could be factorized to ease maintenance
|
area/codestarts kind/enhancement
|
**Description**
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.fast-jar,
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.jvm
and https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.native
have a lot of duplicated content.
**Implementation ideas**
You can use the qute include mechanism like we do for gradle: `https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/buildtool/gradle/base/build-layout.include.qute`
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/buildtool/gradle/java/build.tpl.qute.gradle
Also, you can use buildtool data to provide the commands..
|
1.0
|
Dockerfile codestart could be factorized to ease maintenance - **Description**
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.fast-jar,
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.jvm
and https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/tooling/dockerfiles/base/src/main/docker/Dockerfile.tpl.qute.native
have a lot of duplicated content.
**Implementation ideas**
You can use the qute include mechanism like we do for gradle: `https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/buildtool/gradle/base/build-layout.include.qute`
https://github.com/quarkusio/quarkus/blob/master/devtools/platform-descriptor-json/src/main/resources/codestarts/quarkus/core/buildtool/gradle/java/build.tpl.qute.gradle
Also, you can use buildtool data to provide the commands..
|
non_test
|
dockerfile codestart could be factorized to ease maintenance description and have a lot of duplicated content implementation ideas you can use the qute include mechanism like we do for gradle also you can use buildtool data to provide the commands
| 0
|
332,722
| 29,491,427,681
|
IssuesEvent
|
2023-06-02 13:44:36
|
multi-ego/multi-eGO
|
https://api.github.com/repos/multi-ego/multi-eGO
|
closed
|
multiego.py --noheader flag to not write the header in the output files
|
enhancement regtests
|
@frantropy I think this would be better so to avoid updating the test output files when is not needed
|
1.0
|
multiego.py --noheader flag to not write the header in the output files - @frantropy I think this would be better so to avoid updating the test output files when is not needed
|
test
|
multiego py noheader flag to not write the header in the output files frantropy i think this would be better so to avoid updating the test output files when is not needed
| 1
|
64,170
| 3,205,939,625
|
IssuesEvent
|
2015-10-04 15:41:52
|
cs2103aug2015-f09-4c/main
|
https://api.github.com/repos/cs2103aug2015-f09-4c/main
|
closed
|
A user can save the data
|
comp.LOGIC priority.high (must have) type.story
|
so that the user can close the program and open the program with the saved data later.
|
1.0
|
A user can save the data - so that the user can close the program and open the program with the saved data later.
|
non_test
|
a user can save the data so that the user can close the program and open the program with the saved data later
| 0
|
57,575
| 24,155,576,542
|
IssuesEvent
|
2022-09-22 07:22:54
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Cloning an existing slot
|
app-service/svc triaged cxp doc-bug Pri2
|
In the section "Cloning an existing App Slot", there is the following command to clone a slot:
`$destapp = New-AzWebApp -ResourceGroupName DestinationAzureResourceGroup -Name dest-app -Location "North Central US" -AppServicePlan DestinationAppServicePlan -SourceWebApp $srcappslot`
I tried but always received the following error:
`New-AzWebApp : Long running operation failed with status 'InternalServerError'.`
After contacting Microsoft's support, they instructed me to use the **New-AzWebAppSlot** instead of **New-AzWebApp** and everything worked fine. :
`$destslot= New-AzWebAppSlot -ResourceGroupName DestinationAzureResourceGroup -Name dest-app -SourceWebApp $srcappslot -Slot staging`
If you contact me, I could provide us the support ticket privately.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: daf14087-1765-4636-805c-6faec03d8a9b
* Version Independent ID: 4a6b910d-403e-997a-b010-0942e8ef624c
* Content: [Clone app with PowerShell - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-app-cloning)
* Content Source: [articles/app-service/app-service-web-app-cloning.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/app-service-web-app-cloning.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
1.0
|
Cloning an existing slot - In the section "Cloning an existing App Slot", there is the following command to clone a slot:
`$destapp = New-AzWebApp -ResourceGroupName DestinationAzureResourceGroup -Name dest-app -Location "North Central US" -AppServicePlan DestinationAppServicePlan -SourceWebApp $srcappslot`
I tried but always received the following error:
`New-AzWebApp : Long running operation failed with status 'InternalServerError'.`
After contacting Microsoft's support, they instructed me to use the **New-AzWebAppSlot** instead of **New-AzWebApp** and everything worked fine. :
`$destslot= New-AzWebAppSlot -ResourceGroupName DestinationAzureResourceGroup -Name dest-app -SourceWebApp $srcappslot -Slot staging`
If you contact me, I could provide us the support ticket privately.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: daf14087-1765-4636-805c-6faec03d8a9b
* Version Independent ID: 4a6b910d-403e-997a-b010-0942e8ef624c
* Content: [Clone app with PowerShell - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-app-cloning)
* Content Source: [articles/app-service/app-service-web-app-cloning.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/app-service-web-app-cloning.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
non_test
|
cloning an existing slot in the section cloning an existing app slot there is the following command to clone a slot destapp new azwebapp resourcegroupname destinationazureresourcegroup name dest app location north central us appserviceplan destinationappserviceplan sourcewebapp srcappslot i tried but always received the following error new azwebapp long running operation failed with status internalservererror after contacting microsoft s support they instructed me to use the new azwebappslot instead of new azwebapp and everything worked fine destslot new azwebappslot resourcegroupname destinationazureresourcegroup name dest app sourcewebapp srcappslot slot staging if you contact me i could provide us the support ticket privately document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin
| 0
|
62,106
| 6,776,215,750
|
IssuesEvent
|
2017-10-27 16:58:09
|
tpfinal-pp1/tp-final
|
https://api.github.com/repos/tpfinal-pp1/tp-final
|
closed
|
Edicion de Inmueble con Inmobiliaria (no admite nuevas)
|
bug Liberado por desarrollo Liberado por testing
|
Al editar un inmueble existente y cargar una nueva inmobiliaria con el formulario, dicha inmobiliaria no se guarda y queda la anterior
|
1.0
|
Edicion de Inmueble con Inmobiliaria (no admite nuevas) - Al editar un inmueble existente y cargar una nueva inmobiliaria con el formulario, dicha inmobiliaria no se guarda y queda la anterior
|
test
|
edicion de inmueble con inmobiliaria no admite nuevas al editar un inmueble existente y cargar una nueva inmobiliaria con el formulario dicha inmobiliaria no se guarda y queda la anterior
| 1
|
242,254
| 20,207,275,085
|
IssuesEvent
|
2022-02-11 22:01:30
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Refactor `audit_test.go`
|
area/test kind/cleanup sig/api-machinery lifecycle/rotten triage/accepted
|
<!-- Feature requests are unlikely to make progress as an issue.
Instead, please suggest enhancements by engaging with SIGs on slack and mailing lists.
A proposal that works through the design along with the implications of the change can be opened as a KEP:
https://git.k8s.io/enhancements/keps#kubernetes-enhancement-proposals-keps
-->
#### What would you like to be added:
The `test/integration/controlplane/audit_test.go` contains a lot of duplicated code around checking if certain audit events are there.
The `testAudit*` functions combine both actions for the tests and the assertions for the audit events. Decoupling those two things will enable us to move the "actions" code higher of the stack and unify the audit events evaluation code.
#### Why is this needed:
Unifying this test will make it shorter and will also make some of the functions more reusable. Less code = easier maintenance.
|
1.0
|
Refactor `audit_test.go` - <!-- Feature requests are unlikely to make progress as an issue.
Instead, please suggest enhancements by engaging with SIGs on slack and mailing lists.
A proposal that works through the design along with the implications of the change can be opened as a KEP:
https://git.k8s.io/enhancements/keps#kubernetes-enhancement-proposals-keps
-->
#### What would you like to be added:
The `test/integration/controlplane/audit_test.go` contains a lot of duplicated code around checking if certain audit events are there.
The `testAudit*` functions combine both actions for the tests and the assertions for the audit events. Decoupling those two things will enable us to move the "actions" code higher of the stack and unify the audit events evaluation code.
#### Why is this needed:
Unifying this test will make it shorter and will also make some of the functions more reusable. Less code = easier maintenance.
|
test
|
refactor audit test go feature requests are unlikely to make progress as an issue instead please suggest enhancements by engaging with sigs on slack and mailing lists a proposal that works through the design along with the implications of the change can be opened as a kep what would you like to be added the test integration controlplane audit test go contains a lot of duplicated code around checking if certain audit events are there the testaudit functions combine both actions for the tests and the assertions for the audit events decoupling those two things will enable us to move the actions code higher of the stack and unify the audit events evaluation code why is this needed unifying this test will make it shorter and will also make some of the functions more reusable less code easier maintenance
| 1
|
245,089
| 20,744,928,764
|
IssuesEvent
|
2022-03-14 21:39:56
|
Uuvana-Studios/longvinter-windows-client
|
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
|
opened
|
Cheater Spamming Tents EU2
|
bug Not Tested
|
someone is putting hundreds of tents around the river and for some parts of the map on the EU2 server, it's the first time I'm on github, I don't know if I'm creating the post right or not, but this problem is definitely very annoying

.
|
1.0
|
Cheater Spamming Tents EU2 - someone is putting hundreds of tents around the river and for some parts of the map on the EU2 server, it's the first time I'm on github, I don't know if I'm creating the post right or not, but this problem is definitely very annoying

.
|
test
|
cheater spamming tents someone is putting hundreds of tents around the river and for some parts of the map on the server it s the first time i m on github i don t know if i m creating the post right or not but this problem is definitely very annoying
| 1
|
262,343
| 8,269,516,265
|
IssuesEvent
|
2018-09-15 07:01:18
|
trimstray/multitor
|
https://api.github.com/repos/trimstray/multitor
|
closed
|
Kali linux not hpts package for installing!!
|
Priority: Low Status: Feedback
|
root@AVI:~# apt-get install hpts
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package hpts
|
1.0
|
Kali linux not hpts package for installing!! - root@AVI:~# apt-get install hpts
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package hpts
|
non_test
|
kali linux not hpts package for installing root avi apt get install hpts reading package lists done building dependency tree reading state information done e unable to locate package hpts
| 0
|
41,818
| 5,398,438,402
|
IssuesEvent
|
2017-02-27 16:56:26
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Possible regression in pod-startup-time
|
kind/flake priority/failing-test sig/scalability
|
In the last 10 runs of kubemark-500, 4-of them failed with "too high pod-startup-time".
We've never had problems with it before.
The failures are:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2777
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2782
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2784
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2785
Should be investigated.
@shyamjvs @gmarek
|
1.0
|
Possible regression in pod-startup-time - In the last 10 runs of kubemark-500, 4-of them failed with "too high pod-startup-time".
We've never had problems with it before.
The failures are:
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2777
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2782
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2784
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-kubemark-500-gce/2785
Should be investigated.
@shyamjvs @gmarek
|
test
|
possible regression in pod startup time in the last runs of kubemark of them failed with too high pod startup time we ve never had problems with it before the failures are should be investigated shyamjvs gmarek
| 1
|
124,855
| 17,782,852,413
|
IssuesEvent
|
2021-08-31 07:33:36
|
retaildevcrews/aks-secure-baseline-hack-template
|
https://api.github.com/repos/retaildevcrews/aks-secure-baseline-hack-template
|
closed
|
docker image sources
|
InternalFeedback Security
|
- should we move all of our 3rd party docker images sources to bitnami?
- security and compliance
|
True
|
docker image sources - - should we move all of our 3rd party docker images sources to bitnami?
- security and compliance
|
non_test
|
docker image sources should we move all of our party docker images sources to bitnami security and compliance
| 0
|
6,557
| 3,411,009,037
|
IssuesEvent
|
2015-12-04 23:03:49
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
performRebuild() should be nicer when you build null
|
affects: dev experience affects: framework ⚠ code health
|
```dart
void performRebuild() {
assert(_debugSetAllowIgnoredCallsToMarkNeedsBuild(true));
Widget built;
try {
built = _builder(this);
assert(built != null);
} catch (e, stack) {
```
The assert in the try-catch block here should be something like:
```dart
assert(() {
if (build == null) {
debugPrint('Widget: $widget');
assert(() {
'A build function returned null. Build functions must never return null.'
'The offending widget is displayed above.';
return false;
});
}
return true;
});
```
|
1.0
|
performRebuild() should be nicer when you build null - ```dart
void performRebuild() {
assert(_debugSetAllowIgnoredCallsToMarkNeedsBuild(true));
Widget built;
try {
built = _builder(this);
assert(built != null);
} catch (e, stack) {
```
The assert in the try-catch block here should be something like:
```dart
assert(() {
if (build == null) {
debugPrint('Widget: $widget');
assert(() {
'A build function returned null. Build functions must never return null.'
'The offending widget is displayed above.';
return false;
});
}
return true;
});
```
|
non_test
|
performrebuild should be nicer when you build null dart void performrebuild assert debugsetallowignoredcallstomarkneedsbuild true widget built try built builder this assert built null catch e stack the assert in the try catch block here should be something like dart assert if build null debugprint widget widget assert a build function returned null build functions must never return null the offending widget is displayed above return false return true
| 0
|
172,020
| 6,498,067,418
|
IssuesEvent
|
2017-08-22 15:59:51
|
healthlocker/oxleas-adhd
|
https://api.github.com/repos/healthlocker/oxleas-adhd
|
closed
|
redirect 'Super Admins' to Admin part of service not Healthlocker
|
priority-2
|
+ [x] When I log in with an email address assigned to Super Admin, I am taken to Admin service, not normal Healthlocker.
|
1.0
|
redirect 'Super Admins' to Admin part of service not Healthlocker - + [x] When I log in with an email address assigned to Super Admin, I am taken to Admin service, not normal Healthlocker.
|
non_test
|
redirect super admins to admin part of service not healthlocker when i log in with an email address assigned to super admin i am taken to admin service not normal healthlocker
| 0
|
119,459
| 10,053,936,549
|
IssuesEvent
|
2019-07-21 21:04:14
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: jepsen/bank/parts-start-kill-2 failed
|
C-test-failure O-roachtest O-robot
|
SHA: https://github.com/cockroachdb/cockroach/commits/f284f16a1224c87e9e9a54c809e7a73a5f59506d
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=jepsen/bank/parts-start-kill-2 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1379975&tab=buildLog
```
The test failed on branch=provisional_201907081642_v2.1.8, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190709-1379975/jepsen/bank/parts-start-kill-2/run_1
jepsen.go:256,jepsen.go:316,test_runner.go:670: exit status 1
```
|
2.0
|
roachtest: jepsen/bank/parts-start-kill-2 failed - SHA: https://github.com/cockroachdb/cockroach/commits/f284f16a1224c87e9e9a54c809e7a73a5f59506d
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=jepsen/bank/parts-start-kill-2 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1379975&tab=buildLog
```
The test failed on branch=provisional_201907081642_v2.1.8, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190709-1379975/jepsen/bank/parts-start-kill-2/run_1
jepsen.go:256,jepsen.go:316,test_runner.go:670: exit status 1
```
|
test
|
roachtest jepsen bank parts start kill failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests jepsen bank parts start kill pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch provisional cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts jepsen bank parts start kill run jepsen go jepsen go test runner go exit status
| 1
|
258,306
| 22,301,585,538
|
IssuesEvent
|
2022-06-13 09:12:33
|
ibissource/ladybug-frontend
|
https://api.github.com/repos/ibissource/ladybug-frontend
|
closed
|
Rerun report failure, replace, succeed
|
Testing
|
* Create reports in tab debug and copy to test tab.
* Go to test tab.
* Check that button “run all” is disabled.
* Select one report, rerun.
* Check that it has failed.
* Click “Replace”.
* Rerun the report.
* Check that it succeeds.
|
1.0
|
Rerun report failure, replace, succeed - * Create reports in tab debug and copy to test tab.
* Go to test tab.
* Check that button “run all” is disabled.
* Select one report, rerun.
* Check that it has failed.
* Click “Replace”.
* Rerun the report.
* Check that it succeeds.
|
test
|
rerun report failure replace succeed create reports in tab debug and copy to test tab go to test tab check that button “run all” is disabled select one report rerun check that it has failed click “replace” rerun the report check that it succeeds
| 1
|
290,641
| 25,082,279,493
|
IssuesEvent
|
2022-11-07 20:25:07
|
ZcashFoundation/zebra
|
https://api.github.com/repos/ZcashFoundation/zebra
|
opened
|
GitHub runners fail with linker error: "collect2: error: ld returned 1 exit status"
|
C-bug A-devops S-needs-triage P-Medium :zap: I-integration-fail C-testing
|
## Motivation
Sometimes our GitHub actions tests fail with:
> Running `rustdoc --edition=2021 --crate-type lib --crate-name zebrad
> ...
> = note: collect2: error: ld returned 1 exit status
> ...
> src/components/mempool/crawler.rs - components::mempool::crawler (line 16)
https://github.com/ZcashFoundation/zebra/actions/runs/3411951675/jobs/5676818960
This happened in the `Test beta on ubuntu-latest --features getblocktemplate-rpcs` job.
### Diagnostics
This usually seems to happen on beta Rust with `--features getblocktemplate-rpcs`, and only on GitHub Actions runners.
It could be a full disk, or a Rust build bug caused by rebuilding with extra features. It might also be a bug in Zebra's feature settings in `Cargo.toml`s.
The linker failure does not have a specific error message, we need to collect more logs to diagnose it.
## Related Work
This might be fixed by #5551.
|
1.0
|
GitHub runners fail with linker error: "collect2: error: ld returned 1 exit status" - ## Motivation
Sometimes our GitHub actions tests fail with:
> Running `rustdoc --edition=2021 --crate-type lib --crate-name zebrad
> ...
> = note: collect2: error: ld returned 1 exit status
> ...
> src/components/mempool/crawler.rs - components::mempool::crawler (line 16)
https://github.com/ZcashFoundation/zebra/actions/runs/3411951675/jobs/5676818960
This happened in the `Test beta on ubuntu-latest --features getblocktemplate-rpcs` job.
### Diagnostics
This usually seems to happen on beta Rust with `--features getblocktemplate-rpcs`, and only on GitHub Actions runners.
It could be a full disk, or a Rust build bug caused by rebuilding with extra features. It might also be a bug in Zebra's feature settings in `Cargo.toml`s.
The linker failure does not have a specific error message, we need to collect more logs to diagnose it.
## Related Work
This might be fixed by #5551.
|
test
|
github runners fail with linker error error ld returned exit status motivation sometimes our github actions tests fail with running rustdoc edition crate type lib crate name zebrad note error ld returned exit status src components mempool crawler rs components mempool crawler line this happened in the test beta on ubuntu latest features getblocktemplate rpcs job diagnostics this usually seems to happen on beta rust with features getblocktemplate rpcs and only on github actions runners it could be a full disk or a rust build bug caused by rebuilding with extra features it might also be a bug in zebra s feature settings in cargo toml s the linker failure does not have a specific error message we need to collect more logs to diagnose it related work this might be fixed by
| 1
|
390,921
| 11,565,820,893
|
IssuesEvent
|
2020-02-20 11:14:14
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
radar.weather.gov - see bug description
|
browser-firefox-mobile engine-gecko priority-normal
|
<!-- @browser: Firefox Nightly 68.5a1 & Firefox 68.5.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48595 -->
**URL**: https://radar.weather.gov/ridge/radar_lite.php?rid=lwx&product=N0R&loop=yes
**Browser / Version**: Firefox Nightly 68.5a1 & Firefox 68.5.0
**Operating System**: Android 8.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Can't create title for shortcut on home screen
**Steps to Reproduce**:
When adding shortcut to home screen, can' t create unique title under shortcut and end up with a dozen shortcuts to various pages on NWS, all of which are captioned "National Weath...". Firefox Focus allows specifying title. Moto g5 Plus phone. Firefox won't add shortcut.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/7d196afb-e4a9-42e7-8b2b-c272958e8aa4.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200120180435</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/2/c7612579-de4f-4443-8aba-fada53d98c1d)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
radar.weather.gov - see bug description - <!-- @browser: Firefox Nightly 68.5a1 & Firefox 68.5.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/48595 -->
**URL**: https://radar.weather.gov/ridge/radar_lite.php?rid=lwx&product=N0R&loop=yes
**Browser / Version**: Firefox Nightly 68.5a1 & Firefox 68.5.0
**Operating System**: Android 8.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Can't create title for shortcut on home screen
**Steps to Reproduce**:
When adding shortcut to home screen, can' t create unique title under shortcut and end up with a dozen shortcuts to various pages on NWS, all of which are captioned "National Weath...". Firefox Focus allows specifying title. Moto g5 Plus phone. Firefox won't add shortcut.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/7d196afb-e4a9-42e7-8b2b-c272958e8aa4.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200120180435</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/2/c7612579-de4f-4443-8aba-fada53d98c1d)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
radar weather gov see bug description url browser version firefox nightly firefox operating system android tested another browser yes problem type something else description can t create title for shortcut on home screen steps to reproduce when adding shortcut to home screen can t create unique title under shortcut and end up with a dozen shortcuts to various pages on nws all of which are captioned national weath firefox focus allows specifying title moto plus phone firefox won t add shortcut view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
214,118
| 16,563,581,407
|
IssuesEvent
|
2021-05-29 01:50:04
|
mozilla-mobile/focus-ios
|
https://api.github.com/repos/mozilla-mobile/focus-ios
|
closed
|
Temporarily disable UI Tests
|
eng:disabled-test eng:intermittent-test
|
Both locally in Xcode and on CI our UI tests have suddenly become super unreliable.
* In Xcode I was consistently getting green tests but right now there are always between 6 and 9 failing tests.
* In CI pretty much every build fails in either Klar or Focus with a random failing UI Test. Which tests fail seems to be completely random. Sometimes it's in Klar, sometimes it's in Focus.
Let's disable the tests until we figure out what is going on.
|
2.0
|
Temporarily disable UI Tests - Both locally in Xcode and on CI our UI tests have suddenly become super unreliable.
* In Xcode I was consistently getting green tests but right now there are always between 6 and 9 failing tests.
* In CI pretty much every build fails in either Klar or Focus with a random failing UI Test. Which tests fail seems to be completely random. Sometimes it's in Klar, sometimes it's in Focus.
Let's disable the tests until we figure out what is going on.
|
test
|
temporarily disable ui tests both locally in xcode and on ci our ui tests have suddenly become super unreliable in xcode i was consistently getting green tests but right now there are always between and failing tests in ci pretty much every build fails in either klar or focus with a random failing ui test which tests fail seems to be completely random sometimes it in klar sometimes it in focus let s disable the tests until we figure out what is going on
| 1
|
15,069
| 3,440,023,414
|
IssuesEvent
|
2015-12-14 12:39:02
|
ppekrol/ravenqa
|
https://api.github.com/repos/ppekrol/ravenqa
|
opened
|
Can download info package
|
test
|
1. Go to Gather Debug Info.
2. Create info package with stacktraces.
3. Download info package.
4. Refresh.
5. Import.
6. Verify.
|
1.0
|
Can download info package - 1. Go to Gather Debug Info.
2. Create info package with stacktraces.
3. Download info package.
4. Refresh.
5. Import.
6. Verify.
|
test
|
can download info package go to gather debug info create info package with stacktraces download info package refresh import verify
| 1
|
151,255
| 23,789,355,056
|
IssuesEvent
|
2022-09-02 13:13:52
|
department-of-veterans-affairs/vets-design-system-documentation
|
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
|
opened
|
Form components error messaging - Audit
|
vsp-design-system-team
|
## Description
Conduct an audit of the error messaging used with form components on VA.gov. Identify as many different types of error messages in form components in use on VA.gov as possible and share with the Design System Team.
## Tasks
- [ ] Work with engineers and the Governance team to find examples of form component error messaging
- [ ] Add screenshots of component usage examples to a Mural board, including links to sources
- [ ] Present findings to the DST
- [ ] Work with the DST to determine what steps will be necessary to bring error messaging into alignment
- [ ] Add link to Mural board as a comment in this ticket and to the component design ticket if applicable
## Acceptance Criteria
- [ ] Error messaging examples have been collected on a Mural board and a link to the board has been added to this ticket
- [ ] Audit findings have been shared with the DST and a plan to move forward has been created.
|
1.0
|
Form components error messaging - Audit - ## Description
Conduct an audit of the error messaging used with form components on VA.gov. Identify as many different types of error messages in form components in use on VA.gov as possible and share with the Design System Team.
## Tasks
- [ ] Work with engineers and the Governance team to find examples of form component error messaging
- [ ] Add screenshots of component usage examples to a Mural board, including links to sources
- [ ] Present findings to the DST
- [ ] Work with the DST to determine what steps will be necessary to bring error messaging into alignment
- [ ] Add link to Mural board as a comment in this ticket and to the component design ticket if applicable
## Acceptance Criteria
- [ ] Error messaging examples have been collected on a Mural board and a link to the board has been added to this ticket
- [ ] Audit findings have been shared with the DST and a plan to move forward has been created.
|
non_test
|
form components error messaging audit description conduct an audit of the error messaging used with form components on va gov identify as many different types of error messages in form components in use on va gov as possible and share with the design system team tasks work with engineers and the governance team to find examples of form component error messaging add screenshots of component usage examples to a mural board including links to sources present findings to the dst work with the dst to determine what steps will be necessary to bring error messaging into alignment add link to mural board as a comment in this ticket and to the component design ticket if applicable acceptance criteria error messaging examples have been collected on a mural board and a link to the board has been added to this ticket audit findings have been shared with the dst and a plan to move forward has been created
| 0
|
301,039
| 26,012,197,083
|
IssuesEvent
|
2022-12-21 03:37:05
|
oxidecomputer/omicron
|
https://api.github.com/repos/oxidecomputer/omicron
|
closed
|
test hung due to Go runtime hang during panic
|
⁉️ Test Flake
|
While trying to reproduce #1223 with CockroachDB 22.1.9, I ran into a case where a test hung because the `cockroach version` command that we run during the test suite itself hung, apparently while panicking. Details coming.
|
1.0
|
test hung due to Go runtime hang during panic - While trying to reproduce #1223 with CockroachDB 22.1.9, I ran into a case where a test hung because the `cockroach version` command that we run during the test suite itself hung, apparently while panicking. Details coming.
|
test
|
test hung due to go runtime hang during panic while trying to reproduce with cockroachdb i ran into a case where a test hung because the cockroach version command that we run during the test suite itself hung apparently while panicking details coming
| 1
|
318,568
| 23,727,465,494
|
IssuesEvent
|
2022-08-30 21:05:18
|
sympy/sympy
|
https://api.github.com/repos/sympy/sympy
|
opened
|
Document _eval_mpmath
|
Documentation functions
|
I missed _eval_mpmath when writing [the custom functions guide](https://docs.sympy.org/latest/guides/custom-functions.html#numerical-evaluation-with-evalf). See https://github.com/sympy/sympy/blob/c07abcc94e5b1521062c84d6d0a21b96dedfe07a/sympy/core/function.py#L545-L550
Also, IMO, we ought to use this method everywhere instead of the `MPMATH_TRANSLATIONS` dictionary, which is confusingly repurposed from `lambdify`.
|
1.0
|
Document _eval_mpmath - I missed _eval_mpmath when writing [the custom functions guide](https://docs.sympy.org/latest/guides/custom-functions.html#numerical-evaluation-with-evalf). See https://github.com/sympy/sympy/blob/c07abcc94e5b1521062c84d6d0a21b96dedfe07a/sympy/core/function.py#L545-L550
Also, IMO, we ought to use this method everywhere instead of the `MPMATH_TRANSLATIONS` dictionary, which is confusingly repurposed from `lambdify`.
|
non_test
|
document eval mpmath i missed eval mpmath when writing see also imo we ought to use this method everywhere instead of the mpmath translations dictionary which is confusingly repurposed from lambdify
| 0
|
60,793
| 6,715,245,935
|
IssuesEvent
|
2017-10-13 20:17:48
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
test: fixtures module returning absolute paths causes test failures in Windows with 'x' or 'u' in path after delimiter
|
test windows
|
* **Version**: any with `fixtures` module
* **Platform**: Windows
* **Subsystem**: test
Refs: https://github.com/nodejs/node/issues/16023 (see https://github.com/nodejs/node/issues/16023#issuecomment-334885754)
cc @jasnell
|
1.0
|
test: fixtures module returning absolute paths causes test failures in Windows with 'x' or 'u' in path after delimiter - * **Version**: any with `fixtures` module
* **Platform**: Windows
* **Subsystem**: test
Refs: https://github.com/nodejs/node/issues/16023 (see https://github.com/nodejs/node/issues/16023#issuecomment-334885754)
cc @jasnell
|
test
|
test fixtures module returning absolute paths causes test failures in windows with x or u in path after delimiter version any with fixtures module platform windows subsystem test refs see cc jasnell
| 1
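The failure mode in the record above — Windows absolute paths breaking only when a path segment after the delimiter starts with 'x' or 'u' — can be modeled in a few lines. This is an illustrative Python sketch, not Node's actual fixtures module: it assumes the path gets fed through some escape-interpreting layer, where `\x`/`\u` form malformed escapes and raise, while other letters corrupt the path silently.

```python
import codecs

# Model of the hazard: treat a Windows path as an escaped string.
# A segment starting with a letter like 't' or 'f' decodes silently
# into a control character (corruption), while 'u' or 'x' after the
# backslash is a malformed \uNNNN / \xNN escape and raises instead.
good = codecs.decode(r"C:\temp\fixtures", "unicode_escape")
print(repr(good))  # '\t' has silently become a TAB character

try:
    codecs.decode(r"C:\users\xfiles", "unicode_escape")
    outcome = "decoded"
except UnicodeDecodeError:
    outcome = "raised"
print(outcome)
```

Escaping the backslashes up front (or never re-interpreting paths as escaped strings) avoids both outcomes, which is why such a fix belongs in the shared path-building code rather than in individual tests.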
|
111,423
| 9,530,925,799
|
IssuesEvent
|
2019-04-29 14:54:33
|
opesci/devito
|
https://api.github.com/repos/opesci/devito
|
closed
|
aliasing fixture in the devito testing infrastructure
|
good for newcomers testing
|
the ``pytest_fixture`` ``a`` is defined at least twice: once in ``conftest.py`` and then again in ``interpolation.py``. Depending on the execution order of the tests, this may lead to "false bugs".
This really sucks. It's a mistake to pollute ``conftest.py`` with lots of symbols/dimensions/... to be shared by different tests. It also sucks that pytest registers two fixtures with the same name and doesn't complain
|
1.0
|
aliasing fixture in the devito testing infrastructure - the ``pytest_fixture`` ``a`` is defined at least twice: once in ``conftest.py`` and then again in ``interpolation.py``. Depending on the execution order of the tests, this may lead to "false bugs".
This really sucks. It's a mistake to pollute ``conftest.py`` with lots of symbols/dimensions/... to be shared by different tests. It also sucks that pytest registers two fixtures with the same name and doesn't complain
|
test
|
aliasing fixture in the devito testing infrastructure the pytest fixture a is defined at least twice once in conftest py and then again in interpolation py depending on the execution order of the tests this may lead to false bugs this really sucks it s a mistake to pollute conftest py with lots of symbols dimensions to be shared by different tests it also sucks that pytest registers two fixtures with the same name and doesn t complain
| 1
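The aliasing hazard in the record above is easy to model with a plain name-keyed registry. This is a hypothetical sketch, not devito's or pytest's real code: it only mirrors the relevant behavior, namely that pytest resolves fixtures by name and a same-named definition registered closer to the test silently replaces the one from `conftest.py`.

```python
# Fixtures are keyed by name only; a duplicate registration silently
# overwrites the earlier one instead of complaining, as described above.
registry = {}

def register_fixture(name, func):
    registry[name] = func  # no duplicate check

register_fixture("a", lambda: "a-from-conftest")
register_fixture("a", lambda: "a-from-interpolation-tests")  # shadows

resolved = registry["a"]()
print(resolved)  # only the later definition is reachable
```

In real pytest, which definition "wins" depends on where the test lives and on collection order, so results can change with execution order — the "false bugs" mentioned above.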
|
377,724
| 26,267,339,805
|
IssuesEvent
|
2023-01-06 13:49:27
|
JetBrains/skiko
|
https://api.github.com/repos/JetBrains/skiko
|
closed
|
No sample or proper documentation on how to create BufferedImages with the Canvas content
|
documentation question
|
There's no sample or documentation about certain methods skiaLayer.screenshot(), createImage(), prepareImage() and some other methods that make it sound as if you could get an image out of the canvas. So far I've made countless attempts at making it draw to the JFrame and then also save it as an image (without using the Robot class), and I've failed.
I've managed to make it work by using Surface and then applying the Canvas that I made, but the result is much worse in quality.
```
SwingUtilities.invokeLater {
val window = JFrame("Card").apply {
preferredSize = Dimension(600, 200)
}
window.background = java.awt.Color(30, 33, 36)
window.isUndecorated = true
skiaLayer.attachTo(window.contentPane)
window.pack()
window.isVisible = true
val image = skiaLayer.screenshot()!!.toBufferedImage()
val arrayByte = ByteArrayOutputStream()
ImageIO.write(image, "png", arrayByte)
}
```
This does not work.
```
SwingUtilities.invokeLater {
val window = JFrame("Card").apply {
preferredSize = Dimension(600, 200)
}
window.background = java.awt.Color(30, 33, 36)
window.isUndecorated = true
//skiaLayer.attachTo(window.contentPane)
skiaLayer.needRedraw()
window.contentPane.repaint()
window.pack()
window.isVisible = true
BufferedImage(600, 200, BufferedImage.TYPE_INT_ARGB).let {
skiaLayer.needRedraw()
it.createGraphics()
skiaLayer.paint(it.graphics)
val arrayByte = ByteArrayOutputStream()
ImageIO.write(it, "png", arrayByte)
    }
}
```
And this one the BufferedImage comes out with only the intended size and totally transparent.
|
1.0
|
No sample or proper documentation on how to create BufferedImages with the Canvas content - There's no sample or documentation about certain methods skiaLayer.screenshot(), createImage(), prepareImage() and some other methods that make it sound as if you could get an image out of the canvas. So far I've made countless attempts at making it draw to the JFrame and then also save it as an image (without using the Robot class), and I've failed.
I've managed to make it work by using Surface and then applying the Canvas that I made, but the result is much worse in quality.
```
SwingUtilities.invokeLater {
val window = JFrame("Card").apply {
preferredSize = Dimension(600, 200)
}
window.background = java.awt.Color(30, 33, 36)
window.isUndecorated = true
skiaLayer.attachTo(window.contentPane)
window.pack()
window.isVisible = true
val image = skiaLayer.screenshot()!!.toBufferedImage()
val arrayByte = ByteArrayOutputStream()
ImageIO.write(image, "png", arrayByte)
}
```
This does not work.
```
SwingUtilities.invokeLater {
val window = JFrame("Card").apply {
preferredSize = Dimension(600, 200)
}
window.background = java.awt.Color(30, 33, 36)
window.isUndecorated = true
//skiaLayer.attachTo(window.contentPane)
skiaLayer.needRedraw()
window.contentPane.repaint()
window.pack()
window.isVisible = true
BufferedImage(600, 200, BufferedImage.TYPE_INT_ARGB).let {
skiaLayer.needRedraw()
it.createGraphics()
skiaLayer.paint(it.graphics)
val arrayByte = ByteArrayOutputStream()
ImageIO.write(it, "png", arrayByte)
    }
}
```
And this one the BufferedImage comes out with only the intended size and totally transparent.
|
non_test
|
no sample or proper documentation on how to create bufferedimages with the canvas content there s no sample or documentation about certain methods skialayer screenshot createimage prepareimage and some other methods that make it sound as if you could get an image out of the canvas at the moment i ve made countless attempt at trying to make it draw to the jframe and then try to also save it as an image without using the robot class and i ve failed i ve managed to make it work by using surface and then applying the canvas that i made but the result is much worse in quality swingutilities invokelater val window jframe card apply preferredsize dimension window background java awt color window isundecorated true skialayer attachto window contentpane window pack window isvisible true val image skialayer screenshot tobufferedimage val arraybyte bytearrayoutputstream imageio write image png arraybyte this does not work swingutilities invokelater val window jframe card apply preferredsize dimension window background java awt color window isundecorated true skialayer attachto window contentpane skialayer needredraw window contentpane repaint window pack window isvisible true bufferedimage bufferedimage type int argb let skialayer needredraw it creategraphics skialayer paint it graphics val arraybyte bytearrayoutputstream imageio write it png arraybyte and this one the bufferedimage comes out with only the intended size and totally transparent
| 0
|
284,786
| 30,913,688,955
|
IssuesEvent
|
2023-08-05 02:37:15
|
Nivaskumark/kernel_v4.19.72_old
|
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
|
reopened
|
CVE-2022-4379 (High) detected in linux-yoctov5.4.51
|
Mend: dependency security vulnerability
|
## CVE-2022-4379 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in __nfs42_ssc_open() in fs/nfs/nfs4file.c in the Linux kernel. This flaw allows an attacker to conduct a remote denial of service.
<p>Publish Date: 2023-01-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4379>CVE-2022-4379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4379">https://www.linuxkernelcves.com/cves/CVE-2022-4379</a></p>
<p>Release Date: 2023-01-10</p>
<p>Fix Resolution: v6.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-4379 (High) detected in linux-yoctov5.4.51 - ## CVE-2022-4379 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/nfsd/nfs4proc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free vulnerability was found in __nfs42_ssc_open() in fs/nfs/nfs4file.c in the Linux kernel. This flaw allows an attacker to conduct a remote denial of service.
<p>Publish Date: 2023-01-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4379>CVE-2022-4379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4379">https://www.linuxkernelcves.com/cves/CVE-2022-4379</a></p>
<p>Release Date: 2023-01-10</p>
<p>Fix Resolution: v6.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files fs nfsd c fs nfsd c vulnerability details a use after free vulnerability was found in ssc open in fs nfs c in the linux kernel this flaw allows an attacker to conduct a remote denial publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
101,880
| 31,717,846,276
|
IssuesEvent
|
2023-09-10 03:53:40
|
PaddlePaddle/Paddle
|
https://api.github.com/repos/PaddlePaddle/Paddle
|
opened
|
Compilation error when `WITH_MKL=ON`
|
status/new-issue type/build
|
### Issue Description
```
cmake .. -DPY_VERSION=3.7 -DCMAKE_BUILD_TYPE=Release -DWITH_PYTHON=ON -DWITH_MKL=OFF -DWITH_GPU=ON -DWITH_DISTRIBUTE=ON -DWITH_TESTING=ON -DWITH_FLUID_ONLY=ON
```
<img width="1917" alt="image" src="https://github.com/PaddlePaddle/Paddle/assets/10721757/b65fb084-a9a4-4b04-8614-4e9f452a5eb2">
### Version & Environment Information
****************************************
Paddle version: N/A
Paddle With CUDA: N/A
OS: ubuntu 18.04
GCC version: (GCC) 8.2.0
Clang version: 3.8.0 (tags/RELEASE_380/final)
CMake version: version 3.20.0
Libc version: glibc 2.26
Python version: 3.7.15
CUDA version: 11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
cuDNN version: 8.6.0
Nvidia driver version: N/A
Nvidia driver List: N/A
****************************************
|
1.0
|
Compilation error when `WITH_MKL=ON` - ### Issue Description
```
cmake .. -DPY_VERSION=3.7 -DCMAKE_BUILD_TYPE=Release -DWITH_PYTHON=ON -DWITH_MKL=OFF -DWITH_GPU=ON -DWITH_DISTRIBUTE=ON -DWITH_TESTING=ON -DWITH_FLUID_ONLY=ON
```
<img width="1917" alt="image" src="https://github.com/PaddlePaddle/Paddle/assets/10721757/b65fb084-a9a4-4b04-8614-4e9f452a5eb2">
### Version & Environment Information
****************************************
Paddle version: N/A
Paddle With CUDA: N/A
OS: ubuntu 18.04
GCC version: (GCC) 8.2.0
Clang version: 3.8.0 (tags/RELEASE_380/final)
CMake version: version 3.20.0
Libc version: glibc 2.26
Python version: 3.7.15
CUDA version: 11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
cuDNN version: 8.6.0
Nvidia driver version: N/A
Nvidia driver List: N/A
****************************************
|
non_test
|
with mkl on compilation error issue description cmake dpy version dcmake build type release dwith python on dwith mkl off dwith gpu on dwith distribute on dwith testing on dwith fluid only on img width alt image src version environment information paddle version n a paddle with cuda n a os ubuntu gcc version gcc clang version tags release final cmake version version libc version glibc python version cuda version build cuda compiler cudnn version nvidia driver version n a nvidia driver list n a
| 0
|
131,837
| 10,719,569,663
|
IssuesEvent
|
2019-10-26 11:04:11
|
d-r-q/qbit
|
https://api.github.com/repos/d-r-q/qbit
|
opened
|
Fuzzer
|
enhancement research tests
|
Implement a fuzzer for qbit. It's a test that generates a random "program" (a set of inserts, updates, deletes and selects), "executes" it, and tries to evaluate whether it gets the expected results
|
1.0
|
Fuzzer - Implement a fuzzer for qbit. It's a test that generates a random "program" (a set of inserts, updates, deletes and selects), "executes" it, and tries to evaluate whether it gets the expected results
|
test
|
fuzzer implement fuzzer for qbit it s a test that generates random program set of inserts updates deletes and selects executes it and tries to evaluate if it get expected results
| 1
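The generate-execute-check loop described above can be sketched generically. qbit is a Kotlin project, so this Python sketch is only an illustration of the idea: the `sut` dict is a stand-in for the real storage under test, and the second dict is the trivial reference model (oracle) used to evaluate the selects.

```python
import random

def run_fuzz(seed, steps=200):
    rng = random.Random(seed)  # seeded so failing programs are replayable
    sut = {}    # stand-in for the system under test (the real store)
    model = {}  # trivial reference model used as the oracle
    for _ in range(steps):
        op = rng.choice(["insert", "update", "delete", "select"])
        key = rng.randrange(10)
        if op in ("insert", "update"):
            value = rng.randrange(1000)
            sut[key] = value
            model[key] = value
        elif op == "delete":
            sut.pop(key, None)
            model.pop(key, None)
        else:  # select: compare the observed result against the oracle
            assert sut.get(key) == model.get(key), (seed, op, key)
    return sut == model

print(run_fuzz(42))
```

Because both sides here are dicts the checks trivially pass; the value comes from swapping the real storage in on the `sut` side and logging the seed so any failing random program can be replayed.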
|
239,872
| 19,974,918,958
|
IssuesEvent
|
2022-01-29 00:52:02
|
nasa/osal
|
https://api.github.com/repos/nasa/osal
|
opened
|
select-test failed, non-repeatable
|
unit-test
|
**Is your feature request related to a problem? Please describe.**
First time I recall seeing it on Caelum+ osal; unfortunately it was a batch run, so details are not available:
```
64/119 Test #64: select-test .............................***Failed 3.20 sec
```
**Describe the solution you'd like**
Investigate, probably needs many runs since it's frequently tested and I haven't seen it fail.
**Describe alternatives you've considered**
None
**Additional context**
OSAL git hash that failed: e3d2f4c1c1e455dc7b5b42a7776451d3a1853c1a, which is a commit past v6.0.0-rc4+dev29
**Requester Info**
Jacob Hageman - NASA/GSFC
|
1.0
|
select-test failed, non-repeatable - **Is your feature request related to a problem? Please describe.**
First time I recall seeing it on Caelum+ osal; unfortunately it was a batch run, so details are not available:
```
64/119 Test #64: select-test .............................***Failed 3.20 sec
```
**Describe the solution you'd like**
Investigate, probably needs many runs since it's frequently tested and I haven't seen it fail.
**Describe alternatives you've considered**
None
**Additional context**
OSAL git hash that failed: e3d2f4c1c1e455dc7b5b42a7776451d3a1853c1a, which is a commit past v6.0.0-rc4+dev29
**Requester Info**
Jacob Hageman - NASA/GSFC
|
test
|
select test failed non repeatable is your feature request related to a problem please describe first time i recall seeing it on caelum osal unfortunately it was a batch run so details not available test select test failed sec describe the solution you d like investigate probably needs many runs since it s frequently tested and i haven t seen it fail describe alternatives you ve considered none additional context osal git hash that failed which is a commit past requester info jacob hageman nasa gsfc
| 1
|
136,721
| 11,081,453,590
|
IssuesEvent
|
2019-12-13 09:50:15
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows failure
|
:ml >test-failure
|
Example failure: https://gradle-enterprise.elastic.co/s/uohtdst7yu7lk
Started occurring after https://github.com/elastic/elasticsearch/pull/50040 got merged.
Stack trace:
```
org.elasticsearch.xpack.ml.integration.ClassificationIT > testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows FAILED |
-- | --
| java.lang.AssertionError: |
| Expected: <1> |
| but: was <0> |
| at __randomizedtesting.SeedInfo.seed([6B678FC96FDF9BF5:7F732A40232997FC]:0) |
| at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) |
| at org.junit.Assert.assertThat(Assert.java:956) |
| at org.junit.Assert.assertThat(Assert.java:923) |
| at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.assertModelStatePersisted(MlNativeDataFrameAnalyticsIntegTestCase.java:282) |
| at org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows(ClassificationIT.java:98)
```
Fails in https://github.com/elastic/elasticsearch/blob/68c739fff3739afc1d610855e9b3f95a9ab0ccf2/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeDataFrameAnalyticsIntegTestCase.java#L282
returning 0 hits.
I can reproduce it locally with:
```
./gradlew ':x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner' --tests "org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows" -Dtests.seed=6B678FC96FDF9BF5 -Dtests.security.manager=true -Dbuild.snapshot=false -Dtests.jvm.argline="-Dbuild.snapshot=false" -Dtests.locale=fi-FI -Dtests.timezone=Europe/Isle_of_Man -Dcompiler.java=13 -Dlicense.key=<path_to_release_key>
```
|
1.0
|
[CI] org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows failure - Example failure: https://gradle-enterprise.elastic.co/s/uohtdst7yu7lk
Started occurring after https://github.com/elastic/elasticsearch/pull/50040 got merged.
Stack trace:
```
org.elasticsearch.xpack.ml.integration.ClassificationIT > testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows FAILED |
-- | --
| java.lang.AssertionError: |
| Expected: <1> |
| but: was <0> |
| at __randomizedtesting.SeedInfo.seed([6B678FC96FDF9BF5:7F732A40232997FC]:0) |
| at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) |
| at org.junit.Assert.assertThat(Assert.java:956) |
| at org.junit.Assert.assertThat(Assert.java:923) |
| at org.elasticsearch.xpack.ml.integration.MlNativeDataFrameAnalyticsIntegTestCase.assertModelStatePersisted(MlNativeDataFrameAnalyticsIntegTestCase.java:282) |
| at org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows(ClassificationIT.java:98)
```
Fails in https://github.com/elastic/elasticsearch/blob/68c739fff3739afc1d610855e9b3f95a9ab0ccf2/x-pack/plugin/ml/qa/native-multi-node-tests/src/test/java/org/elasticsearch/xpack/ml/integration/MlNativeDataFrameAnalyticsIntegTestCase.java#L282
returning 0 hits.
I can reproduce it locally with:
```
./gradlew ':x-pack:plugin:ml:qa:native-multi-node-tests:integTestRunner' --tests "org.elasticsearch.xpack.ml.integration.ClassificationIT.testSingleNumericFeatureAndMixedTrainingAndNonTrainingRows" -Dtests.seed=6B678FC96FDF9BF5 -Dtests.security.manager=true -Dbuild.snapshot=false -Dtests.jvm.argline="-Dbuild.snapshot=false" -Dtests.locale=fi-FI -Dtests.timezone=Europe/Isle_of_Man -Dcompiler.java=13 -Dlicense.key=<path_to_release_key>
```
|
test
|
org elasticsearch xpack ml integration classificationit testsinglenumericfeatureandmixedtrainingandnontrainingrows failure example failure started occurring after got merged stack trace org elasticsearch xpack ml integration classificationit testsinglenumericfeatureandmixedtrainingandnontrainingrows failed java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch xpack ml integration mlnativedataframeanalyticsintegtestcase assertmodelstatepersisted mlnativedataframeanalyticsintegtestcase java at org elasticsearch xpack ml integration classificationit testsinglenumericfeatureandmixedtrainingandnontrainingrows classificationit java fails in returning hits i can reproduce it locally with gradlew x pack plugin ml qa native multi node tests integtestrunner tests org elasticsearch xpack ml integration classificationit testsinglenumericfeatureandmixedtrainingandnontrainingrows dtests seed dtests security manager true dbuild snapshot false dtests jvm argline dbuild snapshot false dtests locale fi fi dtests timezone europe isle of man dcompiler java dlicense key
| 1
|
56,820
| 6,529,182,334
|
IssuesEvent
|
2017-08-30 10:30:00
|
DEIB-GECO/GMQL
|
https://api.github.com/repos/DEIB-GECO/GMQL
|
closed
|
MERGE error when merging several samples
|
test Urgent
|
In the following query MERGE is applied on a dataset of 89 samples and gives the below error (with a lower number of samples it works):
Data = SELECT(Assay == "ChIP-seq" AND Biosample_term_name == "H1-hESC" AND Output_type == "peaks") HG19_ENCODE_NARROW_MAY_2017;
Merged = MERGE() Data;
MATERIALIZE Merged INTO Merged;
logjob_test_merge_guest_new671_20170801_180015
....
2017-08-01 18:42:43,584 ERROR [GMQLSparkExecutor] Job aborted due to stage failure: ResultStage 14 (saveAsHadoopDataset at writeMultiOutputFiles.scala:86) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 7
at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convert
|
1.0
|
MERGE error when merging several samples - In the following query MERGE is applied on a dataset of 89 samples and gives the below error (with lower number of samples it works):
Data = SELECT(Assay == "ChIP-seq" AND Biosample_term_name == "H1-hESC" AND Output_type == "peaks") HG19_ENCODE_NARROW_MAY_2017;
Merged = MERGE() Data;
MATERIALIZE Merged INTO Merged;
logjob_test_merge_guest_new671_20170801_180015
....
2017-08-01 18:42:43,584 ERROR [GMQLSparkExecutor] Job aborted due to stage failure: ResultStage 14 (saveAsHadoopDataset at writeMultiOutputFiles.scala:86) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 7
at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convert
|
test
|
merge error when merging several samples in the following query merge is applied on a dataset of samples and gives the below error with lower number of samples it works data select assay chip seq and biosample term name hesc and output type peaks encode narrow may merged merge data materialize merged into merged logjob test merge guest error job aborted due to stage failure resultstage saveashadoopdataset at writemultioutputfiles scala has failed the maximum allowable number of times most recent failure reason org apache spark shuffle metadatafetchfailedexception missing an output location for shuffle at org apache spark mapoutputtracker anonfun org apache spark mapoutputtracker convert
| 1
|
---
Unnamed: 0: 291,830
id: 25,178,317,674
type: IssuesEvent
created_at: 2022-11-11 11:17:35
repo: IntellectualSites/PlotSquared
repo_url: https://api.github.com/repos/IntellectualSites/PlotSquared
action: closed
title: PlotSquared says you are not on a plot
labels: Requires Testing
body:
### Server Implementation
Spigot
### Server Version
1.18.2
### Describe the bug
I created a plot world, and when I run /p claim the server says "you are not at a plot". What can I do?
### To Reproduce
I cannot say
### Expected behaviour
I cannot say
### Screenshots / Videos
There are none
### Error log (if applicable)
_No response_
### Plot Debugpaste
/Plot debugpaste
### PlotSquared Version
6.10.1 Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
No
label: 1.0
text_combine:
PlotSquared says you are not on a plot - ### Server Implementation
Spigot
### Server Version
1.18.2
### Describe the bug
I created a plot world, and when I run /p claim the server says "you are not at a plot". What can I do?
### To Reproduce
I cannot say
### Expected behaviour
I cannot say
### Screenshots / Videos
There are none
### Error log (if applicable)
_No response_
### Plot Debugpaste
/Plot debugpaste
### PlotSquared Version
6.10.1 Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
No
index: test
text:
plotsquared says you are not on a plot server implementation spigot server version describe the bug i created a plot world and when i run p claim the server says you are not at a plot what can i do to reproduce i cannot say expected behaviour i cannot say screenshots videos there are none error log if applicable no response plot debugpaste plot debugpaste plotsquared version premium checklist i have included a plot debugpaste i am using the newest build from and the issue still persists anything else no
binary_label: 1
---
Unnamed: 0: 351,871
id: 32,032,971,098
type: IssuesEvent
created_at: 2023-09-22 13:33:02
repo: eclipse-openj9/openj9
repo_url: https://api.github.com/repos/eclipse-openj9/openj9
action: opened
title: VirtualThread states changed in jdk22
labels: test failure jdk22
body:
Failure links
------------
The values were changed in https://github.com/ibmruntimes/openj9-openjdk-jdk/commit/ceb174ba8004f6361a307f6d599d786eef9307c7.
* https://openj9-jenkins.osuosl.org/job/Test_openjdknext_j9_sanity.functional_s390x_linux_Personal_testList_0/40/consoleText
* https://openj9-jenkins.osuosl.org/job/Test_openjdknext_j9_sanity.functional_s390x_linux_Personal_testList_1/40/consoleText
Failure output (captured from console output)
---------------------------------------------
```
[2023-09-22T05:48:47.201Z] FAILED: test_verifyJVMTIMacros
[2023-09-22T05:48:47.201Z] java.lang.AssertionError: JVMTI_VTHREAD_STATE_YIELDING (7) does not match VirtualThread.YIELDING (10)
[2023-09-22T05:48:47.201Z] at org.testng.Assert.fail(Assert.java:96)
[2023-09-22T05:48:47.201Z] at org.openj9.test.jep425.VirtualThreadTests.test_verifyJVMTIMacros(VirtualThreadTests.java:321)
[2023-09-22T05:48:47.201Z] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[2023-09-22T05:48:47.201Z] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
```
label: 1.0
text_combine:
VirtualThread states changed in jdk22 - Failure links
------------
The values were changed in https://github.com/ibmruntimes/openj9-openjdk-jdk/commit/ceb174ba8004f6361a307f6d599d786eef9307c7.
* https://openj9-jenkins.osuosl.org/job/Test_openjdknext_j9_sanity.functional_s390x_linux_Personal_testList_0/40/consoleText
* https://openj9-jenkins.osuosl.org/job/Test_openjdknext_j9_sanity.functional_s390x_linux_Personal_testList_1/40/consoleText
Failure output (captured from console output)
---------------------------------------------
```
[2023-09-22T05:48:47.201Z] FAILED: test_verifyJVMTIMacros
[2023-09-22T05:48:47.201Z] java.lang.AssertionError: JVMTI_VTHREAD_STATE_YIELDING (7) does not match VirtualThread.YIELDING (10)
[2023-09-22T05:48:47.201Z] at org.testng.Assert.fail(Assert.java:96)
[2023-09-22T05:48:47.201Z] at org.openj9.test.jep425.VirtualThreadTests.test_verifyJVMTIMacros(VirtualThreadTests.java:321)
[2023-09-22T05:48:47.201Z] at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
[2023-09-22T05:48:47.201Z] at java.base/java.lang.reflect.Method.invoke(Method.java:580)
```
index: test
text:
virtualthread states changed in failure links the values were changed in failure output captured from console output failed test verifyjvmtimacros java lang assertionerror jvmti vthread state yielding does not match virtualthread yielding at org testng assert fail assert java at org test virtualthreadtests test verifyjvmtimacros virtualthreadtests java at java base jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java base java lang reflect method invoke method java
binary_label: 1