| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19–19) | repo (stringlengths, 4–112) | repo_url (stringlengths, 33–141) | action (stringclasses, 3 values) | title (stringlengths, 1–1.02k) | labels (stringlengths, 4–1.54k) | body (stringlengths, 1–262k) | index (stringclasses, 17 values) | text_combine (stringlengths, 95–262k) | label (stringclasses, 2 values) | text (stringlengths, 96–252k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
145,649
| 11,701,393,689
|
IssuesEvent
|
2020-03-06 19:35:14
|
pierre-ernst/snyk-github-issue-creator
|
https://api.github.com/repos/pierre-ernst/snyk-github-issue-creator
|
closed
|
goof - Arbitrary Code Execution in ejs 0.8.8
|
snyk test
|
This issue has been created automatically by a source code scanner
SNYKUID:npm:ejs:20161128
## Third party component with known security vulnerabilities
Introduced to goof through:
* goof > ejs-locals@1.0.2 > ejs@0.8.8
* goof > ejs@1.0.0
## Overview
[`ejs`](https://www.npmjs.com/package/ejs) is a popular JavaScript templating engine.
Affected versions of the package are vulnerable to _Remote Code Execution_ by letting the attacker under certain conditions control the source folder from which the engine renders include files.
You can read more about this vulnerability on the [Snyk blog](https://snyk.io/blog/fixing-ejs-rce-vuln).
There are also [Cross-site Scripting](https://snyk.io/vuln/npm:ejs:20161130) and [Denial of Service](https://snyk.io/vuln/npm:ejs:20161130-1) vulnerabilities caused by the same behaviour.
## Details
`ejs` provides a few different options for rendering a template, two of which are very similar: `ejs.render()` and `ejs.renderFile()`. The only difference is that `render` expects a template string, while `renderFile` expects a path to a template file.
Both functions can be invoked in two ways. The first is calling them with `template`, `data`, and `options`:
```js
ejs.render(str, data, options);
ejs.renderFile(filename, data, options, callback);
```
The second is to pass only the `template` and `data`, with `ejs` reading the `options` from the `data` object:
```js
ejs.render(str, dataAndOptions);
ejs.renderFile(filename, dataAndOptions, callback);
```
If used with a variable list supplied by the user (e.g. by reading it from the URI with `qs` or equivalent), an attacker can control `ejs` options. This includes the `root` option, which allows changing the project root for includes with an absolute path.
```js
ejs.renderFile('my-template', {root:'/bad/root/'}, callback);
```
By passing the `root` option as in the line above, any includes are now pulled from `/bad/root` instead of the intended path. This lets an attacker point the root directory for included scripts at a library under their control, leading to remote code execution.
The [fix](https://github.com/mde/ejs/commit/3d447c5a335844b25faec04b1132dbc721f9c8f6) introduced in version `2.5.3` blacklisted `root` options from options passed via the `data` object.
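The safest pattern, then, is to never forward user-controlled objects as `dataAndOptions`. A minimal sketch of such a guard follows; the `sanitizeTemplateData` helper and its key list are illustrative assumptions, not part of `ejs`:

```js
// Illustrative guard, not part of ejs: drop keys that ejs would
// interpret as engine options when data doubles as options.
const EJS_OPTION_KEYS = ['root', 'views', 'filename', 'delimiter', 'compileDebug', 'client'];

function sanitizeTemplateData(userData) {
  const safe = {};
  for (const key of Object.keys(userData)) {
    if (!EJS_OPTION_KEYS.includes(key)) {
      safe[key] = userData[key];
    }
  }
  return safe;
}

// Attacker-controlled payload trying to smuggle the `root` option:
const payload = { name: 'alice', root: '/bad/root/' };
console.log(sanitizeTemplateData(payload)); // { name: 'alice' }
```

With the option keys stripped before the call, a `root: '/bad/root/'` entry in a request payload can no longer redirect includes.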
## Disclosure Timeline
- November 27th, 2016 - Reported the issue to package owner.
- November 27th, 2016 - Issue acknowledged by package owner.
- November 28th, 2016 - Issue fixed and version `2.5.3` released.
## Remediation
The vulnerability can be resolved by either using the GitHub integration to [generate a pull-request](https://snyk.io/org/projects) from your dashboard or by running `snyk wizard` from the command-line interface.
Otherwise, upgrade `ejs` to version `2.5.3` or higher.
## References
- [Snyk Blog](https://snyk.io/blog/fixing-ejs-rce-vuln)
- [Fix commit](https://github.com/mde/ejs/commit/3d447c5a335844b25faec04b1132dbc721f9c8f6)
- [npm:ejs:20161128](https://snyk.io/vuln/npm:ejs:20161128)
|
1.0
|
goof - Arbitrary Code Execution in ejs 0.8.8 - This issue has been created automatically by a source code scanner
SNYKUID:npm:ejs:20161128
## Third party component with known security vulnerabilities
Introduced to goof through:
* goof > ejs-locals@1.0.2 > ejs@0.8.8
* goof > ejs@1.0.0
## Overview
[`ejs`](https://www.npmjs.com/package/ejs) is a popular JavaScript templating engine.
Affected versions of the package are vulnerable to _Remote Code Execution_ by letting the attacker under certain conditions control the source folder from which the engine renders include files.
You can read more about this vulnerability on the [Snyk blog](https://snyk.io/blog/fixing-ejs-rce-vuln).
There are also [Cross-site Scripting](https://snyk.io/vuln/npm:ejs:20161130) and [Denial of Service](https://snyk.io/vuln/npm:ejs:20161130-1) vulnerabilities caused by the same behaviour.
## Details
`ejs` provides a few different options for rendering a template, two of which are very similar: `ejs.render()` and `ejs.renderFile()`. The only difference is that `render` expects a template string, while `renderFile` expects a path to a template file.
Both functions can be invoked in two ways. The first is calling them with `template`, `data`, and `options`:
```js
ejs.render(str, data, options);
ejs.renderFile(filename, data, options, callback);
```
The second is to pass only the `template` and `data`, with `ejs` reading the `options` from the `data` object:
```js
ejs.render(str, dataAndOptions);
ejs.renderFile(filename, dataAndOptions, callback);
```
If used with a variable list supplied by the user (e.g. by reading it from the URI with `qs` or equivalent), an attacker can control `ejs` options. This includes the `root` option, which allows changing the project root for includes with an absolute path.
```js
ejs.renderFile('my-template', {root:'/bad/root/'}, callback);
```
By passing the `root` option as in the line above, any includes are now pulled from `/bad/root` instead of the intended path. This lets an attacker point the root directory for included scripts at a library under their control, leading to remote code execution.
The [fix](https://github.com/mde/ejs/commit/3d447c5a335844b25faec04b1132dbc721f9c8f6) introduced in version `2.5.3` blacklisted `root` options from options passed via the `data` object.
## Disclosure Timeline
- November 27th, 2016 - Reported the issue to package owner.
- November 27th, 2016 - Issue acknowledged by package owner.
- November 28th, 2016 - Issue fixed and version `2.5.3` released.
## Remediation
The vulnerability can be resolved by either using the GitHub integration to [generate a pull-request](https://snyk.io/org/projects) from your dashboard or by running `snyk wizard` from the command-line interface.
Otherwise, upgrade `ejs` to version `2.5.3` or higher.
## References
- [Snyk Blog](https://snyk.io/blog/fixing-ejs-rce-vuln)
- [Fix commit](https://github.com/mde/ejs/commit/3d447c5a335844b25faec04b1132dbc721f9c8f6)
- [npm:ejs:20161128](https://snyk.io/vuln/npm:ejs:20161128)
|
test
|
goof arbitrary code execution in ejs this issue has been created automatically by a source code scanner snykuid npm ejs third party component with known security vulnerabilities introduced to goof through goof ejs locals ejs goof ejs overview is a popular javascript templating engine affected versions of the package are vulnerable to remote code execution by letting the attacker under certain conditions control the source folder from which the engine renders include files you can read more about this vulnerability on the there s also a vulnerabilities caused by the same behaviour details ejs provides a few different options for you to render a template two being very similar ejs render and ejs renderfile the only difference being that render expects a string to be used for the template and renderfile expects a path to a template file both functions can be invoked in two ways the first is calling them with template data and options js ejs render str data options ejs renderfile filename data options callback the second way would be by calling only the template and data while ejs lets the options be passed as part of the data js ejs render str dataandoptions ejs renderfile filename dataandoptions callback if used with a variable list supplied by the user e g by reading it from the uri with qs or equivalent an attacker can control ejs options this includes the root option which allows changing the project root for includes with an absolute path js ejs renderfile my template root bad root callback by passing along the root directive in the line above any includes would now be pulled from bad root instead of the path intended this allows the attacker to take control of the root directory for included scripts and divert it to a library under his control thus leading to remote code execution the introduced in version blacklisted root options from options passed via the data object disclosure timeline november reported the issue to package owner november issue acknowledged 
by package owner november issue fixed and version released remediation the vulnerability can be resolved by either using the github integration to from your dashboard or by running snyk wizard from the command line interface otherwise upgrade ejs to version or higher references
| 1
|
169,756
| 13,159,796,529
|
IssuesEvent
|
2020-08-10 16:26:13
|
nearprotocol/nearcore
|
https://api.github.com/repos/nearprotocol/nearcore
|
closed
|
Transaction-based load testing on mocknet
|
testing
|
Each node in mocknet will repeatedly send many transactions (of various types) to itself. During this time we expect nodes to maintain the block production time and memory usage when not under load.
|
1.0
|
Transaction-based load testing on mocknet - Each node in mocknet will repeatedly send many transactions (of various types) to itself. During this time we expect nodes to maintain the block production time and memory usage when not under load.
|
test
|
transaction based load testing on mocknet each node in mocknet will repeatedly send many transactions of various types to itself during this time we expect nodes to maintain the block production time and memory usage when not under load
| 1
|
250,008
| 21,259,157,028
|
IssuesEvent
|
2022-04-13 00:52:16
|
RamiMustafa/WAF_Sec_Test
|
https://api.github.com/repos/RamiMustafa/WAF_Sec_Test
|
opened
|
Microsoft Defender for SQL on machines should be enabled on workspaces for 1 Microsoft.OperationalInsights/workspaces
|
WARP-Import WAF_Sec_Test Security Azure Advisor
|
<a href="https://aka.ms/azure-advisor-portal">Microsoft Defender for SQL on machines should be enabled on workspaces for 1 Microsoft.OperationalInsights/workspaces</a>
|
1.0
|
Microsoft Defender for SQL on machines should be enabled on workspaces for 1 Microsoft.OperationalInsights/workspaces - <a href="https://aka.ms/azure-advisor-portal">Microsoft Defender for SQL on machines should be enabled on workspaces for 1 Microsoft.OperationalInsights/workspaces</a>
|
test
|
microsoft defender for sql on machines should be enabled on workspaces for microsoft operationalinsights workspaces
| 1
|
34,194
| 4,892,901,805
|
IssuesEvent
|
2016-11-18 21:14:44
|
glensc/nagios-plugin-check_raid
|
https://api.github.com/repos/glensc/nagios-plugin-check_raid
|
closed
|
Unknown status in dual HP raid controller (hpssacli plugin)
|
hpacucli/hpssacli need test data ready
|
HP Apollo 4510 with the following setup:
- Smart Array P840 in Slot 1 (HBA Mode)
Controller Status: OK
68 x 6TB Disks in HBA Mode
- Smart HBA H244br in Slot 0 (Embedded) (RAID Mode)
Controller Status: OK
2 x 800GB Disks in RAID
Running with debug ON produces:
```
# check_raid.pl -d
DEBUG EXEC: /proc/mdstat at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller all show status at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller slot=1 (HBA Mode) logicaldrive all show at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller slot=0 logicaldrive all show at ./raid.pl line 474.
UNKNOWN: hpssacli:[Smart HBA H244br: Array A(OK)[LUN1:OK], Smart Array P840: ]
```
Content of every EXEC:
```
# cat /proc/mdstat
Personalities :
unused devices: <none>
```
```
# /sbin/hpssacli controller slot=1 (HBA Mode) logicaldrive all show
-bash: syntax error near unexpected token `('
```
```
# /sbin/hpssacli controller slot=0 logicaldrive all show
Smart HBA H244br in Slot 0 (Embedded)
array A
logicaldrive 1 (838.3 GB, RAID 1, OK)
```
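The `-bash: syntax error` above comes from the unquoted parentheses in `(HBA Mode)` reaching the shell when the controller description is interpolated into the command. A sketch of stripping the mode suffix before interpolation (variable names are illustrative, not from the plugin):

```shell
# "(HBA Mode)" must not reach the shell unquoted; keep only the slot number.
slot_desc='1 (HBA Mode)'        # as parsed from "Smart Array P840 in Slot 1 (HBA Mode)"
slot_num="${slot_desc%% *}"     # strip everything after the first space -> "1"
echo "/sbin/hpssacli controller slot=${slot_num} logicaldrive all show"
```

With the suffix stripped, the generated command matches the working `slot=0` invocation above.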
|
1.0
|
Unknown status in dual HP raid controller (hpssacli plugin) - HP Apollo 4510 with the following setup:
- Smart Array P840 in Slot 1 (HBA Mode)
Controller Status: OK
68 x 6TB Disks in HBA Mode
- Smart HBA H244br in Slot 0 (Embedded) (RAID Mode)
Controller Status: OK
2 x 800GB Disks in RAID
Running with debug ON produces:
```
# check_raid.pl -d
DEBUG EXEC: /proc/mdstat at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller all show status at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller slot=1 (HBA Mode) logicaldrive all show at ./raid.pl line 474.
DEBUG EXEC: /sbin/hpssacli controller slot=0 logicaldrive all show at ./raid.pl line 474.
UNKNOWN: hpssacli:[Smart HBA H244br: Array A(OK)[LUN1:OK], Smart Array P840: ]
```
Content of every EXEC:
```
# cat /proc/mdstat
Personalities :
unused devices: <none>
```
```
# /sbin/hpssacli controller slot=1 (HBA Mode) logicaldrive all show
-bash: syntax error near unexpected token `('
```
```
# /sbin/hpssacli controller slot=0 logicaldrive all show
Smart HBA H244br in Slot 0 (Embedded)
array A
logicaldrive 1 (838.3 GB, RAID 1, OK)
```
|
test
|
unknown status in dual hp raid controller hpssacli plugin hp apollo with the following setup smart array in slot hba mode controller status ok x disks in hba mode smart hba in slot embedded raid mode controller status ok x disks in raid the execution with debug on results are check raid pl d debug exec proc mdstat at raid pl line debug exec sbin hpssacli controller all show status at raid pl line debug exec sbin hpssacli controller slot hba mode logicaldrive all show at raid pl line debug exec sbin hpssacli controller slot logicaldrive all show at raid pl line unknown hpssacli smart array content of every exec cat proc mdstat personalities unused devices sbin hpssacli controller slot hba mode logicaldrive all show bash syntax error near unexpected token sbin hpssacli controller slot logicaldrive all show smart hba in slot embedded array a logicaldrive gb raid ok
| 1
|
78,028
| 7,613,031,519
|
IssuesEvent
|
2018-05-01 19:43:09
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed tests on master: acceptance/TestDockerC
|
C-test-failure O-robot
|
The following tests appear to have failed:
[#633295](https://teamcity.cockroachdb.com/viewLog.html?buildId=633295):
```
--- FAIL: acceptance/TestDockerC/Success/runMode=docker (0.000s)
Test ended in panic.
------- Stdout: -------
xenial-20170214: Pulling from library/ubuntu
d54efb8db41d: Pulling fs layer
f8b845f45a87: Pulling fs layer
e8db7bf7c39f: Pulling fs layer
9654c40e9079: Pulling fs layer
6d9ef359eaaa: Pulling fs layer
9654c40e9079: Waiting
6d9ef359eaaa: Waiting
f8b845f45a87: Download complete
6d9ef359eaaa: Download complete
9654c40e9079: Verifying Checksum
9654c40e9079: Download complete
d54efb8db41d: Verifying Checksum
d54efb8db41d: Download complete
d54efb8db41d: Pull complete
f8b845f45a87: Pull complete
e8db7bf7c39f: Pull complete
9654c40e9079: Pull complete
6d9ef359eaaa: Pull complete
Digest: sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535
Status: Downloaded newer image for ubuntu:xenial-20170214
Cluster successfully initialized
20180416-090309: Pulling from cockroachdb/acceptance
0684026fc261: Pulling fs layer
9ad750f7aa72: Pulling fs layer
fb7d47255ebc: Pulling fs layer
bbbf55c113f3: Pulling fs layer
2abcb5a4809b: Pulling fs layer
c21ddd74d96a: Pulling fs layer
859d8893d3e6: Pulling fs layer
c66ee29d0fd1: Pulling fs layer
8bbad7b26462: Pulling fs layer
779d0e19370b: Pulling fs layer
7eb023140197: Pulling fs layer
4189e975b671: Pulling fs layer
2d918979ee37: Pulling fs layer
c16accc04fc3: Pulling fs layer
34f9e4e61086: Pulling fs layer
859d8893d3e6: Waiting
c66ee29d0fd1: Waiting
bbbf55c113f3: Waiting
8bbad7b26462: Waiting
779d0e19370b: Waiting
34f9e4e61086: Waiting
c16accc04fc3: Waiting
7eb023140197: Waiting
2abcb5a4809b: Waiting
2d918979ee37: Waiting
c21ddd74d96a: Waiting
9ad750f7aa72: Verifying Checksum
9ad750f7aa72: Download complete
fb7d47255ebc: Download complete
bbbf55c113f3: Download complete
2abcb5a4809b: Download complete
0684026fc261: Verifying Checksum
0684026fc261: Download complete
c21ddd74d96a: Verifying Checksum
c21ddd74d96a: Download complete
859d8893d3e6: Verifying Checksum
859d8893d3e6: Download complete
779d0e19370b: Verifying Checksum
779d0e19370b: Download complete
7eb023140197: Download complete
0684026fc261: Pull complete
9ad750f7aa72: Pull complete
fb7d47255ebc: Pull complete
bbbf55c113f3: Pull complete
4189e975b671: Verifying Checksum
4189e975b671: Download complete
2abcb5a4809b: Pull complete
c21ddd74d96a: Pull complete
859d8893d3e6: Pull complete
8bbad7b26462: Verifying Checksum
8bbad7b26462: Download complete
c66ee29d0fd1: Verifying Checksum
c66ee29d0fd1: Download complete
34f9e4e61086: Download complete
c16accc04fc3: Verifying Checksum
c16accc04fc3: Download complete
c66ee29d0fd1: Pull complete
8bbad7b26462: Pull complete
779d0e19370b: Pull complete
7eb023140197: Pull complete
4189e975b671: Pull complete
panic: test timed out after 30m0s
goroutine 71 [running]:
testing.(*M).startAlarm.func1()
/usr/local/go/src/testing/testing.go:1240 +0xfc
created by time.goFunc
/usr/local/go/src/time/sleep.go:172 +0x44
goroutine 1 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bea50, 0x1d8bcec, 0xb, 0x1e51288, 0x69a386)
/usr/local/go/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc4201be960)
/usr/local/go/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc4201be960, 0xc4207e3d88)
/usr/local/go/src/testing/testing.go:777 +0xd0
testing.runTests(0xc4202ee000, 0x2be2040, 0x1c, 0x1c, 0x6)
/usr/local/go/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc42003a380, 0x0)
/usr/local/go/src/testing/testing.go:978 +0x171
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests(0xc42003a380, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:58 +0xa6
github.com/cockroachdb/cockroach/pkg/acceptance.MainTest(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:35 +0x2b
github.com/cockroachdb/cockroach/pkg/acceptance.TestMain(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/main_test.go:22 +0x2b
main.main()
_testmain.go:94 +0x151
goroutine 6 [syscall, 30 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 52 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1158 +0xf1
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:591 +0x110
goroutine 53 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:598 +0xab
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:592 +0x128
goroutine 10 [select, 30 minutes, locked to thread]:
runtime.gopark(0x1e55420, 0x0, 0x1d84733, 0x6, 0x18, 0x1)
/usr/local/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42048ff50, 0xc4201b6060)
/usr/local/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 146 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 54 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests.func1(0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:49 +0xa0
created by github.com/cockroachdb/cockroach/pkg/acceptance.RunTests
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:45 +0x98
goroutine 56 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201beb40, 0x1d85488, 0x7, 0xc4202ee060, 0xc4201bea50)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC(0xc4201bea50)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:30 +0xea
testing.tRunner(0xc4201bea50, 0x1e51288)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 57 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bec30, 0x1d9090e, 0xe, 0xc4202d4600, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.RunDocker(0xc4201beb40, 0xc4202d4600)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_cluster.go:47 +0x4a
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:72 +0x114
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSingleNode(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:107 +0xa0
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSuccess(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0xc4208781b0, 0x3, 0x3)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:57 +0xea
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC.func1(0xc4201beb40)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:31 +0xc4
testing.tRunner(0xc4201beb40, 0xc4202ee060)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 58 [IO wait]:
internal/poll.runtime_pollWait(0x7fce0317cf00, 0x72, 0xc4207be650)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc42003a418, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc42003a418, 0xc4205d4000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x46)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0048, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4200e8b40, 0xc4205d4000, 0x1000, 0x1000, 0xc42087c040, 0x36, 0xc420402090)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc4205a4660)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc4205a4660, 0x1d1fe0a, 0xc42081e090, 0x199, 0x0, 0xc42087c040, 0x198)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc4205a4660, 0x2, 0x556, 0xc420402088, 0x7, 0xc420402060)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc420857920)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc420857920, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207be9e0, 0x6368f9, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207bebd0, 0xdc6391, 0xc4200f2cc0)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc420844cc0, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4209c05a0, 0xc4207beb0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4209c05a0, 0x0, 0x0, 0x1d1fec0)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4209c05a0, 0x1b42880, 0xc420446990, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage.DisplayJSONMessagesStream(0x1fe8600, 0xc420844cc0, 0x1fe88a0, 0xc42000e020, 0x2, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage/jsonmessage.go:253 +0x141
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.pullImage(0x2004180, 0xc4200420c0, 0xc420902000, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/docker.go:146 +0x386
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).OneShot(0xc420902000, 0x2004180, 0xc4200420c0, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:231 +0xa2
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker.func1(0xc4201bec30)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:95 +0x5ea
testing.tRunner(0xc4201bec30, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 60 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 61 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 14 [IO wait, 29 minutes]:
internal/poll.runtime_pollWait(0x7fce0317ce30, 0x72, 0xc420965798)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420584298, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420584298, 0xc420098000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0xc42044707d, 0x5)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0030, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4201a8000, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0x1a15e7d, 0x5)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc42025a720)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc42025a720, 0x1d0e30a, 0xc420446fc0, 0x199, 0x0, 0x0, 0x186)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc42025a720, 0x8, 0x432, 0xa, 0x433, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc4205c28d0)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc4205c28d0, 0xc42026a601, 0x5ff, 0x5ff, 0xc420965b28, 0x6368f9, 0xc400000008)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0xc4203b31a0, 0xc420965ef0, 0xc420965ca8)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc4203f0380, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4201c41e0, 0xc420965e0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4201c41e0, 0x0, 0x0, 0x1d0e380)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4201c41e0, 0x1a94920, 0xc420576000, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events.func1(0xc42025a180, 0xc42003a180, 0x0, 0x0, 0x0, 0x0, 0xc4205faf30, 0xc4205ca060, 0x2004140, 0xc4203f0100, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:54 +0x2da
created by github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:26 +0x11d
goroutine 35 [select, 29 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor.func1(0x1)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:623 +0x2bd
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor(0xc420902000, 0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:643 +0x7b
created by github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).Start
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:660 +0x24a
goroutine 113 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
goroutine 16 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 66 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 111 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 110 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
--- FAIL: acceptance/TestDockerC (0.000s)
Test ended in panic.
--- FAIL: acceptance/TestDockerC/Success (0.000s)
Test ended in panic.
--- FAIL: acceptance/TestDockerC/Success/runMode=docker (0.000s)
Test ended in panic.
------- Stdout: -------
xenial-20170214: Pulling from library/ubuntu
d54efb8db41d: Pulling fs layer
f8b845f45a87: Pulling fs layer
e8db7bf7c39f: Pulling fs layer
9654c40e9079: Pulling fs layer
6d9ef359eaaa: Pulling fs layer
9654c40e9079: Waiting
6d9ef359eaaa: Waiting
f8b845f45a87: Download complete
6d9ef359eaaa: Download complete
9654c40e9079: Verifying Checksum
9654c40e9079: Download complete
d54efb8db41d: Verifying Checksum
d54efb8db41d: Download complete
d54efb8db41d: Pull complete
f8b845f45a87: Pull complete
e8db7bf7c39f: Pull complete
9654c40e9079: Pull complete
6d9ef359eaaa: Pull complete
Digest: sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535
Status: Downloaded newer image for ubuntu:xenial-20170214
Cluster successfully initialized
20180416-090309: Pulling from cockroachdb/acceptance
0684026fc261: Pulling fs layer
9ad750f7aa72: Pulling fs layer
fb7d47255ebc: Pulling fs layer
bbbf55c113f3: Pulling fs layer
2abcb5a4809b: Pulling fs layer
c21ddd74d96a: Pulling fs layer
859d8893d3e6: Pulling fs layer
c66ee29d0fd1: Pulling fs layer
8bbad7b26462: Pulling fs layer
779d0e19370b: Pulling fs layer
7eb023140197: Pulling fs layer
4189e975b671: Pulling fs layer
2d918979ee37: Pulling fs layer
c16accc04fc3: Pulling fs layer
34f9e4e61086: Pulling fs layer
859d8893d3e6: Waiting
c66ee29d0fd1: Waiting
bbbf55c113f3: Waiting
8bbad7b26462: Waiting
779d0e19370b: Waiting
34f9e4e61086: Waiting
c16accc04fc3: Waiting
7eb023140197: Waiting
2abcb5a4809b: Waiting
2d918979ee37: Waiting
c21ddd74d96a: Waiting
9ad750f7aa72: Verifying Checksum
9ad750f7aa72: Download complete
fb7d47255ebc: Download complete
bbbf55c113f3: Download complete
2abcb5a4809b: Download complete
0684026fc261: Verifying Checksum
0684026fc261: Download complete
c21ddd74d96a: Verifying Checksum
c21ddd74d96a: Download complete
859d8893d3e6: Verifying Checksum
859d8893d3e6: Download complete
779d0e19370b: Verifying Checksum
779d0e19370b: Download complete
7eb023140197: Download complete
0684026fc261: Pull complete
9ad750f7aa72: Pull complete
fb7d47255ebc: Pull complete
bbbf55c113f3: Pull complete
4189e975b671: Verifying Checksum
4189e975b671: Download complete
2abcb5a4809b: Pull complete
c21ddd74d96a: Pull complete
859d8893d3e6: Pull complete
8bbad7b26462: Verifying Checksum
8bbad7b26462: Download complete
c66ee29d0fd1: Verifying Checksum
c66ee29d0fd1: Download complete
34f9e4e61086: Download complete
c16accc04fc3: Verifying Checksum
c16accc04fc3: Download complete
c66ee29d0fd1: Pull complete
8bbad7b26462: Pull complete
779d0e19370b: Pull complete
7eb023140197: Pull complete
4189e975b671: Pull complete
panic: test timed out after 30m0s
goroutine 71 [running]:
testing.(*M).startAlarm.func1()
/usr/local/go/src/testing/testing.go:1240 +0xfc
created by time.goFunc
/usr/local/go/src/time/sleep.go:172 +0x44
goroutine 1 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bea50, 0x1d8bcec, 0xb, 0x1e51288, 0x69a386)
/usr/local/go/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc4201be960)
/usr/local/go/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc4201be960, 0xc4207e3d88)
/usr/local/go/src/testing/testing.go:777 +0xd0
testing.runTests(0xc4202ee000, 0x2be2040, 0x1c, 0x1c, 0x6)
/usr/local/go/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc42003a380, 0x0)
/usr/local/go/src/testing/testing.go:978 +0x171
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests(0xc42003a380, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:58 +0xa6
github.com/cockroachdb/cockroach/pkg/acceptance.MainTest(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:35 +0x2b
github.com/cockroachdb/cockroach/pkg/acceptance.TestMain(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/main_test.go:22 +0x2b
main.main()
_testmain.go:94 +0x151
goroutine 6 [syscall, 30 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 52 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1158 +0xf1
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:591 +0x110
goroutine 53 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:598 +0xab
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:592 +0x128
goroutine 10 [select, 30 minutes, locked to thread]:
runtime.gopark(0x1e55420, 0x0, 0x1d84733, 0x6, 0x18, 0x1)
/usr/local/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42048ff50, 0xc4201b6060)
/usr/local/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 146 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 54 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests.func1(0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:49 +0xa0
created by github.com/cockroachdb/cockroach/pkg/acceptance.RunTests
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:45 +0x98
goroutine 56 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201beb40, 0x1d85488, 0x7, 0xc4202ee060, 0xc4201bea50)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC(0xc4201bea50)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:30 +0xea
testing.tRunner(0xc4201bea50, 0x1e51288)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 57 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bec30, 0x1d9090e, 0xe, 0xc4202d4600, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.RunDocker(0xc4201beb40, 0xc4202d4600)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_cluster.go:47 +0x4a
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:72 +0x114
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSingleNode(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:107 +0xa0
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSuccess(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0xc4208781b0, 0x3, 0x3)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:57 +0xea
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC.func1(0xc4201beb40)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:31 +0xc4
testing.tRunner(0xc4201beb40, 0xc4202ee060)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 58 [IO wait]:
internal/poll.runtime_pollWait(0x7fce0317cf00, 0x72, 0xc4207be650)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc42003a418, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc42003a418, 0xc4205d4000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x46)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0048, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4200e8b40, 0xc4205d4000, 0x1000, 0x1000, 0xc42087c040, 0x36, 0xc420402090)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc4205a4660)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc4205a4660, 0x1d1fe0a, 0xc42081e090, 0x199, 0x0, 0xc42087c040, 0x198)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc4205a4660, 0x2, 0x556, 0xc420402088, 0x7, 0xc420402060)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc420857920)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc420857920, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207be9e0, 0x6368f9, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207bebd0, 0xdc6391, 0xc4200f2cc0)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc420844cc0, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4209c05a0, 0xc4207beb0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4209c05a0, 0x0, 0x0, 0x1d1fec0)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4209c05a0, 0x1b42880, 0xc420446990, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage.DisplayJSONMessagesStream(0x1fe8600, 0xc420844cc0, 0x1fe88a0, 0xc42000e020, 0x2, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage/jsonmessage.go:253 +0x141
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.pullImage(0x2004180, 0xc4200420c0, 0xc420902000, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/docker.go:146 +0x386
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).OneShot(0xc420902000, 0x2004180, 0xc4200420c0, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:231 +0xa2
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker.func1(0xc4201bec30)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:95 +0x5ea
testing.tRunner(0xc4201bec30, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 60 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 61 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 14 [IO wait, 29 minutes]:
internal/poll.runtime_pollWait(0x7fce0317ce30, 0x72, 0xc420965798)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420584298, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420584298, 0xc420098000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0xc42044707d, 0x5)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0030, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4201a8000, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0x1a15e7d, 0x5)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc42025a720)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc42025a720, 0x1d0e30a, 0xc420446fc0, 0x199, 0x0, 0x0, 0x186)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc42025a720, 0x8, 0x432, 0xa, 0x433, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc4205c28d0)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc4205c28d0, 0xc42026a601, 0x5ff, 0x5ff, 0xc420965b28, 0x6368f9, 0xc400000008)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0xc4203b31a0, 0xc420965ef0, 0xc420965ca8)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc4203f0380, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4201c41e0, 0xc420965e0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4201c41e0, 0x0, 0x0, 0x1d0e380)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4201c41e0, 0x1a94920, 0xc420576000, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events.func1(0xc42025a180, 0xc42003a180, 0x0, 0x0, 0x0, 0x0, 0xc4205faf30, 0xc4205ca060, 0x2004140, 0xc4203f0100, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:54 +0x2da
created by github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:26 +0x11d
goroutine 35 [select, 29 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor.func1(0x1)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:623 +0x2bd
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor(0xc420902000, 0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:643 +0x7b
created by github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).Start
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:660 +0x24a
goroutine 113 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
goroutine 16 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 66 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 111 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 110 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
--- FAIL: acceptance/TestDockerC (0.000s)
Test ended in panic.
--- FAIL: acceptance/TestDockerC/Success (0.000s)
Test ended in panic.
```
Please assign, take a look and update the issue accordingly.
teamcity: failed tests on master: acceptance/TestDockerC - The following tests appear to have failed:
[#633295](https://teamcity.cockroachdb.com/viewLog.html?buildId=633295):
```
--- FAIL: acceptance/TestDockerC/Success/runMode=docker (0.000s)
Test ended in panic.
------- Stdout: -------
xenial-20170214: Pulling from library/ubuntu
d54efb8db41d: Pulling fs layer
f8b845f45a87: Pulling fs layer
e8db7bf7c39f: Pulling fs layer
9654c40e9079: Pulling fs layer
6d9ef359eaaa: Pulling fs layer
9654c40e9079: Waiting
6d9ef359eaaa: Waiting
f8b845f45a87: Download complete
6d9ef359eaaa: Download complete
9654c40e9079: Verifying Checksum
9654c40e9079: Download complete
d54efb8db41d: Verifying Checksum
d54efb8db41d: Download complete
d54efb8db41d: Pull complete
f8b845f45a87: Pull complete
e8db7bf7c39f: Pull complete
9654c40e9079: Pull complete
6d9ef359eaaa: Pull complete
Digest: sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535
Status: Downloaded newer image for ubuntu:xenial-20170214
Cluster successfully initialized
20180416-090309: Pulling from cockroachdb/acceptance
0684026fc261: Pulling fs layer
9ad750f7aa72: Pulling fs layer
fb7d47255ebc: Pulling fs layer
bbbf55c113f3: Pulling fs layer
2abcb5a4809b: Pulling fs layer
c21ddd74d96a: Pulling fs layer
859d8893d3e6: Pulling fs layer
c66ee29d0fd1: Pulling fs layer
8bbad7b26462: Pulling fs layer
779d0e19370b: Pulling fs layer
7eb023140197: Pulling fs layer
4189e975b671: Pulling fs layer
2d918979ee37: Pulling fs layer
c16accc04fc3: Pulling fs layer
34f9e4e61086: Pulling fs layer
859d8893d3e6: Waiting
c66ee29d0fd1: Waiting
bbbf55c113f3: Waiting
8bbad7b26462: Waiting
779d0e19370b: Waiting
34f9e4e61086: Waiting
c16accc04fc3: Waiting
7eb023140197: Waiting
2abcb5a4809b: Waiting
2d918979ee37: Waiting
c21ddd74d96a: Waiting
9ad750f7aa72: Verifying Checksum
9ad750f7aa72: Download complete
fb7d47255ebc: Download complete
bbbf55c113f3: Download complete
2abcb5a4809b: Download complete
0684026fc261: Verifying Checksum
0684026fc261: Download complete
c21ddd74d96a: Verifying Checksum
c21ddd74d96a: Download complete
859d8893d3e6: Verifying Checksum
859d8893d3e6: Download complete
779d0e19370b: Verifying Checksum
779d0e19370b: Download complete
7eb023140197: Download complete
0684026fc261: Pull complete
9ad750f7aa72: Pull complete
fb7d47255ebc: Pull complete
bbbf55c113f3: Pull complete
4189e975b671: Verifying Checksum
4189e975b671: Download complete
2abcb5a4809b: Pull complete
c21ddd74d96a: Pull complete
859d8893d3e6: Pull complete
8bbad7b26462: Verifying Checksum
8bbad7b26462: Download complete
c66ee29d0fd1: Verifying Checksum
c66ee29d0fd1: Download complete
34f9e4e61086: Download complete
c16accc04fc3: Verifying Checksum
c16accc04fc3: Download complete
c66ee29d0fd1: Pull complete
8bbad7b26462: Pull complete
779d0e19370b: Pull complete
7eb023140197: Pull complete
4189e975b671: Pull complete
panic: test timed out after 30m0s
goroutine 71 [running]:
testing.(*M).startAlarm.func1()
/usr/local/go/src/testing/testing.go:1240 +0xfc
created by time.goFunc
/usr/local/go/src/time/sleep.go:172 +0x44
goroutine 1 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bea50, 0x1d8bcec, 0xb, 0x1e51288, 0x69a386)
/usr/local/go/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc4201be960)
/usr/local/go/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc4201be960, 0xc4207e3d88)
/usr/local/go/src/testing/testing.go:777 +0xd0
testing.runTests(0xc4202ee000, 0x2be2040, 0x1c, 0x1c, 0x6)
/usr/local/go/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc42003a380, 0x0)
/usr/local/go/src/testing/testing.go:978 +0x171
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests(0xc42003a380, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:58 +0xa6
github.com/cockroachdb/cockroach/pkg/acceptance.MainTest(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:35 +0x2b
github.com/cockroachdb/cockroach/pkg/acceptance.TestMain(0xc42003a380)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/main_test.go:22 +0x2b
main.main()
_testmain.go:94 +0x151
goroutine 6 [syscall, 30 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 52 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.flushDaemon()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1158 +0xf1
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:591 +0x110
goroutine 53 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/util/log.signalFlusher()
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:598 +0xab
created by github.com/cockroachdb/cockroach/pkg/util/log.init.0
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:592 +0x128
goroutine 10 [select, 30 minutes, locked to thread]:
runtime.gopark(0x1e55420, 0x0, 0x1d84733, 0x6, 0x18, 0x1)
/usr/local/go/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42048ff50, 0xc4201b6060)
/usr/local/go/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 146 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 54 [chan receive, 30 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance.RunTests.func1(0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:49 +0xa0
created by github.com/cockroachdb/cockroach/pkg/acceptance.RunTests
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/test_acceptance.go:45 +0x98
goroutine 56 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201beb40, 0x1d85488, 0x7, 0xc4202ee060, 0xc4201bea50)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC(0xc4201bea50)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:30 +0xea
testing.tRunner(0xc4201bea50, 0x1e51288)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 57 [chan receive, 30 minutes]:
testing.(*T).Run(0xc4201bec30, 0x1d9090e, 0xe, 0xc4202d4600, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:825 +0x301
github.com/cockroachdb/cockroach/pkg/acceptance.RunDocker(0xc4201beb40, 0xc4202d4600)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_cluster.go:47 +0x4a
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:72 +0x114
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSingleNode(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:107 +0xa0
github.com/cockroachdb/cockroach/pkg/acceptance.testDockerSuccess(0x2004180, 0xc4200420c0, 0xc4201beb40, 0x1d7faf1, 0x1, 0xc4208781b0, 0x3, 0x3)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:57 +0xea
github.com/cockroachdb/cockroach/pkg/acceptance.TestDockerC.func1(0xc4201beb40)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/adapter_test.go:31 +0xc4
testing.tRunner(0xc4201beb40, 0xc4202ee060)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 58 [IO wait]:
internal/poll.runtime_pollWait(0x7fce0317cf00, 0x72, 0xc4207be650)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc42003a418, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc42003a418, 0xc4205d4000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc42003a400, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x46)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0048, 0xc4205d4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4200e8b40, 0xc4205d4000, 0x1000, 0x1000, 0xc42087c040, 0x36, 0xc420402090)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc4205a4660)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc4205a4660, 0x1d1fe0a, 0xc42081e090, 0x199, 0x0, 0xc42087c040, 0x198)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc4205a4660, 0x2, 0x556, 0xc420402088, 0x7, 0xc420402060)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc420857920)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc420857920, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207be9e0, 0x6368f9, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0xc4207bebd0, 0xdc6391, 0xc4200f2cc0)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc420844c80, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc420844cc0, 0xc4209a4c02, 0x5fe, 0x5fe, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4209c05a0, 0xc4207beb0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4209c05a0, 0x0, 0x0, 0x1d1fec0)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4209c05a0, 0x1b42880, 0xc420446990, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage.DisplayJSONMessagesStream(0x1fe8600, 0xc420844cc0, 0x1fe88a0, 0xc42000e020, 0x2, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/pkg/jsonmessage/jsonmessage.go:253 +0x141
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.pullImage(0x2004180, 0xc4200420c0, 0xc420902000, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/docker.go:146 +0x386
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).OneShot(0xc420902000, 0x2004180, 0xc4200420c0, 0x1dce035, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:231 +0xa2
github.com/cockroachdb/cockroach/pkg/acceptance.testDocker.func1(0xc4201bec30)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/util_docker.go:95 +0x5ea
testing.tRunner(0xc4201bec30, 0xc4202d4600)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
goroutine 60 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 61 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4200e8b40)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 14 [IO wait, 29 minutes]:
internal/poll.runtime_pollWait(0x7fce0317ce30, 0x72, 0xc420965798)
/usr/local/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420584298, 0x72, 0xffffffffffffff00, 0x1fea160, 0x2bf45f8)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420584298, 0xc420098000, 0x1000, 0x1000)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc420584280, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0xc42044707d, 0x5)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204f0030, 0xc420098000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:176 +0x6a
net/http.(*persistConn).Read(0xc4201a8000, 0xc420098000, 0x1000, 0x1000, 0xc4201c4228, 0x1a15e7d, 0x5)
/usr/local/go/src/net/http/transport.go:1453 +0x136
bufio.(*Reader).fill(0xc42025a720)
/usr/local/go/src/bufio/bufio.go:100 +0x11e
bufio.(*Reader).ReadSlice(0xc42025a720, 0x1d0e30a, 0xc420446fc0, 0x199, 0x0, 0x0, 0x186)
/usr/local/go/src/bufio/bufio.go:341 +0x2c
net/http/internal.readChunkLine(0xc42025a720, 0x8, 0x432, 0xa, 0x433, 0x8)
/usr/local/go/src/net/http/internal/chunked.go:122 +0x34
net/http/internal.(*chunkedReader).beginChunk(0xc4205c28d0)
/usr/local/go/src/net/http/internal/chunked.go:48 +0x32
net/http/internal.(*chunkedReader).Read(0xc4205c28d0, 0xc42026a601, 0x5ff, 0x5ff, 0xc420965b28, 0x6368f9, 0xc400000008)
/usr/local/go/src/net/http/internal/chunked.go:93 +0x113
net/http.(*body).readLocked(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0xc4203b31a0, 0xc420965ef0, 0xc420965ca8)
/usr/local/go/src/net/http/transfer.go:778 +0x61
net/http.(*body).Read(0xc4203f0340, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transfer.go:770 +0xdd
net/http.(*bodyEOFSignal).Read(0xc4203f0380, 0xc42026a601, 0x5ff, 0x5ff, 0x0, 0x0, 0x0)
/usr/local/go/src/net/http/transport.go:2187 +0xdc
encoding/json.(*Decoder).refill(0xc4201c41e0, 0xc420965e0a, 0x9)
/usr/local/go/src/encoding/json/stream.go:159 +0x132
encoding/json.(*Decoder).readValue(0xc4201c41e0, 0x0, 0x0, 0x1d0e380)
/usr/local/go/src/encoding/json/stream.go:134 +0x23d
encoding/json.(*Decoder).Decode(0xc4201c41e0, 0x1a94920, 0xc420576000, 0x0, 0x0)
/usr/local/go/src/encoding/json/stream.go:63 +0x78
github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events.func1(0xc42025a180, 0xc42003a180, 0x0, 0x0, 0x0, 0x0, 0xc4205faf30, 0xc4205ca060, 0x2004140, 0xc4203f0100, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:54 +0x2da
created by github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client.(*Client).Events
/go/src/github.com/cockroachdb/cockroach/vendor/github.com/docker/docker/client/events.go:26 +0x11d
goroutine 35 [select, 29 minutes]:
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor.func1(0x1)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:623 +0x2bd
github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).monitor(0xc420902000, 0x2004180, 0xc4200420c0)
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:643 +0x7b
created by github.com/cockroachdb/cockroach/pkg/acceptance/cluster.(*DockerCluster).Start
/go/src/github.com/cockroachdb/cockroach/pkg/acceptance/cluster/dockercluster.go:660 +0x24a
goroutine 113 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203575e0, 0x2004140, 0xc420983300)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
goroutine 16 [select, 29 minutes]:
net/http.(*persistConn).readLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1717 +0x743
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1237 +0x95a
goroutine 66 [select, 29 minutes]:
net/http.(*persistConn).writeLoop(0xc4201a8000)
/usr/local/go/src/net/http/transport.go:1822 +0x14b
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1238 +0x97f
goroutine 111 [select, 29 minutes]:
database/sql.(*DB).connectionResetter(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:948 +0x12a
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:635 +0x1ae
goroutine 110 [select, 29 minutes]:
database/sql.(*DB).connectionOpener(0xc4203c3360, 0x2004140, 0xc420982dc0)
/usr/local/go/src/database/sql/sql.go:935 +0x119
created by database/sql.OpenDB
/usr/local/go/src/database/sql/sql.go:634 +0x178
--- FAIL: acceptance/TestDockerC (0.000s)
Test ended in panic.
--- FAIL: acceptance/TestDockerC/Success (0.000s)
Test ended in panic.
--- FAIL: acceptance/TestDockerC/Success/runMode=docker (0.000s)
Test ended in panic.
```
Please assign, take a look and update the issue accordingly.
ended in panic fail acceptance testdockerc success test ended in panic please assign take a look and update the issue accordingly
| 1
|
17,211
| 22,795,111,678
|
IssuesEvent
|
2022-07-10 15:47:46
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
"Backfill" memory range check requests to op exec row instead of AuxTable memory trace row
|
enhancement processor
|
With the current implementation of range check lookups for stack and memory in the `p1` column (implemented in #286), it's possible for requests from the stack and requests from memory to occur in the same row of the execution trace, since the memory requests occur during each row of the memory segment of the Auxiliary Table trace, whereas the stack requests occur when the operation is executed. In order to keep the AIR constraint degrees of the `p1` column below 9, this requires an additional aux trace column `q` of intermediate values which holds the stack lookups.
A better approach would be to include the range check requests from memory at the time the `mload` and `mstore` operations are executed on the stack. This would guarantee that there would never be more than one set of range check lookups requested by any operation at any given cycle. We could then remove the extra `q` column.
The complication here is that memory information isn't available until the execution trace is being finalized and all memory accesses are known. Thus, we can't include memory range checks at the time that `mload` or `mstore` operations are executed. Instead, when the trace is being finalized, we need to "backfill" the memory lookup information into the decoder columns at the cycle where the operations were executed, and then update the range checker's `AuxTraceBuilder` so that range check requests by memory are performed at the row where the memory operation was executed rather than at the row where it's included in the memory trace. (This means we need to track those cycles, which is currently not done.)
This requires a refactor of the current approach to the range checker's `p1` column in the [range checker's `AuxTraceBuilder`](https://github.com/maticnetwork/miden/blob/next/processor/src/range/aux_trace.rs). This will allow us to:
1. remove the `q` column, since all required AIR constraints for `p1` will be 9 degrees (or fewer)
2. remove the `cycle_range_checks` BTree and `CycleRangeChecks` struct and use vectors of "hints" and "rows" instead, similar to the `AuxTraceBuilder` pattern in the [stack](https://github.com/maticnetwork/miden/blob/next/processor/src/stack/aux_trace.rs) or [hasher](https://github.com/maticnetwork/miden/blob/next/processor/src/hasher/aux_trace.rs).
|
1.0
|
"Backfill" memory range check requests to op exec row instead of AuxTable memory trace row - With the current implementation of range check lookups for stack and memory in the `p1` column (implemented in #286), it's possible for requests from the stack and requests from memory to occur in the same row of the execution trace, since the memory requests occur during each row of the memory segment of the Auxiliary Table trace, whereas the stack requests occur when the operation is executed. In order to keep the AIR constraint degrees of the `p1` column below 9, this requires an additional aux trace column `q` of intermediate values which holds the stack lookups.
A better approach would be to include the range check requests from memory at the time the `mload` and `mstore` operations are executed on the stack. This would guarantee that there would never be more than one set of range check lookups requested by any operation at any given cycle. We could then remove the extra `q` column.
The complication here is that memory information isn't available until the execution trace is being finalized and all memory accesses are known. Thus, we can't include memory range checks at the time that `mload` or `mstore` operations are executed. Instead, when the trace is being finalized, we need to "backfill" the memory lookup information into the decoder columns at the cycle where the operations were executed, and then update the range checker's `AuxTraceBuilder` so that range check requests by memory are performed at the row where the memory operation was executed rather than at the row where it's included in the memory trace. (This means we need to track those cycles, which is currently not done.)
This requires a refactor of the current approach to the range checker's `p1` column in the [range checker's `AuxTraceBuilder`](https://github.com/maticnetwork/miden/blob/next/processor/src/range/aux_trace.rs). This will allow us to:
1. remove the `q` column, since all required AIR constraints for `p1` will be 9 degrees (or fewer)
2. remove the `cycle_range_checks` BTree and `CycleRangeChecks` struct and use vectors of "hints" and "rows" instead, similar to the `AuxTraceBuilder` pattern in the [stack](https://github.com/maticnetwork/miden/blob/next/processor/src/stack/aux_trace.rs) or [hasher](https://github.com/maticnetwork/miden/blob/next/processor/src/hasher/aux_trace.rs).
|
non_test
|
backfill memory range check requests to op exec row instead of auxtable memory trace row with the current implementation of range check lookups for stack and memory in the column implemented in it s possible for requests from the stack and requests from memory to occur in the same row of the execution trace since the memory requests occur during each row of the memory segment of the auxiliary table trace whereas the stack requests occur when the operation is executed in order to keep the air constraint degrees of the column below this requires an additional aux trace column q of intermediate values which holds the stack lookups a better approach would be to include the range check requests from memory at the time the mload and mstore operations are executed on the stack this would guarantee that there would never be more than one set of range check lookups requested by any operation at any given cycle we could then remove the extra q column the complication here is that memory information isn t available until the execution trace is being finalized and all memory accesses are known thus we can t include memory range checks at the time that mload or mstore operations are executed instead when the trace is being finalized we need to backfill the memory lookup information into the decoder columns at the cycle where the operations were executed and then update the range checker s auxtracebuilder so that range check requests by memory are performed at the row where the memory operation was executed rather than at the row where it s included in the memory trace this means we need to track those cycles which is currently not done this requires a refactor of the current approach to the range checker s column in the this will allow us to remove the q column since all required air constraints for will be degrees or fewer remove the cycle range checks btree and cyclerangechecks struct and use vectors of hints and rows instead similar to the auxtracebuilder pattern in the or
| 0
|
24,983
| 4,119,686,302
|
IssuesEvent
|
2016-06-08 15:33:41
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
closed
|
Manual tests for Linux 0.10.3 RC2
|
tests
|
## Installer
1. [x] Check that installer is close to the size of last release.
## About pages
1. [x] Test that about:bookmarks loads bookmarks
2. [x] Test that about:downloads loads downloads
3. [x] Test that about:preferences changing a preference takes effect right away
4. [x] Test that about:preferences language change takes effect on re-start
5. [x] Test that about:passwords loads
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Site hacks
1. [x] Test twitch.tv sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar works
2. [x] Test that creating a bookmark folder on the bookmarks toolbar works
3. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to apple.com
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in badssl.com works.
6. [x] Test that Safe Browsing works (excellentmovies.net)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for excellentmovies.net.
8. [x] Visit brianbondy.com and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
## Content tests
1. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
2. [x] Go to brianbondy.com and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`.
5. [x] Open a github issue and type some misspellings, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Make sure that clicking links from gmail or inbox.google.com works.
## Per release specialty tests
1. [x] Test each item in release notes for the release that's going out.
## Session storage
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [x] Make sure that data from the last version appears in the new version OK.
3. [x] Test that windows and tabs restore when closed, including active tab.
4. [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Update tests
1. [x] Test that .deb upgrade works
|
1.0
|
Manual tests for Linux 0.10.3 RC2 - ## Installer
1. [x] Check that installer is close to the size of last release.
## About pages
1. [x] Test that about:bookmarks loads bookmarks
2. [x] Test that about:downloads loads downloads
3. [x] Test that about:preferences changing a preference takes effect right away
4. [x] Test that about:preferences language change takes effect on re-start
5. [x] Test that about:passwords loads
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Site hacks
1. [x] Test twitch.tv sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar works
2. [x] Test that creating a bookmark folder on the bookmarks toolbar works
3. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to apple.com
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in badssl.com works.
6. [x] Test that Safe Browsing works (excellentmovies.net)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for excellentmovies.net.
8. [x] Visit brianbondy.com and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
## Content tests
1. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
2. [x] Go to brianbondy.com and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`.
5. [x] Open a github issue and type some misspellings, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Make sure that clicking links from gmail or inbox.google.com works.
## Per release specialty tests
1. [x] Test each item in release notes for the release that's going out.
## Session storage
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [x] Make sure that data from the last version appears in the new version OK.
3. [x] Test that windows and tabs restore when closed, including active tab.
4. [x] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Update tests
1. [x] Test that .deb upgrade works
|
test
|
manual tests for linux installer check that installer is close to the size of last release about pages test that about bookmarks loads bookmarks test that about downloads loads downloads test that about preferences changing a preference takes effect right away test that about preferences language change takes effect on re start test that about passwords loads context menus make sure context menu items in the url bar work make sure context menu items on content work with no selected text make sure context menu items on content work with selected text make sure context menu items on content work inside an editable control input textarea or contenteditable find on page ensure search box is shown with shortcut test successful find test forward and backward find navigation test failed find shows results test match case find site hacks test twitch tv sub page loads a video and you can play it downloads test downloading a file works and that all actions on the download item works fullscreen test that entering full screen window works view toggle full screen and exit back not esc test that entering full screen works and esc to go back youtube com tabs and pinning test that tabs are pinnable test that tabs are unpinnable test that tabs are draggable to same tabset test that tabs are draggable to alternate tabset zoom test zoom in out shortcut works test hamburger menu zooms test zoom saved when you close the browser and restore on a single site test zoom saved when you navigate within a single origin site test that navigating to a different origin resets the zoom bookmarks test that creating a bookmark on the bookmarks toolbar works test that creating a bookmark folder on the bookmarks toolbar works test that moving a bookmark into a folder by drag and drop on the bookmarks folder works test that clicking a bookmark in the toolbar loads the bookmark test that clicking a bookmark in a bookmark toolbar folder loads the bookmark bravery settings check that https everywhere 
works by loading turning https everywhere off and shields off both disable the redirect to apple com check that ad replacement works on check that toggling to blocking and allow ads works as expected test that clicking through a cert error in badssl com works test that safe browsing works excellentmovies net turning safe browsing off and shields off both disable safe browsing for excellentmovies net visit brianbondy com and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work test that about preferences default bravery settings take effect on pages with no site settings content tests load twitter and click on a tweet so the popup div shows click to dismiss and repeat with another div make sure it shows go to brianbondy com and click on the twitter icon on the top right test that context menus work in the new twitter tab go to and test that clicking on show pops up a notification asking for permission make sure that clicking deny leads to no notifications being shown go to and make sure that the password can be saved make sure the saved password shows up in about passwords open a github issue and type some misspellings make sure they are underlined make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text make sure that command click control click on windows control click on ubuntu on a link opens a new tab but does not switch to it click on it and make sure it is already loaded make sure that clicking links from gmail or inbox google com works per release specialty tests test each item in release notes for the release that s going out session storage temporarily move away your library application support brave session store and test that clean session storage works appdata brave in windows config brave in ubuntu make sure that data from the last version appears in the new version ok test that windows and tabs restore when closed 
including active tab move away your entire library application support brave folder appdata brave in windows config brave in ubuntu update tests test that deb upgrade works
| 1
|
100,855
| 8,757,182,546
|
IssuesEvent
|
2018-12-14 20:17:52
|
NuGet/Home
|
https://api.github.com/repos/NuGet/Home
|
closed
|
[Test Failure][Signing] Error “Bad gateway (502)” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements
|
Area:PackageSigning Area:Test Test Failure
|
## Details about Problem
VS Version: D15.9 28307.145
OS Version: 17763.RS5_Release.180914-1434.
NuGet Version: Release-4.9.2-RTM\4.9.2.5701
## Detailed repro steps
1. Create a new test certificate: .\CreateTestCertificate.ps1 –AddAsTrustedRootAuthority.
2. sign a package: NuGet.exe sign <PackageFilePath> -CertificatePath <PfxFilePath> -Timestamper http://freetsa.org/tsr
3. Verify the previous step failed with an error saying "NU3024: The timestamp response has an unsupported digest algorithm (SHA1). The following algorithms are supported: SHA256, SHA384, SHA512."
4. Verify the original package was not changed
## Expected
There is an error saying “NU3024: The timestamp response has an unsupported digest algorithm (SHA1). The following algorithms are supported: SHA256, SHA384, SHA512"
## Actual
Error “Bad gateway (502)” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements as below screenshot.

|
2.0
|
[Test Failure][Signing] Error “Bad gateway (502)” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements - ## Details about Problem
VS Version: D15.9 28307.145
OS Version: 17763.RS5_Release.180914-1434.
NuGet Version: Release-4.9.2-RTM\4.9.2.5701
## Detailed repro steps
1. Create a new test certificate: .\CreateTestCertificate.ps1 –AddAsTrustedRootAuthority.
2. sign a package: NuGet.exe sign <PackageFilePath> -CertificatePath <PfxFilePath> -Timestamper http://freetsa.org/tsr
3. Verify the previous step failed with an error saying "NU3024: The timestamp response has an unsupported digest algorithm (SHA1). The following algorithms are supported: SHA256, SHA384, SHA512."
4. Verify the original package was not changed
## Expected
There is an error saying “NU3024: The timestamp response has an unsupported digest algorithm (SHA1). The following algorithms are supported: SHA256, SHA384, SHA512"
## Actual
Error “Bad gateway (502)” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements as below screenshot.

|
test
|
error “bad gateway ” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements details about problem vs version os version release nuget version release rtm detailed repro steps create a new test certificate createtestcertificate –addastrustedrootauthority sign a package nuget exe sign certificatepath timestamper verify the previous step failed with an error saying the timestamp response has an unsupported digest algorithm the following algorithms are supported verify the original package was not changed expected there is an error saying “ the timestamp response has an unsupported digest algorithm the following algorithms are supported actual error “bad gateway ” occur after signing a package when timestamp signing certificate does not satisfy certificate requirements as below screenshot
| 1
|
14,222
| 10,708,621,061
|
IssuesEvent
|
2019-10-24 20:07:03
|
flutter/website
|
https://api.github.com/repos/flutter/website
|
closed
|
CI: build and test over stable flutter channel version
|
e1-hours infrastructure p1-high
|
@johnpryan wrote:
> Ideally, we should build and test on the latest stable channel, and also test on master, dev, and beta, but allow the failures similar to dart-lang/site-www.
Related: #3089
|
1.0
|
CI: build and test over stable flutter channel version - @johnpryan wrote:
> Ideally, we should build and test on the latest stable channel, and also test on master, dev, and beta, but allow the failures similar to dart-lang/site-www.
Related: #3089
|
non_test
|
ci build and test over stable flutter channel version johnpryan wrote ideally we should build and test on the latest stable channel and also test on master dev and beta but allow the failures similar to dart lang site www related
| 0
|
293,734
| 25,319,138,087
|
IssuesEvent
|
2022-11-18 01:14:24
|
vehicle-lang/vehicle
|
https://api.github.com/repos/vehicle-lang/vehicle
|
closed
|
Speed up of tests with parallelization
|
enhancement test-suite
|
The test suite takes quite a long time to run each time - any chance the tests could be parallelized in `shake` similar to how it is done with make, e.g. `make -j 8`
|
1.0
|
Speed up of tests with parallelization - The test suite takes quite a long time to run each time - any chance the tests could be parallelized in `shake` similar to how it is done with make, e.g. `make -j 8`
|
test
|
speed up of tests with parallelization the test suite takes quite a long time to run each time any chance the tests could be parallelized in shake similar to how it is done with make e g make j
| 1
|
94,055
| 8,468,408,729
|
IssuesEvent
|
2018-10-23 19:39:43
|
palafix/Flash
|
https://api.github.com/repos/palafix/Flash
|
closed
|
ImageActivity.kt line 108
|
for testing
|
#### in nl.arnhem.flash.activities.ImageActivity.
* Number of crashes: 1
* Impacted devices: 1
There's a lot more information about this crash on crashlytics.com:
[https://fabric.io/phase/android/apps/nl.arnhem.flash/issues/5bc9d85ff8b88c2963658e09?utm_medium=service_hooks-github&utm_source=issue_impact](https://fabric.io/phase/android/apps/nl.arnhem.flash/issues/5bc9d85ff8b88c2963658e09?utm_medium=service_hooks-github&utm_source=issue_impact)
|
1.0
|
ImageActivity.kt line 108 - #### in nl.arnhem.flash.activities.ImageActivity.
* Number of crashes: 1
* Impacted devices: 1
There's a lot more information about this crash on crashlytics.com:
[https://fabric.io/phase/android/apps/nl.arnhem.flash/issues/5bc9d85ff8b88c2963658e09?utm_medium=service_hooks-github&utm_source=issue_impact](https://fabric.io/phase/android/apps/nl.arnhem.flash/issues/5bc9d85ff8b88c2963658e09?utm_medium=service_hooks-github&utm_source=issue_impact)
|
test
|
imageactivity kt line in nl arnhem flash activities imageactivity number of crashes impacted devices there s a lot more information about this crash on crashlytics com
| 1
|
80,637
| 30,454,362,960
|
IssuesEvent
|
2023-07-16 17:48:01
|
vector-im/element-desktop
|
https://api.github.com/repos/vector-im/element-desktop
|
opened
|
Matrix formatting doesn't handle color "gold" across auto/light/dark/black modes
|
T-Defect
|
### Steps to reproduce
Submit formatted room message:
<b><font color=\"gold\"> hello world</font></b>
Client in Auto mode (light): displays
Client in light mode: displays
Client in Black mode: displays
Client in Dark mode: Text shows up as black and is unreadable.
### Outcome
#### What did you expect?
Text to show up in gold even in dark mode given the color
#### What happened instead?
Text showed up as black instead of gold making it unreadable.
### Operating system
Windows 10 x64
### Application version
Element version: 1.11.35 Olm version: 3.2.14
### How did you install the app?
element.io web site as desktop app
### Homeserver
matrix-synapse==1.84.1 matrix-synapse-ldap3==0.1.4
### Will you send logs?
No
|
1.0
|
Matrix formatting doesn't handle color "gold" across auto/light/dark/black modes - ### Steps to reproduce
Submit formatted room message:
<b><font color=\"gold\"> hello world</font></b>
Client in Auto mode (light): displays
Client in light mode: displays
Client in Black mode: displays
Client in Dark mode: Text shows up as black and is unreadable.
### Outcome
#### What did you expect?
Text to show up in gold even in dark mode given the color
#### What happened instead?
Text showed up as black instead of gold making it unreadable.
### Operating system
Windows 10 x64
### Application version
Element version: 1.11.35 Olm version: 3.2.14
### How did you install the app?
element.io web site as desktop app
### Homeserver
matrix-synapse==1.84.1 matrix-synapse-ldap3==0.1.4
### Will you send logs?
No
|
non_test
|
matrix formatting doesn t handle color gold across auto light dark black modes steps to reproduce submit formatted room message hello world client in auto mode light displays client in light mode displays client in black mode displays client in dark mode text shows up as black and is unreadable outcome what did you expect text to show up in gold even in dark mode given the color what happened instead text showed up as black instead of gold making it unreadable operating system windows application version element version olm version how did you install the app element io web site as desktop app homeserver matrix synapse matrix synapse will you send logs no
| 0
|
40,410
| 20,826,318,040
|
IssuesEvent
|
2022-03-18 21:26:34
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
Enable Lint: `RedundantNamespace`
|
Performance Good First Issue! Lint
|
We perform lint checking to ensure code correctness and improve performance, but many of the system lint checks have not been enabled.
The goal of this issue is to enable the lint check: `RedundantNamespace`
Line: https://github.com/ankidroid/Anki-Android/blob/6260cd6e25f67f331833bbd0981f2cda5a426dde/lint-release.xml#L74
To do this:
* Change `ignore` to `fatal`
* Move the line in the XML to be somewhere near the top of the file (before the large block of `ignore` elements.
* The exact position is up to you, but if there's no comment associated with the check, a large nearly alphabetical block of `fatal` warnings seems sensible
* Run `gradlew lint` in the Android Studio terminal and fix the errors which occur.
* `@Suppress(str)` (Kotlin) or `@SuppressWarnings(str)` (Java) the warnings if they are valid. Android Studio's quickfixes menu will offer options to suppress
* Use your judgment, but aim to suppress as close to the error as possible (line/method) unless the class specifically has a reason why it will violate a lint check many times
* If it was an error that we missed, fix it.
* Submit a Pull Request
|
True
|
Enable Lint: `RedundantNamespace` - We perform lint checking to ensure code correctness and improve performance, but many of the system lint checks have not been enabled.
The goal of this issue is to enable the lint check: `RedundantNamespace`
Line: https://github.com/ankidroid/Anki-Android/blob/6260cd6e25f67f331833bbd0981f2cda5a426dde/lint-release.xml#L74
To do this:
* Change `ignore` to `fatal`
* Move the line in the XML to be somewhere near the top of the file (before the large block of `ignore` elements.
* The exact position is up to you, but if there's no comment associated with the check, a large nearly alphabetical block of `fatal` warnings seems sensible
* Run `gradlew lint` in the Android Studio terminal and fix the errors which occur.
* `@Suppress(str)` (Kotlin) or `@SuppressWarnings(str)` (Java) the warnings if they are valid. Android Studio's quickfixes menu will offer options to suppress
* Use your judgment, but aim to suppress as close to the error as possible (line/method) unless the class specifically has a reason why it will violate a lint check many times
* If it was an error that we missed, fix it.
* Submit a Pull Request
|
non_test
|
enable lint redundantnamespace we perform lint checking to ensure code correctness and improve performance but many of the system lint checks have not been enabled the goal of this issue is to enable the lint check redundantnamespace line to do this change ignore to fatal move the line in the xml to be somewhere near the top of the file before the large block of ignore elements the exact position is up to you but if there s no comment associated with the check a large nearly alphabetical block of fatal warnings seems sensible run gradlew lint in the android studio terminal and fix the errors which occur suppress str kotlin or suppresswarnings str java the warnings if they are valid android studio s quickfixes menu will offer options to suppress use your judgment but aim to suppress as close to the error as possible line method unless the class specifically has a reason why it will violate a lint check many times if it was an error that we missed fix it submit a pull request
| 0
|
5,577
| 8,414,995,093
|
IssuesEvent
|
2018-10-13 09:42:04
|
bitshares/bitshares-community-ui
|
https://api.github.com/repos/bitshares/bitshares-community-ui
|
closed
|
Signup component UI
|
Signup feature process ui
|
Use Login.vue for examples
- Should have two forms (password/private key) (see zeplin) with tabs switch
- Should be able to copy generated password from the input field to clipboard
- Should have validations: all fields are required, passwords/pins should match, account name shouldn't be used on bitshares
|
1.0
|
Signup component UI - Use Login.vue for examples
- Should have two forms (password/private key) (see zeplin) with tabs switch
- Should be able to copy generated password from the input field to clipboard
- Should have validations: all fields are required, passwords/pins should match, account name shouldn't be used on bitshares
|
non_test
|
signup component ui use login vue for examples should have two forms password private key see zeplin with tabs switch should be able to copy generated password from the input field to clipboard should have validations all fields are required passwords pins should match account name shouldn t be used on bitshares
| 0
|
8,082
| 11,382,250,885
|
IssuesEvent
|
2020-01-29 01:03:27
|
microsoft/botframework-solutions
|
https://api.github.com/repos/microsoft/botframework-solutions
|
closed
|
allow user to remove conversation/user state from db
|
Needs Mockup Needs Requirements Needs User Story Stale
|
#### Is your feature request related to a problem? Please describe.
In Teams, conversation id and user id could not be changed for a bot. So if dialog/model is changed and db fails to deserialize, nothing could be done to recover it.
#### What is the solution you are looking for?
Just like cancel, provide something to remove states and propagates it to skills.
#### What alternatives have you considered?
#### Is there any other context you can provide?
|
1.0
|
allow user to remove conversation/user state from db - #### Is your feature request related to a problem? Please describe.
In Teams, conversation id and user id could not be changed for a bot. So if dialog/model is changed and db fails to deserialize, nothing could be done to recover it.
#### What is the solution you are looking for?
Just like cancel, provide something to remove states and propagates it to skills.
#### What alternatives have you considered?
#### Is there any other context you can provide?
|
non_test
|
allow user to remove conversation user state from db is your feature request related to a problem please describe in teams conversation id and user id could not be changed for a bot so if dialog model is changed and db fails to deserialize nothing could be done to recover it what is the solution you are looking for just like cancel provide something to remove states and propagates it to skills what alternatives have you considered is there any other context you can provide
| 0
|
46,937
| 7,295,979,537
|
IssuesEvent
|
2018-02-26 09:14:52
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
opened
|
Publish the documentation on `rubydoc.info`
|
team: documentation
|
# This is a Feature Proposal
#### :tophat: Description
We should publish decidim's documentation in http://www.rubydoc.info/. The code is extensively documented and that might help new developers get acquainted with its main abstractions.
#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
*None*
|
1.0
|
Publish the documentation on `rubydoc.info` - # This is a Feature Proposal
#### :tophat: Description
We should publish decidim's documentation in http://www.rubydoc.info/. The code is extensively documented and that might help new developers get acquainted with its main abstractions.
#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
*None*
|
non_test
|
publish the documentation on rubydoc info this is a feature proposal tophat description we should publish decidim s documentation in the code is extensively documented and that might help new developers get acquainted with its main abstractions pushpin related issues none clipboard additional data none
| 0
|
25,242
| 4,150,668,957
|
IssuesEvent
|
2016-06-15 18:05:36
|
Microsoft/RTVS
|
https://api.github.com/repos/Microsoft/RTVS
|
opened
|
Add fluent assertion for 2D arrays
|
type:test issue
|
Need to add `.Should().Equal()` for 2D arrays, including arrays with non-zero lower boundaries. That can then be used to simplify data grid tests.
https://github.com/Microsoft/RTVS/pull/1927/commits/b8ecadb5042d91c33d8d62208054703d6f9d09e6#diff-2ae440de6d36febd3f6fd2e406060fcaR61
|
1.0
|
Add fluent assertion for 2D arrays - Need to add `.Should().Equal()` for 2D arrays, including arrays with non-zero lower boundaries. That can then be used to simplify data grid tests.
https://github.com/Microsoft/RTVS/pull/1927/commits/b8ecadb5042d91c33d8d62208054703d6f9d09e6#diff-2ae440de6d36febd3f6fd2e406060fcaR61
|
test
|
add fluent assertion for arrays need to add should equal for arrays including arrays with non zero lower boundaries that can then be used to simplify data grid tests
| 1
|
102,088
| 12,743,846,086
|
IssuesEvent
|
2020-06-26 11:16:34
|
geocollections/sarv-edit
|
https://api.github.com/repos/geocollections/sarv-edit
|
closed
|
Replace Vue icon in address bar
|
design
|
The default Vue icon should be replaced by DataCite icon (check https://twitter.com/datacite) or DOI icon (https://en.wikipedia.org/wiki/Digital_object_identifier#/media/File:DOI_logo.svg).
|
1.0
|
Replace Vue icon in address bar - The default Vue icon should be replaced by DataCite icon (check https://twitter.com/datacite) or DOI icon (https://en.wikipedia.org/wiki/Digital_object_identifier#/media/File:DOI_logo.svg).
|
non_test
|
replace vue icon in address bar the default vue icon should be replaced by datacite icon check or doi icon
| 0
|
6,897
| 10,039,752,311
|
IssuesEvent
|
2019-07-18 18:13:05
|
dCentralizedSystems/customer-support
|
https://api.github.com/repos/dCentralizedSystems/customer-support
|
opened
|
7 day timespan report takes too . long and is inconsistent
|
data sample client fleet management stream processing
|
known issue, internal issue created to track. We currently use unicast (single node query) to retrieve data, since merging across nodes takes too long. But since the load balancer randomly picks a node, if each node has gaps in its index, the results will vary.
we have enabled full replication since middle of this week, so in the future, even unicast should return complete data sets, regardless of node.
i will resolve this issue once the performance is improved.
internal issue:
https://github.com/dCentralizedSystems/cap-ui/issues/61
|
1.0
|
7 day timespan report takes too . long and is inconsistent - known issue, internal issue created to track. We currently use unicast (single node query) to retrieve data, since merging across nodes takes too long. But since the load balancer randomly picks a node, if each node has gaps in its index, the results will vary.
we have enabled full replication since middle of this week, so in the future, even unicast should return complete data sets, regardless of node.
i will resolve this issue once the performance is improved.
internal issue:
https://github.com/dCentralizedSystems/cap-ui/issues/61
|
non_test
|
day timespan report takes too long and is inconsistent known issue internal issue created to track we currently use unicast single node query to retrieve data since merging across nodes takes too long but since the load balancer randomly picks a node if each node has gaps in its index the results will vary we have enabled full replication since middle of this week so in the future even unicast should return complete data sets regardless of node i will resolve this issue once the performance is improved internal issue
| 0
|
328,108
| 28,101,821,933
|
IssuesEvent
|
2023-03-30 20:09:15
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: kv95/enc=false/nodes=4/ssds=8 failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-testeng
|
roachtest.kv95/enc=false/nodes=4/ssds=8 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8) on master @ [1f8024bf14433ca169e5a8c3768c5d223dc5018c](https://github.com/cockroachdb/cockroach/commits/1f8024bf14433ca169e5a8c3768c5d223dc5018c):
```
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/run_1
(cluster.go:1977).Run: output in run_182106.546123556_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: ssh verbose log retained in ssh_182107.299333448_n5_workload-run-kv-tole.log: exit status 1
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-26294
|
3.0
|
roachtest: kv95/enc=false/nodes=4/ssds=8 failed - roachtest.kv95/enc=false/nodes=4/ssds=8 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8) on master @ [1f8024bf14433ca169e5a8c3768c5d223dc5018c](https://github.com/cockroachdb/cockroach/commits/1f8024bf14433ca169e5a8c3768c5d223dc5018c):
```
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/run_1
(cluster.go:1977).Run: output in run_182106.546123556_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: ssh verbose log retained in ssh_182107.299333448_n5_workload-run-kv-tole.log: exit status 1
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-26294
|
test
|
roachtest enc false nodes ssds failed roachtest enc false nodes ssds with on master test artifacts and logs in artifacts enc false nodes ssds run cluster go run output in run workload run kv tole workload run kv tolerate errors init histograms perf stats json concurrency splits duration read percent pgurl returned command problem ssh verbose log retained in ssh workload run kv tole log exit status monitor go wait monitor failure monitor task failed t fatal was called parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb test eng jira issue crdb
| 1
|
14,163
| 4,833,281,588
|
IssuesEvent
|
2016-11-08 10:28:25
|
PapirusDevelopmentTeam/papirus-icon-theme-gtk
|
https://api.github.com/repos/PapirusDevelopmentTeam/papirus-icon-theme-gtk
|
closed
|
Missing monochrome icons
|
completed hardcoded
|
The following icons are not monochrome in the notification area:
- CopyQ
- Guake Indicator
- Google Chrome
- Google Chrome hangouts plugin (I'm not sure this is even possible to address)
|
1.0
|
Missing monochrome icons - The following icons are not monochrome in the notification area:
- CopyQ
- Guake Indicator
- Google Chrome
- Google Chrome hangouts plugin (I'm not sure this is even possible to address)
|
non_test
|
missing monochrome icons the following icons are not monochrome in the notification area copyq guake indicator google chrome google chrome hangouts plugin i m not sure this is even possible to address
| 0
|
199,785
| 15,075,013,171
|
IssuesEvent
|
2021-02-05 01:04:17
|
Azure/azure-sdk-for-js
|
https://api.github.com/repos/Azure/azure-sdk-for-js
|
closed
|
[storage][reporter] ERROR [reporter.remap-istanbul]: TypeError: sourceText.split is not a function
|
Client test-utils-recorder
|
Ran into this error after running browser tests in playback mode.
```
=============================== Coverage summary ===============================
Statements : 44.58% ( 3890/8725 )
Branches : 27.06% ( 1088/4021 )
Functions : 32.49% ( 295/908 )
Lines : 44.04% ( 3617/8213 )
================================================================================
18 08 2020 11:00:40.119:ERROR [reporter.remap-istanbul]: TypeError: sourceText.split is not a function
at HtmlReport.writeDetailPage (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:412:31)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:489:26
at SyncFileWriter.writeFile (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\util\file-writer.js:57:9)
at FileWriter.writeFile (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\util\file-writer.js:147:23)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:488:24
at Array.forEach (<anonymous>)
at HtmlReport.writeFiles (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:482:23)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:484:22
at Array.forEach (<anonymous>)
at HtmlReport.writeFiles (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:482:23)
```
|
1.0
|
[storage][reporter] ERROR [reporter.remap-istanbul]: TypeError: sourceText.split is not a function - Ran into this error after running browser tests in playback mode.
```
=============================== Coverage summary ===============================
Statements : 44.58% ( 3890/8725 )
Branches : 27.06% ( 1088/4021 )
Functions : 32.49% ( 295/908 )
Lines : 44.04% ( 3617/8213 )
================================================================================
18 08 2020 11:00:40.119:ERROR [reporter.remap-istanbul]: TypeError: sourceText.split is not a function
at HtmlReport.writeDetailPage (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:412:31)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:489:26
at SyncFileWriter.writeFile (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\util\file-writer.js:57:9)
at FileWriter.writeFile (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\util\file-writer.js:147:23)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:488:24
at Array.forEach (<anonymous>)
at HtmlReport.writeFiles (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:482:23)
at S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:484:22
at Array.forEach (<anonymous>)
at HtmlReport.writeFiles (S:\SDKs\azure-sdk-for-js\common\temp\node_modules\.pnpm\istanbul@0.4.5\node_modules\istanbul\lib\report\html.js:482:23)
```
|
test
|
error typeerror sourcetext split is not a function ran into this error after running browser tests in playback mode coverage summary statements branches functions lines error typeerror sourcetext split is not a function at htmlreport writedetailpage s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js at s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js at syncfilewriter writefile s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib util file writer js at filewriter writefile s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib util file writer js at s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js at array foreach at htmlreport writefiles s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js at s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js at array foreach at htmlreport writefiles s sdks azure sdk for js common temp node modules pnpm istanbul node modules istanbul lib report html js
| 1
|
271,857
| 29,660,272,145
|
IssuesEvent
|
2023-06-10 04:05:38
|
artichoke/project-infrastructure
|
https://api.github.com/repos/artichoke/project-infrastructure
|
closed
|
Enable GitHub Secret Scanning and Push Protection
|
A-github A-security
|
In the **Code security and analysis** organization settings.
Changes made for the following organizations:
- @artichoke
- @artichokeruby
- @artichoke-ruby
<img width="959" alt="Screenshot 2023-06-09 at 8 32 26 PM" src="https://github.com/artichoke/project-infrastructure/assets/860434/68ba6b65-a57f-4c99-a3e7-cb24ea11724f">
|
True
|
Enable GitHub Secret Scanning and Push Protection - In the **Code security and analysis** organization settings.
Changes made for the following organizations:
- @artichoke
- @artichokeruby
- @artichoke-ruby
<img width="959" alt="Screenshot 2023-06-09 at 8 32 26 PM" src="https://github.com/artichoke/project-infrastructure/assets/860434/68ba6b65-a57f-4c99-a3e7-cb24ea11724f">
|
non_test
|
enable github secret scanning and push protection in the code security and analysis organization settings changes made for the following organizations artichoke artichokeruby artichoke ruby img width alt screenshot at pm src
| 0
|
118,157
| 9,977,058,755
|
IssuesEvent
|
2019-07-09 16:19:17
|
reactioncommerce/reaction
|
https://api.github.com/repos/reactioncommerce/reaction
|
closed
|
App tests should not use local private settings
|
testing
|
When running app tests, it will use the local `reaction.json` which can result in inconsistent results.
We should probably create a `test_settings.json` file and have tests run against that if it exists.
|
1.0
|
App tests should not use local private settings - When running app tests, it will use the local `reaction.json` which can result in inconsistent results.
We should probably create a `test_settings.json` file and have tests run against that if it exists.
|
test
|
app tests should not use local private settings when running app tests it will use the local reaction json which can result in inconsistent results we should probably create a test settings json file and have tests run against that if it exists
| 1
|
220,992
| 16,992,491,491
|
IssuesEvent
|
2021-06-30 23:02:16
|
facebookresearch/d2go
|
https://api.github.com/repos/facebookresearch/d2go
|
closed
|
how can i train the model by pascal data format?
|
documentation
|
hi, thanks for your open source.
i wanna train my own data and it's pascal data format , could you show example about how to use pascal datasets?
thanks a lot!
|
1.0
|
how can i train the model by pascal data format? - hi, thanks for your open source.
i wanna train my own data and it's pascal data format , could you show example about how to use pascal datasets?
thanks a lot!
|
non_test
|
how can i train the model by pascal data format hi thanks for your open source i wanna train my own data and it s pascal data format could you show example about how to use pascal datasets thanks a lot
| 0
|
243,414
| 20,385,688,484
|
IssuesEvent
|
2022-02-22 06:30:42
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
acceptance: TestDockerCLI failed
|
C-test-failure O-robot branch-release-21.2
|
acceptance.TestDockerCLI [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4427168&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4427168&tab=artifacts#/) on release-21.2 @ [514b79fa4723584ea1359e0ad4118962a78c1a48](https://github.com/cockroachdb/cockroach/commits/514b79fa4723584ea1359e0ad4118962a78c1a48):
```
=== RUN TestDockerCLI
test_log_scope.go:79: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerCLI856654710
test_log_scope.go:80: use -show-logs to present logs inline
=== CONT TestDockerCLI
cli_test.go:93: -- test log scope end --
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerCLI856654710
--- FAIL: TestDockerCLI (427.32s)
=== RUN TestDockerCLI/test_demo_node_cmds.tcl
=== CONT TestDockerCLI/test_demo_node_cmds.tcl
cli_test.go:89: non-zero exit code: 1
--- FAIL: TestDockerCLI/test_demo_node_cmds.tcl (38.71s)
```
<details><summary>Reproduce</summary>
<p>
To reproduce, try:
```bash
make stressrace TESTS=TestDockerCLI PKG=./pkg/acceptance TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Parameters in this failure:
- GOFLAGS=-json
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #76448 acceptance: TestDockerCLI/test_extern_dir.tcl/runMode=docker failed [C-test-failure O-robot T-server-and-security branch-master]
- #76413 acceptance: TestDockerCLI/test_local_cmds.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #75528 acceptance: TestDockerCLI/test_cert_advisory_validation.tcl/runMode=docker failed [C-test-failure O-robot T-server-and-security branch-master]
- #75480 acceptance: TestDockerCLI/test_reconnect.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #72913 acceptance: TestDockerCLI/test_disable_replication.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #69105 acceptance: TestDockerCLI/test_init_command.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #61896 acceptance: TestDockerCLI failed [C-test-failure O-robot branch-release-21.1]
- #61834 acceptance: TestDockerCLI failed [C-test-failure O-robot branch-master]
</p>
</details>
<details><summary>Internal log</summary>
<p>
```
tamird marked as alumn{us/a}; resolving to tbg instead
```
</p>
</details>
/cc @cockroachdb/sql-experience tbg
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDockerCLI.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
acceptance: TestDockerCLI failed - acceptance.TestDockerCLI [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4427168&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4427168&tab=artifacts#/) on release-21.2 @ [514b79fa4723584ea1359e0ad4118962a78c1a48](https://github.com/cockroachdb/cockroach/commits/514b79fa4723584ea1359e0ad4118962a78c1a48):
```
=== RUN TestDockerCLI
test_log_scope.go:79: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerCLI856654710
test_log_scope.go:80: use -show-logs to present logs inline
=== CONT TestDockerCLI
cli_test.go:93: -- test log scope end --
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestDockerCLI856654710
--- FAIL: TestDockerCLI (427.32s)
=== RUN TestDockerCLI/test_demo_node_cmds.tcl
=== CONT TestDockerCLI/test_demo_node_cmds.tcl
cli_test.go:89: non-zero exit code: 1
--- FAIL: TestDockerCLI/test_demo_node_cmds.tcl (38.71s)
```
<details><summary>Reproduce</summary>
<p>
To reproduce, try:
```bash
make stressrace TESTS=TestDockerCLI PKG=./pkg/acceptance TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
Parameters in this failure:
- GOFLAGS=-json
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #76448 acceptance: TestDockerCLI/test_extern_dir.tcl/runMode=docker failed [C-test-failure O-robot T-server-and-security branch-master]
- #76413 acceptance: TestDockerCLI/test_local_cmds.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #75528 acceptance: TestDockerCLI/test_cert_advisory_validation.tcl/runMode=docker failed [C-test-failure O-robot T-server-and-security branch-master]
- #75480 acceptance: TestDockerCLI/test_reconnect.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #72913 acceptance: TestDockerCLI/test_disable_replication.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #69105 acceptance: TestDockerCLI/test_init_command.tcl/runMode=docker failed [C-test-failure O-robot branch-master]
- #61896 acceptance: TestDockerCLI failed [C-test-failure O-robot branch-release-21.1]
- #61834 acceptance: TestDockerCLI failed [C-test-failure O-robot branch-master]
</p>
</details>
<details><summary>Internal log</summary>
<p>
```
tamird marked as alumn{us/a}; resolving to tbg instead
```
</p>
</details>
/cc @cockroachdb/sql-experience tbg
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDockerCLI.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
acceptance testdockercli failed acceptance testdockercli with on release run testdockercli test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline cont testdockercli cli test go test log scope end test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance fail testdockercli run testdockercli test demo node cmds tcl cont testdockercli test demo node cmds tcl cli test go non zero exit code fail testdockercli test demo node cmds tcl reproduce to reproduce try bash make stressrace tests testdockercli pkg pkg acceptance testtimeout stressflags timeout parameters in this failure goflags json same failure on other branches acceptance testdockercli test extern dir tcl runmode docker failed acceptance testdockercli test local cmds tcl runmode docker failed acceptance testdockercli test cert advisory validation tcl runmode docker failed acceptance testdockercli test reconnect tcl runmode docker failed acceptance testdockercli test disable replication tcl runmode docker failed acceptance testdockercli test init command tcl runmode docker failed acceptance testdockercli failed acceptance testdockercli failed internal log tamird marked as alumn us a resolving to tbg instead cc cockroachdb sql experience tbg
| 1
|
232,204
| 18,850,410,601
|
IssuesEvent
|
2021-11-11 20:03:54
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
closed
|
wazuh-db unit test is reporting AddressSanitizer issues
|
core/db bug/devel core/unit tests
|
|Wazuh version|Component|Install type|Install method|Platform|
|---|---|---|---|---|
| dev-syscollector-deltas | wazuh-db | Manager | Any | Any |
## Description
Hi team!
During a rebase of the master branch after the #9099 merge, multiple error messages from AddressSanitizer were found
```
99% tests passed, 2 tests failed out of 138
Total Test time (real) = 4.74 sec
The following tests FAILED:
27 - test_wdb_parser (Failed)
40 - test_wdb_delta_event (Failed)
```
```
[ RUN ] test_wdb_modify_dbsync_bad_cache
=================================================================
==274842==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000000a90 at pc 0x559fc03a0c6e bp 0x7ffe3e33d320 sp 0x7ffe3e33d310
WRITE of size 8 at 0x602000000a90 thread T0
#0 0x559fc03a0c6d in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:106
#1 0x559fc039c2b6 in test_wdb_modify_dbsync_bad_cache /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:251
#2 0x7f61d3e02322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
#3 0x7f61d3e03366 in _cmocka_run_group_tests (/usr/local/lib/libcmocka.so.0+0x7366)
#4 0x559fc039e813 in main /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:557
#5 0x7f61d32c40b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#6 0x559fc03988fd in _start (/root/repos/wazuh/src/unit_tests/build/wazuh_db/test_wdb_delta_event+0x958fd)
0x602000000a91 is located 0 bytes to the right of 1-byte region [0x602000000a90,0x602000000a91)
allocated by thread T0 here:
#0 0x7f61d4791dc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
#1 0x559fc03a0920 in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:99
#2 0x559fc039c2b6 in test_wdb_modify_dbsync_bad_cache /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:251
#3 0x7f61d3e02322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
SUMMARY: AddressSanitizer: heap-buffer-overflow wazuh_db/wdb_delta_event.c:106 in wdb_modify_dbsync
Shadow bytes around the buggy address:
0x0c047fff8100: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff8110: fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa
0x0c047fff8120: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff8130: fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa
0x0c047fff8140: fa fa fd fa fa fa 00 00 fa fa 07 fa fa fa 00 fa
=>0x0c047fff8150: fa fa[01]fa fa fa 00 02 fa fa fa fa fa fa fa fa
0x0c047fff8160: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8170: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8190: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff81a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==274842==ABORTING
```
```
[ RUN ] test_dbsync_modify_type_exists_data_1
=================================================================
==274885==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6030000062f8 at pc 0x55fa8ea50e4e bp 0x7fff8fb4be30 sp 0x7fff8fb4be20
WRITE of size 8 at 0x6030000062f8 thread T0
#0 0x55fa8ea50e4d in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:118
#1 0x55fa8e9f9dc3 in process_dbsync_data wazuh_db/wdb_parser.c:5713
#2 0x55fa8e9fa67a in wdb_parse_dbsync wazuh_db/wdb_parser.c:5762
#3 0x55fa8e9b8e30 in test_dbsync_modify_type_exists_data_1 /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:1101
#4 0x7f2c72fe2322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
#5 0x7f2c72fe3366 in _cmocka_run_group_tests (/usr/local/lib/libcmocka.so.0+0x7366)
#6 0x55fa8e9c9141 in main /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:2304
#7 0x7f2c724a40b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#8 0x55fa8e9a907d in _start (/root/repos/wazuh/src/unit_tests/build/wazuh_db/test_wdb_parser+0x1c207d)
0x6030000062f9 is located 0 bytes to the right of 25-byte region [0x6030000062e0,0x6030000062f9)
allocated by thread T0 here:
#0 0x7f2c73971dc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
#1 0x55fa8ea508ba in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:99
#2 0x55fa8e9f9dc3 in process_dbsync_data wazuh_db/wdb_parser.c:5713
#3 0x55fa8e9fa67a in wdb_parse_dbsync wazuh_db/wdb_parser.c:5762
#4 0x55fa8e9b8e30 in test_dbsync_modify_type_exists_data_1 /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:1101
#5 0x7f2c72fe2322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
SUMMARY: AddressSanitizer: heap-buffer-overflow wazuh_db/wdb_delta_event.c:118 in wdb_modify_dbsync
Shadow bytes around the buggy address:
0x0c067fff8c00: fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa 00 00
0x0c067fff8c10: 00 fa fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa
0x0c067fff8c20: 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00 00 00
0x0c067fff8c30: fa fa 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00
0x0c067fff8c40: 00 00 fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa
=>0x0c067fff8c50: 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00 00[01]
0x0c067fff8c60: fa fa 00 00 03 fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c70: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8ca0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==274885==ABORTING
```
## DoD
- [x] Fix wdb_modify_dbsync
- [x] Check unit tests and coverage
Regards,
Nico
|
1.0
|
wazuh-db unit test is reporting AddressSanitizer issues - |Wazuh version|Component|Install type|Install method|Platform|
|---|---|---|---|---|
| dev-syscollector-deltas | wazuh-db | Manager | Any | Any |
## Description
Hi team!
During a rebase of the master branch after the #9099 merge, multiple error messages from AddressSanitizer were found
```
99% tests passed, 2 tests failed out of 138
Total Test time (real) = 4.74 sec
The following tests FAILED:
27 - test_wdb_parser (Failed)
40 - test_wdb_delta_event (Failed)
```
```
[ RUN ] test_wdb_modify_dbsync_bad_cache
=================================================================
==274842==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000000a90 at pc 0x559fc03a0c6e bp 0x7ffe3e33d320 sp 0x7ffe3e33d310
WRITE of size 8 at 0x602000000a90 thread T0
#0 0x559fc03a0c6d in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:106
#1 0x559fc039c2b6 in test_wdb_modify_dbsync_bad_cache /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:251
#2 0x7f61d3e02322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
#3 0x7f61d3e03366 in _cmocka_run_group_tests (/usr/local/lib/libcmocka.so.0+0x7366)
#4 0x559fc039e813 in main /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:557
#5 0x7f61d32c40b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#6 0x559fc03988fd in _start (/root/repos/wazuh/src/unit_tests/build/wazuh_db/test_wdb_delta_event+0x958fd)
0x602000000a91 is located 0 bytes to the right of 1-byte region [0x602000000a90,0x602000000a91)
allocated by thread T0 here:
#0 0x7f61d4791dc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
#1 0x559fc03a0920 in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:99
#2 0x559fc039c2b6 in test_wdb_modify_dbsync_bad_cache /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_delta_event.c:251
#3 0x7f61d3e02322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
SUMMARY: AddressSanitizer: heap-buffer-overflow wazuh_db/wdb_delta_event.c:106 in wdb_modify_dbsync
Shadow bytes around the buggy address:
0x0c047fff8100: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff8110: fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa
0x0c047fff8120: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff8130: fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa
0x0c047fff8140: fa fa fd fa fa fa 00 00 fa fa 07 fa fa fa 00 fa
=>0x0c047fff8150: fa fa[01]fa fa fa 00 02 fa fa fa fa fa fa fa fa
0x0c047fff8160: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8170: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8190: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff81a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==274842==ABORTING
```
```
[ RUN ] test_dbsync_modify_type_exists_data_1
=================================================================
==274885==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6030000062f8 at pc 0x55fa8ea50e4e bp 0x7fff8fb4be30 sp 0x7fff8fb4be20
WRITE of size 8 at 0x6030000062f8 thread T0
#0 0x55fa8ea50e4d in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:118
#1 0x55fa8e9f9dc3 in process_dbsync_data wazuh_db/wdb_parser.c:5713
#2 0x55fa8e9fa67a in wdb_parse_dbsync wazuh_db/wdb_parser.c:5762
#3 0x55fa8e9b8e30 in test_dbsync_modify_type_exists_data_1 /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:1101
#4 0x7f2c72fe2322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
#5 0x7f2c72fe3366 in _cmocka_run_group_tests (/usr/local/lib/libcmocka.so.0+0x7366)
#6 0x55fa8e9c9141 in main /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:2304
#7 0x7f2c724a40b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#8 0x55fa8e9a907d in _start (/root/repos/wazuh/src/unit_tests/build/wazuh_db/test_wdb_parser+0x1c207d)
0x6030000062f9 is located 0 bytes to the right of 25-byte region [0x6030000062e0,0x6030000062f9)
allocated by thread T0 here:
#0 0x7f2c73971dc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
#1 0x55fa8ea508ba in wdb_modify_dbsync wazuh_db/wdb_delta_event.c:99
#2 0x55fa8e9f9dc3 in process_dbsync_data wazuh_db/wdb_parser.c:5713
#3 0x55fa8e9fa67a in wdb_parse_dbsync wazuh_db/wdb_parser.c:5762
#4 0x55fa8e9b8e30 in test_dbsync_modify_type_exists_data_1 /root/repos/wazuh/src/unit_tests/wazuh_db/test_wdb_parser.c:1101
#5 0x7f2c72fe2322 in cmocka_run_one_test_or_fixture (/usr/local/lib/libcmocka.so.0+0x6322)
SUMMARY: AddressSanitizer: heap-buffer-overflow wazuh_db/wdb_delta_event.c:118 in wdb_modify_dbsync
Shadow bytes around the buggy address:
0x0c067fff8c00: fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa 00 00
0x0c067fff8c10: 00 fa fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa
0x0c067fff8c20: 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00 00 00
0x0c067fff8c30: fa fa 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00
0x0c067fff8c40: 00 00 fa fa 00 00 00 00 fa fa 00 00 00 00 fa fa
=>0x0c067fff8c50: 00 00 00 fa fa fa 00 00 00 00 fa fa 00 00 00[01]
0x0c067fff8c60: fa fa 00 00 03 fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c70: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8c90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8ca0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==274885==ABORTING
```
## DoD
- [x] Fix wdb_modify_dbsync
- [x] Check unit tests and coverage
Regards,
Nico
|
test
|
wazuh db unit test is reporting addresssanitizer issues wazuh version component install type install method platform dev syscollector deltas wazuh db manager any any description hi team during rebase of master branch after merge it was found multiple error messages from addresssanitizer tests passed tests failed out of total test time real sec the following tests failed test wdb parser failed test wdb delta event failed test wdb modify dbsync bad cache error addresssanitizer heap buffer overflow on address at pc bp sp write of size at thread in wdb modify dbsync wazuh db wdb delta event c in test wdb modify dbsync bad cache root repos wazuh src unit tests wazuh db test wdb delta event c in cmocka run one test or fixture usr local lib libcmocka so in cmocka run group tests usr local lib libcmocka so in main root repos wazuh src unit tests wazuh db test wdb delta event c in libc start main lib linux gnu libc so in start root repos wazuh src unit tests build wazuh db test wdb delta event is located bytes to the right of byte region allocated by thread here in calloc lib linux gnu libasan so in wdb modify dbsync wazuh db wdb delta event c in test wdb modify dbsync bad cache root repos wazuh src unit tests wazuh db test wdb delta event c in cmocka run one test or fixture usr local lib libcmocka so summary addresssanitizer heap buffer overflow wazuh db wdb delta event c in wdb modify dbsync shadow bytes around the buggy address fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa shadow byte legend one shadow byte represents 
application bytes addressable partially addressable heap left redzone fa freed heap region fd stack left redzone stack mid redzone stack right redzone stack after return stack use after scope global redzone global init order poisoned by user container overflow fc array cookie ac intra object redzone bb asan internal fe left alloca redzone ca right alloca redzone cb shadow gap cc aborting test dbsync modify type exists data error addresssanitizer heap buffer overflow on address at pc bp sp write of size at thread in wdb modify dbsync wazuh db wdb delta event c in process dbsync data wazuh db wdb parser c in wdb parse dbsync wazuh db wdb parser c in test dbsync modify type exists data root repos wazuh src unit tests wazuh db test wdb parser c in cmocka run one test or fixture usr local lib libcmocka so in cmocka run group tests usr local lib libcmocka so in main root repos wazuh src unit tests wazuh db test wdb parser c in libc start main lib linux gnu libc so in start root repos wazuh src unit tests build wazuh db test wdb parser is located bytes to the right of byte region allocated by thread here in calloc lib linux gnu libasan so in wdb modify dbsync wazuh db wdb delta event c in process dbsync data wazuh db wdb parser c in wdb parse dbsync wazuh db wdb parser c in test dbsync modify type exists data root repos wazuh src unit tests wazuh db test wdb parser c in cmocka run one test or fixture usr local lib libcmocka so summary addresssanitizer heap buffer overflow wazuh db wdb delta event c in wdb modify dbsync shadow bytes around the buggy address fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa shadow byte legend one shadow byte represents application bytes addressable 
partially addressable heap left redzone fa freed heap region fd stack left redzone stack mid redzone stack right redzone stack after return stack use after scope global redzone global init order poisoned by user container overflow fc array cookie ac intra object redzone bb asan internal fe left alloca redzone ca right alloca redzone cb shadow gap cc aborting dod fix wdb modify dbsync check unit tests and coverage regards nico
| 1
|
265,428
| 23,167,486,397
|
IssuesEvent
|
2022-07-30 06:56:14
|
tky823/ssspy
|
https://api.github.com/repos/tky823/ssspy
|
closed
|
Customize coverage report
|
test
|
The settings of code-cov are in default.
Some of the settings may be changed or customized.
|
1.0
|
Customize coverage report - The settings of code-cov are in default.
Some of the settings may be changed or customized.
|
test
|
customize coverage report the settings of code cov are in default some of the settings may be changed or customized
| 1
|
39,555
| 2,856,732,032
|
IssuesEvent
|
2015-06-02 16:14:42
|
unt-libraries/django-name
|
https://api.github.com/repos/unt-libraries/django-name
|
closed
|
Exclude tests from the Distribution
|
bug priority high
|
Exclude the tests directory when building the distribution. Currently, if we install this app with pip, we end up with `name` and `tests` in the root of the site-packages.
|
1.0
|
Exclude tests from the Distribution - Exclude the tests directory when building the distribution. Currently, if we install this app with pip, we end up with `name` and `tests` in the root of the site-packages.
|
non_test
|
exclude tests from the distribution exclude the tests directory when building the distribution currently if we install this app with pip we end up with name and tests in the root of the site packages
| 0
|
271,206
| 20,625,208,667
|
IssuesEvent
|
2022-03-07 21:42:00
|
niiree/t80sz
|
https://api.github.com/repos/niiree/t80sz
|
opened
|
Better Contribution Instructions
|
documentation enhancement
|
This will be important if we want this project to be accessible for developers coming without further knowledge of the tree structure and our best practices.
|
1.0
|
Better Contribution Instructions - This will be important if we want this project to be accessible for developers coming without further knowledge of the tree structure and our best practices.
|
non_test
|
better contribution instructions this will be important if we want this project to be accessible for developers coming without further knowledge of the tree structure and our best practices
| 0
|
98,039
| 4,016,191,915
|
IssuesEvent
|
2016-05-15 12:49:28
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Creating service account does not auto generate corresponding secret
|
priority/P2 team/control-plane
|
Following the docs http://kubernetes.io/v1.1/docs/user-guide/service-accounts.html I created a service account:
```
cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
EOF
kubectl create -f /tmp/serviceaccount.yaml
```
Then run `kubectl get serviceaccounts/build-robot -o yaml`:
Output is
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2016-01-22T06:18:29Z
name: build-robot
namespace: default
resourceVersion: "114"
selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
uid: f44ecb14-c0cf-11e5-bc53-08002702ea1b
```
I would expect to see a reference there to the `secrets` generated.
It's also worth noting that the `default` system account did not have a secret generated.
System: Fedora 23, Docker 1.91.1, Running on docker as per http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html
|
1.0
|
Creating service account does not auto generate corresponding secret - Following the docs http://kubernetes.io/v1.1/docs/user-guide/service-accounts.html I created a service account:
```
cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-robot
EOF
kubectl create -f /tmp/serviceaccount.yaml
```
Then run `kubectl get serviceaccounts/build-robot -o yaml`:
Output is
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: 2016-01-22T06:18:29Z
name: build-robot
namespace: default
resourceVersion: "114"
selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
uid: f44ecb14-c0cf-11e5-bc53-08002702ea1b
```
I would expect to see a reference there to the `secrets` generated.
It's also worth noting that the `default` system account did not have a secret generated.
System: Fedora 23, Docker 1.91.1, Running on docker as per http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html
|
non_test
|
creating service account does not auto generate corresponding secret following the docs i created a service account cat tmp serviceaccount yaml eof apiversion kind serviceaccount metadata name build robot eof kubectl create f tmp serviceaccount yaml then run kubectl get serviceaccounts build robot o yaml output is yaml apiversion kind serviceaccount metadata creationtimestamp name build robot namespace default resourceversion selflink api namespaces default serviceaccounts build robot uid i would expect to see a reference there to the secrets generated it s also worth noting that the default system account did not have a secret generated system fedora docker running on docker as per
| 0
|
445,817
| 12,836,453,498
|
IssuesEvent
|
2020-07-07 14:22:45
|
RichardFav/AnalysisGUI
|
https://api.github.com/repos/RichardFav/AnalysisGUI
|
opened
|
Move Psychometric Fit function out of Bootstrapping Function
|
HIGH priority enhancement
|
This will be for the "**Speed LDA Comparison (Pooled Experiments)**" function.
In addition to this, have the fits calculated for the first viable bin after the comparison velocity bin (i.e., if the comparison bin is 10 deg/s, then the first viable bin will be 15 deg/s). Have this as a calculation option (i.e., either include all bins in the fitting calculations or from the first viable bin).
|
1.0
|
Move Psychometric Fit function out of Bootstrapping Function - This will be for the "**Speed LDA Comparison (Pooled Experiments)**" function.
In addition to this, have the fits calculated for the first viable bin after the comparison velocity bin (i.e., if the comparison bin is 10 deg/s, then the first viable bin will be 15 deg/s). Have this as a calculation option (i.e., either include all bins in the fitting calculations or from the first viable bin).
|
non_test
|
move psychometric fit function out of bootstrapping function this will be for the speed lda comparison pooled experiments function in addition to this have the fits calculated for the first viable bin after the comparison velocity bin i e if the comparison bin is deg s then the first viable bin will be deg s have this as a calculation option i e either include all bins in the fitting calculations or from the first viable bin
| 0
|
164,116
| 6,219,681,797
|
IssuesEvent
|
2017-07-09 15:52:18
|
CS2103JUN2017-T4/main
|
https://api.github.com/repos/CS2103JUN2017-T4/main
|
closed
|
Update Clear Task Functionality
|
priority.high type.task
|
User would be able to:
1. Clear All Task
2. Clear Completed Task
3. Clear Uncompleted Task
|
1.0
|
Update Clear Task Functionality - User would be able to:
1. Clear All Task
2. Clear Completed Task
3. Clear Uncompleted Task
|
non_test
|
update clear task functionality user would be able to clear all task clear completed task clear uncompleted task
| 0
|
783,850
| 27,548,827,675
|
IssuesEvent
|
2023-03-07 13:41:34
|
open-sauced/insights
|
https://api.github.com/repos/open-sauced/insights
|
closed
|
Bug: Everytime there is a release my data is erased.
|
🐛 bug 👀 needs triage high-priority
|
### Describe the bug
Confirm this with @getaheaddev as well.
<img width="562" alt="Screen Shot 2023-03-01 at 3 26 14 PM" src="https://user-images.githubusercontent.com/5713670/222289954-7ca43635-2536-47e6-86f7-060fe8a3fdfa.png">
### Steps to reproduce
1. view the settings
2. Notice all your data is gone
### Affected services
insights.opensauced.pizza
### Platforms
_No response_
### Browsers
_No response_
### Environment
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [x] I agree to follow this project's Contribution Docs
|
1.0
|
Bug: Everytime there is a release my data is erased. - ### Describe the bug
Confirm this with @getaheaddev as well.
<img width="562" alt="Screen Shot 2023-03-01 at 3 26 14 PM" src="https://user-images.githubusercontent.com/5713670/222289954-7ca43635-2536-47e6-86f7-060fe8a3fdfa.png">
### Steps to reproduce
1. view the settings
2. Notice all your data is gone
### Affected services
insights.opensauced.pizza
### Platforms
_No response_
### Browsers
_No response_
### Environment
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [x] I agree to follow this project's Contribution Docs
|
non_test
|
bug everytime there is a release my data is erased describe the bug confirm this with getaheaddev as well img width alt screen shot at pm src steps to reproduce view the settings notice all your data is gone affected services insights opensauced pizza platforms no response browsers no response environment no response additional context no response code of conduct i agree to follow this project s code of conduct contributing docs i agree to follow this project s contribution docs
| 0
|
762,190
| 26,711,143,524
|
IssuesEvent
|
2023-01-28 00:13:59
|
googleapis/gapic-generator-java
|
https://api.github.com/repos/googleapis/gapic-generator-java
|
opened
|
Enable rest_numeric_enum feature in showcase tests
|
type: feature request priority: p3
|
This feature is currently disabled until https://github.com/googleapis/gapic-showcase/issues/1255 is addressed.
|
1.0
|
Enable rest_numeric_enum feature in showcase tests - This feature is currently disabled until https://github.com/googleapis/gapic-showcase/issues/1255 is addressed.
|
non_test
|
enable rest numeric enum feature in showcase tests this feature is currently disabled until is addressed
| 0
|
307,975
| 26,571,729,576
|
IssuesEvent
|
2023-01-21 08:30:58
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix jax_numpy_creation.test_jax_numpy_zeros
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744293680" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
1.0
|
Fix jax_numpy_creation.test_jax_numpy_zeros - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3941620207/jobs/6744293680" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3923202675/jobs/6706647396" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
test
|
fix jax numpy creation test jax numpy zeros tensorflow img src torch img src numpy img src jax img src not found not found
| 1
|
150,159
| 11,950,042,026
|
IssuesEvent
|
2020-04-03 14:35:53
|
ValveSoftware/steam-for-linux
|
https://api.github.com/repos/ValveSoftware/steam-for-linux
|
closed
|
Ubuntu 16.04 Steam client failing
|
Distro Family: Ubuntu Need Retest Steam client
|
#### Your system information
* Steam client version (build number or date): version 1500335472
* Distribution (e.g. Ubuntu): Ubuntu 16.04.2 LTS
* Opted into Steam client beta?: [Yes/No] N
* Have you checked for system updates?: [Yes/No] Y
#### Please describe your issue in as much detail as possible:
Seeing this issue. Running steam on Ubuntu Distro on Paperspace.
AMD64 processor
threadtools.cpp (1343) : Assertion Failed: Thread synchronization object is unuseable
#### Steps for reproducing this issue:
1. run steam command
2. Triggers x forwarding
3. x forwarding freezes when asking me to log in. The UI is unresponsive and eventually the terminal crashes

similar to #5101
Full output
Running Steam on ubuntu 16.04 64-bit
STEAM_RUNTIME is enabled automatically
[2017-08-06 01:22:47] Startup - updater built Jul 17 2017 23:10:02
Looks like steam didn't shutdown cleanly, scheduling immediate update check
[2017-08-06 01:22:49] Checking for update on startup
[2017-08-06 01:22:49] Checking for available updates...
[2017-08-06 01:22:49] Download skipped: /client/steam_client_ubuntu12 version 1500335472, installed version 1500335472
[2017-08-06 01:22:49] Nothing to do
[2017-08-06 01:22:49] Verifying installation...
[2017-08-06 01:22:49] Performing checksum verification of executable files
[2017-08-06 01:22:50] Verification complete
threadtools.cpp (1343) : Assertion Failed: Thread synchronization object is unuseable
|
1.0
|
Ubuntu 16.04 Steam client failing - #### Your system information
* Steam client version (build number or date): version 1500335472
* Distribution (e.g. Ubuntu): Ubuntu 16.04.2 LTS
* Opted into Steam client beta?: [Yes/No] N
* Have you checked for system updates?: [Yes/No] Y
#### Please describe your issue in as much detail as possible:
Seeing this issue. Running Steam on Ubuntu Distro on Paperspace.
AMD64 processor
threadtools.cpp (1343) : Assertion Failed: Thread synchronization object is unuseable
#### Steps for reproducing this issue:
1. run steam command
2. Triggers x forwarding
3. x forwarding freezes when asking me to log in. The UI is unresponsive and eventually the terminal crashes

similar to #5101
Full output
Running Steam on ubuntu 16.04 64-bit
STEAM_RUNTIME is enabled automatically
[2017-08-06 01:22:47] Startup - updater built Jul 17 2017 23:10:02
Looks like steam didn't shutdown cleanly, scheduling immediate update check
[2017-08-06 01:22:49] Checking for update on startup
[2017-08-06 01:22:49] Checking for available updates...
[2017-08-06 01:22:49] Download skipped: /client/steam_client_ubuntu12 version 1500335472, installed version 1500335472
[2017-08-06 01:22:49] Nothing to do
[2017-08-06 01:22:49] Verifying installation...
[2017-08-06 01:22:49] Performing checksum verification of executable files
[2017-08-06 01:22:50] Verification complete
threadtools.cpp (1343) : Assertion Failed: Thread synchronization object is unuseable
|
test
|
ubuntu steam client failing your system information steam client version build number or date version distribution e g ubuntu ubuntu lts opted into steam client beta n have you checked for system updates y please describe your issue in as much detail as possible seeing this issue running steam on ubuntu distro on paperspace processor threadtools cpp assertion failed thread synchronization object is unuseable steps for reproducing this issue run steam command triggers x forwarding x forwarding freezes when asking me to log in the ui is unresponsive and eventually the terminal crashes similar to full output running steam on ubuntu bit steam runtime is enabled automatically startup updater built jul looks like steam didn t shutdown cleanly scheduling immediate update check checking for update on startup checking for available updates download skipped client steam client version installed version nothing to do verifying installation performing checksum verification of executable files verification complete threadtools cpp assertion failed thread synchronization object is unuseable
| 1
|
296,077
| 25,525,397,461
|
IssuesEvent
|
2022-11-29 01:30:24
|
ventoy/Ventoy
|
https://api.github.com/repos/ventoy/Ventoy
|
closed
|
[Success Image Report]: rebornos_xfce_minimal-2022.11.13-x86_64.iso
|
【Tested Image Report】
|
### Official Website List
- [X] I have checked the list in official website and the image file is not listed there.
### Ventoy Version
1.0.79
### BIOS Mode
UEFI Mode
### Partition Style
GPT
### Image file name
rebornos_xfce_minimal-2022.11.13-x86_64.iso
### Image file checksum type
SHA256
### Image file checksum value
856c489bb67f3cecebaf49017e5dc56ddc2278ad201d19e96e99ed786f7746b1
### Image file download link (if applicable)
https://www.rebornos.org/download/
### Test environment
ASUS MD800
### More Details?
This image file booted successfully in Ventoy.
|
1.0
|
[Success Image Report]: rebornos_xfce_minimal-2022.11.13-x86_64.iso - ### Official Website List
- [X] I have checked the list in official website and the image file is not listed there.
### Ventoy Version
1.0.79
### BIOS Mode
UEFI Mode
### Partition Style
GPT
### Image file name
rebornos_xfce_minimal-2022.11.13-x86_64.iso
### Image file checksum type
SHA256
### Image file checksum value
856c489bb67f3cecebaf49017e5dc56ddc2278ad201d19e96e99ed786f7746b1
### Image file download link (if applicable)
https://www.rebornos.org/download/
### Test environment
ASUS MD800
### More Details?
This image file booted successfully in Ventoy.
|
test
|
rebornos xfce minimal iso official website list i have checked the list in official website and the image file is not listed there ventoy version bios mode uefi mode partition style gpt image file name rebornos xfce minimal iso image file checksum type image file checksum value image file download link if applicable test environment asus more details this image file booted successfully in ventoy
| 1
|
162,454
| 12,676,412,042
|
IssuesEvent
|
2020-06-19 05:06:27
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
Test combinations of --set flags in istioctl operator
|
area/environments/operator community/good first issue community/help wanted kind/testing gap lifecycle/stale
|
See https://github.com/istio/istio/issues/21692#issuecomment-593426506 for context. This task may include extending validation to disallow some combinations of test flags.
|
1.0
|
Test combinations of --set flags in istioctl operator - See https://github.com/istio/istio/issues/21692#issuecomment-593426506 for context. This task may include extending validation to disallow some combinations of test flags.
|
test
|
test combinations of set flags in istioctl operator see for context this task may include extending validation to disallow some combinations of test flags
| 1
|
222,754
| 17,480,513,769
|
IssuesEvent
|
2021-08-09 00:47:02
|
CodeForPhilly/paws-data-pipeline
|
https://api.github.com/repos/CodeForPhilly/paws-data-pipeline
|
closed
|
Generate random RFM scores for testing
|
Testing RFM
|
We need an endpoint to populate the RFM tables with a random score for each matching_id to enable UI development.
|
1.0
|
Generate random RFM scores for testing - We need an endpoint to populate the RFM tables with a random score for each matching_id to enable UI development.
|
test
|
generate random rfm scores for testing we need an endpoint to populate the rfm tables with a random score for each matching id to enable ui development
| 1
|
85,796
| 7,993,876,764
|
IssuesEvent
|
2018-07-20 09:17:51
|
apache/bookkeeper
|
https://api.github.com/repos/apache/bookkeeper
|
opened
|
BookieStorageThresholdTest#testStorageThresholdCompaction is flaky on CI
|
area/tests
|
*Problem*
```
2018-07-20\T\08:51:54.182 [ERROR] org.apache.bookkeeper.bookie.BookieStorageThresholdTest.testStorageThresholdCompaction(org.apache.bookkeeper.bookie.BookieStorageThresholdTest)
2018-07-20\T\08:51:54.182 [ERROR] Run 1: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger2356933004759883304test2/current
2018-07-20\T\08:51:54.183 [ERROR] Run 2: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger7058711915875226551test2/current
2018-07-20\T\08:51:54.183 [ERROR] Run 3: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger4884835867022850349test2/current
```
|
1.0
|
BookieStorageThresholdTest#testStorageThresholdCompaction is flaky on CI - *Problem*
```
2018-07-20\T\08:51:54.182 [ERROR] org.apache.bookkeeper.bookie.BookieStorageThresholdTest.testStorageThresholdCompaction(org.apache.bookkeeper.bookie.BookieStorageThresholdTest)
2018-07-20\T\08:51:54.182 [ERROR] Run 1: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger2356933004759883304test2/current
2018-07-20\T\08:51:54.183 [ERROR] Run 2: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger7058711915875226551test2/current
2018-07-20\T\08:51:54.183 [ERROR] Run 3: BookieStorageThresholdTest.testStorageThresholdCompaction:229 Found entry log file ([0,1,2].log. They should have been compacted/tmp/ledger4884835867022850349test2/current
```
|
test
|
bookiestoragethresholdtest teststoragethresholdcompaction is flaky on ci problem t org apache bookkeeper bookie bookiestoragethresholdtest teststoragethresholdcompaction org apache bookkeeper bookie bookiestoragethresholdtest t run bookiestoragethresholdtest teststoragethresholdcompaction found entry log file log they should have been compacted tmp current t run bookiestoragethresholdtest teststoragethresholdcompaction found entry log file log they should have been compacted tmp current t run bookiestoragethresholdtest teststoragethresholdcompaction found entry log file log they should have been compacted tmp current
| 1
|
69,902
| 7,165,834,633
|
IssuesEvent
|
2018-01-29 15:32:49
|
pawelsalawa/sqlitestudio
|
https://api.github.com/repos/pawelsalawa/sqlitestudio
|
closed
|
I suggest to add a file association function
|
enhancement old-sqlitestudio-2
|
_(This issue was migrated from the old bug tracker of SQLiteStudio)_
Original ID from old bug tracker: 800
Originally created at: Wed Mar 14 16:59:45 2012
Originally last updated at: Wed Mar 14 16:59:45 2012
I suggest to add a file association function

**Operating system:**
Windows XP/2003/2000
|
1.0
|
I suggest to add a file association function - _(This issue was migrated from the old bug tracker of SQLiteStudio)_
Original ID from old bug tracker: 800
Originally created at: Wed Mar 14 16:59:45 2012
Originally last updated at: Wed Mar 14 16:59:45 2012
I suggest to add a file association function

**Operating system:**
Windows XP/2003/2000
|
test
|
i suggest to add a file association function this issue was migrated from the old bug tracker of sqlitestudio original id from old bug tracker originally created at wed mar originally last updated at wed mar i suggest to add a file association function n n operating system nwindows xp
| 1
|
199,341
| 15,035,170,337
|
IssuesEvent
|
2021-02-02 13:50:04
|
easydigitaldownloads/easy-digital-downloads
|
https://api.github.com/repos/easydigitaldownloads/easy-digital-downloads
|
closed
|
3.0 - update edd_30_migrate_order parameters
|
type-improvement workflow-has-pr workflow-needs-testing
|
## Enhancement Request
### Explain your enhancement (please be detailed)
We should update the parameters for this hook to be the order ID and payment meta.
### Justification or use case
Currently, the hook has two identical parameters:
```php
do_action( 'edd_30_migrate_order', $order_id, $data->ID );
```
The 3.0 migration specifically maps the original payment ID to the new order ID.
In working on hooking into the migration process for Simple Shipping, it's necessary to retrieve the original payment meta, as not all of it is copied over to the new order (which may be a bug), but Ashley suggests we go ahead and make the meta available to this hook as it's already been retrieved:
>I feel like we should actually update the action hook in core (edd_30_migrate_order) to pass through the meta. It's already retrieved it at that point and it seems reasonable that anyone hooking in would want to use it. ([ref](https://github.com/easydigitaldownloads/edd-simple-shipping/pull/104#discussion_r567958034))
|
1.0
|
3.0 - update edd_30_migrate_order parameters - ## Enhancement Request
### Explain your enhancement (please be detailed)
We should update the parameters for this hook to be the order ID and payment meta.
### Justification or use case
Currently, the hook has two identical parameters:
```php
do_action( 'edd_30_migrate_order', $order_id, $data->ID );
```
The 3.0 migration specifically maps the original payment ID to the new order ID.
In working on hooking into the migration process for Simple Shipping, it's necessary to retrieve the original payment meta, as not all of it is copied over to the new order (which may be a bug), but Ashley suggests we go ahead and make the meta available to this hook as it's already been retrieved:
>I feel like we should actually update the action hook in core (edd_30_migrate_order) to pass through the meta. It's already retrieved it at that point and it seems reasonable that anyone hooking in would want to use it. ([ref](https://github.com/easydigitaldownloads/edd-simple-shipping/pull/104#discussion_r567958034))
|
test
|
update edd migrate order parameters enhancement request explain your enhancement please be detailed we should update the parameters for this hook to be the order id and payment meta justification or use case currently the hook has two identical parameters php do action edd migrate order order id data id the migration specifically maps the original payment id to the new order id in working on hooking into the migration process for simple shipping it s necessary to retrieve the original payment meta as not all of it is copied over to the new order which may be a bug but ashley suggests we go ahead and make the meta available to this hook as it s already been retrieved i feel like we should actually update the action hook in core edd migrate order to pass through the meta it s already retrieved it at that point and it seems reasonable that anyone hooking in would want to use it
| 1
|
63,333
| 6,842,617,615
|
IssuesEvent
|
2017-11-12 04:25:56
|
weidai11/cryptopp
|
https://api.github.com/repos/weidai11/cryptopp
|
closed
|
heap-buffer-overflow in CryptoPP::ECP::SimultaneousMultiply()
|
need information need test case
|
https://circleci.com/gh/ethereum/cpp-ethereum/889
```
=================================================================
==15657==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000ab1c at pc 0x0001230bdb2a bp 0x7fff59f98700 sp 0x7fff59f97ea0
READ of size 16 at 0x60200000ab1c thread T0
#0 0x1230bdb29 in wrap_memcpy (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x18b29)
Running 1 test case...
Test Case "common_encrypt_decrypt":
.
#1 0x106660f80 in std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__swap_out_circular_buffer(std::__1::__split_buffer<unsigned int, std::__1::allocator<unsigned int>&>&) memory:1676
#2 0x1068a6d56 in void std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__push_back_slow_path<unsigned int const&>(unsigned int const&&&) vector:1574
#3 0x108082e75 in CryptoPP::ECP::SimultaneousMultiply(CryptoPP::ECPPoint*, CryptoPP::ECPPoint const&, CryptoPP::Integer const*, unsigned int) const (testeth:x86_64+0x10241ee75)
#4 0x108082b07 in CryptoPP::ECP::ScalarMultiply(CryptoPP::ECPPoint const&, CryptoPP::Integer const&) const (testeth:x86_64+0x10241eb07)
#5 0x10804e447 in CryptoPP::ECPPoint CryptoPP::GeneralCascadeMultiplication<CryptoPP::ECPPoint, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*> >(CryptoPP::AbstractGroup<CryptoPP::ECPPoint> const&, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>) (testeth:x86_64+0x1023ea447)
#6 0x10804d91c in CryptoPP::DL_FixedBasePrecomputationImpl<CryptoPP::ECPPoint>::Exponentiate(CryptoPP::DL_GroupPrecomputation<CryptoPP::ECPPoint> const&, CryptoPP::Integer const&) const (testeth:x86_64+0x1023e991c)
#7 0x10806ac08 in CryptoPP::DL_GroupParameters<CryptoPP::EC2NPoint>::ExponentiateBase(CryptoPP::Integer const&) const (testeth:x86_64+0x102406c08)
#8 0x107e3f5b6 in CryptoPP::DL_EncryptorBase<CryptoPP::ECPPoint>::Encrypt(CryptoPP::RandomNumberGenerator&, unsigned char const*, unsigned long, unsigned char*, CryptoPP::NameValuePairs const&) const pubkey.h:1720
#9 0x107e3ea0b in dev::crypto::Secp256k1PP::encrypt(dev::FixedHash<64u> const&, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) CryptoPP.cpp:203
#10 0x107e1428d in dev::encrypt(dev::FixedHash<64u> const&, dev::vector_ref<unsigned char const>, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) Common.cpp:103
#11 0x10684b26e in Crypto::devcrypto::common_encrypt_decrypt::test_method() crypto.cpp:187
#12 0x10684a08d in Crypto::devcrypto::common_encrypt_decrypt_invoker() crypto.cpp:179
#13 0x1064b1c77 in boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) execution_monitor.ipp:1300
#14 0x1063baa69 in boost::execution_monitor::catch_signals(boost::function<int ()> const&) execution_monitor.ipp:281
#15 0x1063bb196 in boost::execution_monitor::execute(boost::function<int ()> const&) execution_monitor.ipp:1203
#16 0x1063a6119 in boost::execution_monitor::vexecute(boost::function<void ()> const&) execution_monitor.ipp:1309
#17 0x1063aefb1 in boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned int) unit_test_monitor.ipp:46
#18 0x1063b4340 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:771
#19 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#20 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#21 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#22 0x1063ad5a2 in boost::unit_test::framework::run(unsigned long, bool) framework.ipp:1577
#23 0x1063e5d1d in boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) unit_test_main.ipp:231
#24 0x10641c73a in main boostTest.cpp:129
#25 0x7fff909e0234 in start (libdyld.dylib:x86_64+0x5234)
0x60200000ab20 is located 0 bytes to the right of 16-byte region [0x60200000ab10,0x60200000ab20)
allocated by thread T0 here:
#0 0x123105e3b in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x60e3b)
#1 0x106661581 in std::__1::__split_buffer<unsigned int, std::__1::allocator<unsigned int>&>::__split_buffer(unsigned long, unsigned long, std::__1::allocator<unsigned int>&) new:226
#2 0x1068a6cb4 in void std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__push_back_slow_path<unsigned int const&>(unsigned int const&&&) __split_buffer:310
#3 0x108082e75 in CryptoPP::ECP::SimultaneousMultiply(CryptoPP::ECPPoint*, CryptoPP::ECPPoint const&, CryptoPP::Integer const*, unsigned int) const (testeth:x86_64+0x10241ee75)
#4 0x108082b07 in CryptoPP::ECP::ScalarMultiply(CryptoPP::ECPPoint const&, CryptoPP::Integer const&) const (testeth:x86_64+0x10241eb07)
#5 0x10804e447 in CryptoPP::ECPPoint CryptoPP::GeneralCascadeMultiplication<CryptoPP::ECPPoint, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*> >(CryptoPP::AbstractGroup<CryptoPP::ECPPoint> const&, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>) (testeth:x86_64+0x1023ea447)
#6 0x10804d91c in CryptoPP::DL_FixedBasePrecomputationImpl<CryptoPP::ECPPoint>::Exponentiate(CryptoPP::DL_GroupPrecomputation<CryptoPP::ECPPoint> const&, CryptoPP::Integer const&) const (testeth:x86_64+0x1023e991c)
#7 0x10806ac08 in CryptoPP::DL_GroupParameters<CryptoPP::EC2NPoint>::ExponentiateBase(CryptoPP::Integer const&) const (testeth:x86_64+0x102406c08)
#8 0x107e3f5b6 in CryptoPP::DL_EncryptorBase<CryptoPP::ECPPoint>::Encrypt(CryptoPP::RandomNumberGenerator&, unsigned char const*, unsigned long, unsigned char*, CryptoPP::NameValuePairs const&) const pubkey.h:1720
#9 0x107e3ea0b in dev::crypto::Secp256k1PP::encrypt(dev::FixedHash<64u> const&, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) CryptoPP.cpp:203
#10 0x107e1428d in dev::encrypt(dev::FixedHash<64u> const&, dev::vector_ref<unsigned char const>, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) Common.cpp:103
#11 0x10684b26e in Crypto::devcrypto::common_encrypt_decrypt::test_method() crypto.cpp:187
#12 0x10684a08d in Crypto::devcrypto::common_encrypt_decrypt_invoker() crypto.cpp:179
#13 0x1064b1c77 in boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) execution_monitor.ipp:1300
#14 0x1063baa69 in boost::execution_monitor::catch_signals(boost::function<int ()> const&) execution_monitor.ipp:281
#15 0x1063bb196 in boost::execution_monitor::execute(boost::function<int ()> const&) execution_monitor.ipp:1203
#16 0x1063a6119 in boost::execution_monitor::vexecute(boost::function<void ()> const&) execution_monitor.ipp:1309
#17 0x1063aefb1 in boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned int) unit_test_monitor.ipp:46
#18 0x1063b4340 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:771
#19 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#20 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#21 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#22 0x1063ad5a2 in boost::unit_test::framework::run(unsigned long, bool) framework.ipp:1577
#23 0x1063e5d1d in boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) unit_test_main.ipp:231
#24 0x10641c73a in main boostTest.cpp:129
#25 0x7fff909e0234 in start (libdyld.dylib:x86_64+0x5234)
SUMMARY: AddressSanitizer: heap-buffer-overflow (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x18b29) in wrap_memcpy
Shadow bytes around the buggy address:
0x1c0400001510: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001520: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001530: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001540: fa fa 00 fa fa fa 00 fa fa fa 00 fa fa fa fd fa
0x1c0400001550: fa fa fd fa fa fa 00 fa fa fa fd fa fa fa fd fa
=>0x1c0400001560: fa fa 00[04]fa fa 00 04 fa fa fa fa fa fa fa fa
0x1c0400001570: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c0400001580: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c0400001590: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c04000015a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c04000015b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==15657==ABORTING
==15657==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fff59f9c000; bottom 0x000128460000; size: 0x7ffe31b3c000 (140729732284416)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
unknown location:0: fatal error: in "Crypto/devcrypto/common_encrypt_decrypt": signal: SIGABRT (application abort requested)
../test/unittests/libdevcrypto/crypto.cpp:179: last checkpoint: "common_encrypt_decrypt" test entry
```
|
1.0
|
heap-buffer-overflow in CryptoPP::ECP::SimultaneousMultiply() - https://circleci.com/gh/ethereum/cpp-ethereum/889
```
=================================================================
==15657==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000ab1c at pc 0x0001230bdb2a bp 0x7fff59f98700 sp 0x7fff59f97ea0
READ of size 16 at 0x60200000ab1c thread T0
#0 0x1230bdb29 in wrap_memcpy (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x18b29)
Running 1 test case...
Test Case "common_encrypt_decrypt":
.
#1 0x106660f80 in std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__swap_out_circular_buffer(std::__1::__split_buffer<unsigned int, std::__1::allocator<unsigned int>&>&) memory:1676
#2 0x1068a6d56 in void std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__push_back_slow_path<unsigned int const&>(unsigned int const&&&) vector:1574
#3 0x108082e75 in CryptoPP::ECP::SimultaneousMultiply(CryptoPP::ECPPoint*, CryptoPP::ECPPoint const&, CryptoPP::Integer const*, unsigned int) const (testeth:x86_64+0x10241ee75)
#4 0x108082b07 in CryptoPP::ECP::ScalarMultiply(CryptoPP::ECPPoint const&, CryptoPP::Integer const&) const (testeth:x86_64+0x10241eb07)
#5 0x10804e447 in CryptoPP::ECPPoint CryptoPP::GeneralCascadeMultiplication<CryptoPP::ECPPoint, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*> >(CryptoPP::AbstractGroup<CryptoPP::ECPPoint> const&, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>) (testeth:x86_64+0x1023ea447)
#6 0x10804d91c in CryptoPP::DL_FixedBasePrecomputationImpl<CryptoPP::ECPPoint>::Exponentiate(CryptoPP::DL_GroupPrecomputation<CryptoPP::ECPPoint> const&, CryptoPP::Integer const&) const (testeth:x86_64+0x1023e991c)
#7 0x10806ac08 in CryptoPP::DL_GroupParameters<CryptoPP::EC2NPoint>::ExponentiateBase(CryptoPP::Integer const&) const (testeth:x86_64+0x102406c08)
#8 0x107e3f5b6 in CryptoPP::DL_EncryptorBase<CryptoPP::ECPPoint>::Encrypt(CryptoPP::RandomNumberGenerator&, unsigned char const*, unsigned long, unsigned char*, CryptoPP::NameValuePairs const&) const pubkey.h:1720
#9 0x107e3ea0b in dev::crypto::Secp256k1PP::encrypt(dev::FixedHash<64u> const&, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) CryptoPP.cpp:203
#10 0x107e1428d in dev::encrypt(dev::FixedHash<64u> const&, dev::vector_ref<unsigned char const>, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) Common.cpp:103
#11 0x10684b26e in Crypto::devcrypto::common_encrypt_decrypt::test_method() crypto.cpp:187
#12 0x10684a08d in Crypto::devcrypto::common_encrypt_decrypt_invoker() crypto.cpp:179
#13 0x1064b1c77 in boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) execution_monitor.ipp:1300
#14 0x1063baa69 in boost::execution_monitor::catch_signals(boost::function<int ()> const&) execution_monitor.ipp:281
#15 0x1063bb196 in boost::execution_monitor::execute(boost::function<int ()> const&) execution_monitor.ipp:1203
#16 0x1063a6119 in boost::execution_monitor::vexecute(boost::function<void ()> const&) execution_monitor.ipp:1309
#17 0x1063aefb1 in boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned int) unit_test_monitor.ipp:46
#18 0x1063b4340 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:771
#19 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#20 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#21 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#22 0x1063ad5a2 in boost::unit_test::framework::run(unsigned long, bool) framework.ipp:1577
#23 0x1063e5d1d in boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) unit_test_main.ipp:231
#24 0x10641c73a in main boostTest.cpp:129
#25 0x7fff909e0234 in start (libdyld.dylib:x86_64+0x5234)
0x60200000ab20 is located 0 bytes to the right of 16-byte region [0x60200000ab10,0x60200000ab20)
allocated by thread T0 here:
#0 0x123105e3b in wrap__Znwm (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x60e3b)
#1 0x106661581 in std::__1::__split_buffer<unsigned int, std::__1::allocator<unsigned int>&>::__split_buffer(unsigned long, unsigned long, std::__1::allocator<unsigned int>&) new:226
#2 0x1068a6cb4 in void std::__1::vector<unsigned int, std::__1::allocator<unsigned int> >::__push_back_slow_path<unsigned int const&>(unsigned int const&&&) __split_buffer:310
#3 0x108082e75 in CryptoPP::ECP::SimultaneousMultiply(CryptoPP::ECPPoint*, CryptoPP::ECPPoint const&, CryptoPP::Integer const*, unsigned int) const (testeth:x86_64+0x10241ee75)
#4 0x108082b07 in CryptoPP::ECP::ScalarMultiply(CryptoPP::ECPPoint const&, CryptoPP::Integer const&) const (testeth:x86_64+0x10241eb07)
#5 0x10804e447 in CryptoPP::ECPPoint CryptoPP::GeneralCascadeMultiplication<CryptoPP::ECPPoint, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*> >(CryptoPP::AbstractGroup<CryptoPP::ECPPoint> const&, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>, std::__1::__wrap_iter<CryptoPP::BaseAndExponent<CryptoPP::ECPPoint, CryptoPP::Integer>*>) (testeth:x86_64+0x1023ea447)
#6 0x10804d91c in CryptoPP::DL_FixedBasePrecomputationImpl<CryptoPP::ECPPoint>::Exponentiate(CryptoPP::DL_GroupPrecomputation<CryptoPP::ECPPoint> const&, CryptoPP::Integer const&) const (testeth:x86_64+0x1023e991c)
#7 0x10806ac08 in CryptoPP::DL_GroupParameters<CryptoPP::EC2NPoint>::ExponentiateBase(CryptoPP::Integer const&) const (testeth:x86_64+0x102406c08)
#8 0x107e3f5b6 in CryptoPP::DL_EncryptorBase<CryptoPP::ECPPoint>::Encrypt(CryptoPP::RandomNumberGenerator&, unsigned char const*, unsigned long, unsigned char*, CryptoPP::NameValuePairs const&) const pubkey.h:1720
#9 0x107e3ea0b in dev::crypto::Secp256k1PP::encrypt(dev::FixedHash<64u> const&, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) CryptoPP.cpp:203
#10 0x107e1428d in dev::encrypt(dev::FixedHash<64u> const&, dev::vector_ref<unsigned char const>, std::__1::vector<unsigned char, std::__1::allocator<unsigned char> >&) Common.cpp:103
#11 0x10684b26e in Crypto::devcrypto::common_encrypt_decrypt::test_method() crypto.cpp:187
#12 0x10684a08d in Crypto::devcrypto::common_encrypt_decrypt_invoker() crypto.cpp:179
#13 0x1064b1c77 in boost::detail::function::function_obj_invoker0<boost::detail::forward, int>::invoke(boost::detail::function::function_buffer&) execution_monitor.ipp:1300
#14 0x1063baa69 in boost::execution_monitor::catch_signals(boost::function<int ()> const&) execution_monitor.ipp:281
#15 0x1063bb196 in boost::execution_monitor::execute(boost::function<int ()> const&) execution_monitor.ipp:1203
#16 0x1063a6119 in boost::execution_monitor::vexecute(boost::function<void ()> const&) execution_monitor.ipp:1309
#17 0x1063aefb1 in boost::unit_test::unit_test_monitor_t::execute_and_translate(boost::function<void ()> const&, unsigned int) unit_test_monitor.ipp:46
#18 0x1063b4340 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:771
#19 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#20 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#21 0x1063b4df6 in boost::unit_test::framework::state::execute_test_tree(unsigned long, unsigned int, boost::unit_test::framework::state::random_generator_helper const*) framework.ipp:717
#22 0x1063ad5a2 in boost::unit_test::framework::run(unsigned long, bool) framework.ipp:1577
#23 0x1063e5d1d in boost::unit_test::unit_test_main(boost::unit_test::test_suite* (*)(int, char**), int, char**) unit_test_main.ipp:231
#24 0x10641c73a in main boostTest.cpp:129
#25 0x7fff909e0234 in start (libdyld.dylib:x86_64+0x5234)
SUMMARY: AddressSanitizer: heap-buffer-overflow (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x18b29) in wrap_memcpy
Shadow bytes around the buggy address:
0x1c0400001510: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001520: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001530: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa
0x1c0400001540: fa fa 00 fa fa fa 00 fa fa fa 00 fa fa fa fd fa
0x1c0400001550: fa fa fd fa fa fa 00 fa fa fa fd fa fa fa fd fa
=>0x1c0400001560: fa fa 00[04]fa fa 00 04 fa fa fa fa fa fa fa fa
0x1c0400001570: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c0400001580: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c0400001590: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c04000015a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x1c04000015b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==15657==ABORTING
==15657==WARNING: ASan is ignoring requested __asan_handle_no_return: stack top: 0x7fff59f9c000; bottom 0x000128460000; size: 0x7ffe31b3c000 (140729732284416)
False positive error reports may follow
For details see https://github.com/google/sanitizers/issues/189
unknown location:0: fatal error: in "Crypto/devcrypto/common_encrypt_decrypt": signal: SIGABRT (application abort requested)
../test/unittests/libdevcrypto/crypto.cpp:179: last checkpoint: "common_encrypt_decrypt" test entry
```
|
test
|
| 1
|
24,733
| 4,107,121,291
|
IssuesEvent
|
2016-06-06 11:42:46
|
NativeScript/nativescript-cli
|
https://api.github.com/repos/NativeScript/nativescript-cli
|
closed
|
ios builds with 2.0.0: Processing node_modules failed. TypeError: Cannot read property 'split' of null
|
2 - Ready For Test bug
|
I have just restarted my mac for the first time since I updated to the CLI 2.0.0. My project which was working before now won't build, citing error:
>Processing node_modules failed. TypeError: Cannot read property 'split' of null
I am running OSX El Capitan with XCode 7.3
Even if I run `npm install nativescript -g` I get the same error:
```
Unknown-88-63-df-a3-2e-e9:demo georgeedwards$ npm install nativescript -g
npm WARN excluding symbolic link docs/stylesheets/hightlight.css -> ../../node_modules/highlight.js/src/styles/solarized_light.css
npm WARN deprecated lodash-node@2.4.1: This package has been discontinued in favor of lodash@^4.0.0.
npm WARN deprecated lodash@1.0.2: lodash@<3.0.0 is no longer maintained. Upgrade to lodash@^4.0.0.
npm WARN deprecated graceful-fs@1.2.3: graceful-fs v3.0.0 and before will fail on node releases >= v6.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
npm WARN excluding symbolic link docs/assets/ir_black.css -> ../../node_modules/highlight.js/src/styles/ir_black.css
npm WARN excluding symbolic link examples/TestFramework/Test Framework.framework/Resources -> Versions/Current/Resources
npm WARN excluding symbolic link examples/TestFramework/Test Framework.framework/Test Framework -> Versions/Current/Test Framework
npm WARN excluding symbolic link examples/TestFramework/Test Framework.framework/Versions/Current -> A
/Users/georgeedwards/.npm-packages/bin/tns -> /Users/georgeedwards/.npm-packages/lib/node_modules/nativescript/bin/nativescript.js
/Users/georgeedwards/.npm-packages/bin/nativescript -> /Users/georgeedwards/.npm-packages/lib/node_modules/nativescript/bin/nativescript.js
> nativescript@2.0.0 postinstall /Users/georgeedwards/.npm-packages/lib/node_modules/nativescript
> node postinstall.js
WARNING: CocoaPods is not installed or is not configured properly.
You will not be able to build your projects for iOS if they contain plugin with CocoaPod file.
To be able to build such projects, verify that you have installed CocoaPods.
Cannot read property 'split' of null
Error: Unknown command 'dev-post-install'. Try '$ tns help' for a full list of supported commands.
at Object.<anonymous> (/Users/georgeedwards/.npm-packages/lib/node_modules/nativescript/lib/common/errors.js:7:23)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
at Module.require (module.js:353:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/Users/georgeedwards/.npm-packages/lib/node_modules/nativescript/lib/nativescript-cli.js:8:16)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
/Users/georgeedwards/.npm-packages/lib
└── nativescript@2.0.0
```
<!---
@huboard:{"order":237.5,"milestone_order":1734,"custom_state":""}
-->
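The `TypeError: Cannot read property 'split' of null` above is a generic Node.js failure mode: some code called `.split()` on a value that was `null` rather than a string. A minimal sketch of the failure and a defensive guard (a hypothetical helper, not the actual CLI code):

```javascript
// Hypothetical helper, not the actual nativescript-cli code: guard
// against null before splitting, returning null instead of throwing.
function parseVersion(versionString) {
  if (typeof versionString !== 'string') {
    return null;
  }
  return versionString.split('.').map(Number);
}

// Unguarded access throws the exact class of error from the report:
let threw = false;
try {
  null.split('.');
} catch (e) {
  threw = e instanceof TypeError; // "Cannot read property 'split' of null"
}

console.log(threw);                 // true
console.log(parseVersion('2.0.0')); // [ 2, 0, 0 ]
console.log(parseVersion(null));    // null
```

In the report, the value being split presumably came from environment probing (e.g. a tool's version output) that returned nothing, so the guard turns a crash into a handleable `null`.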
|
1.0
|
|
test
|
| 1
|
177,805
| 21,509,186,882
|
IssuesEvent
|
2022-04-28 01:13:54
|
rgordon95/advanced-react-demo
|
https://api.github.com/repos/rgordon95/advanced-react-demo
|
closed
|
WS-2019-0493 (High) detected in handlebars-4.1.2.tgz - autoclosed
|
security vulnerability
|
## WS-2019-0493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /advanced-react-demo/package.json</p>
<p>Path to vulnerable library: advanced-react-demo/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- jest-20.0.4.tgz (Root Library)
- jest-cli-20.0.4.tgz
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-14
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-11-14</p>
<p>Fix Resolution: handlebars - 3.0.8,4.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
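The advisory above concerns template injection: the handlebars `lookup` helper failed to properly validate templates, so attacker-supplied templates could execute arbitrary JavaScript. As a generic illustration (a toy engine, not handlebars internals), any engine that compiles template text into a JavaScript function effectively treats the template as code — which is why user-supplied *template source*, unlike user-supplied *data*, must be trusted:

```javascript
// Toy template engine: compiles "{{expr}}" placeholders into a JS
// template literal wrapped in a Function. The template IS code.
function compile(template) {
  const body = 'return `' + template.replace(/{{(.*?)}}/g, '${$1}') + '`;';
  return new Function('data', 'with (data) { ' + body + ' }');
}

// Benign use: the data is untrusted, but the template is trusted.
const greet = compile('Hello {{name}}!');
console.log(greet({ name: 'world' })); // Hello world!

// Malicious use: an attacker who controls the template runs code.
const evil = compile("{{ (globalThis.pwned = true, 'x') }}");
evil({});
console.log(globalThis.pwned); // true
```

Real engines like handlebars try to sandbox expression evaluation, which is exactly what the vulnerable `lookup` helper versions failed to enforce; the upgrade to 3.0.8 / 4.5.2 restores that validation.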
|
True
|
|
non_test
|
| 0
|
175,263
| 27,817,325,742
|
IssuesEvent
|
2023-03-18 20:49:01
|
hasnainmakada-99/Open-Source-With-Hasnain
|
https://api.github.com/repos/hasnainmakada-99/Open-Source-With-Hasnain
|
closed
|
Add: Search bar functionality to easily search content across the website
|
enhancement help wanted good first issue EddieHub:good-first-issue Designing
|
**Is your feature request related to a problem? Please describe.**
### **I would like to request the contributor to add a search bar functionality which should easily make content accessible to the users, and the searching should be efficient enough to recognize certain keywords.**
**Describe the solution you'd like**
None
**Describe alternatives you've considered**
None
**Additional context**
You can refer to the official docsify docs, on which this project is based, for reference:
https://docsify.js.org/#/
|
1.0
|
|
non_test
|
| 0
|
97,516
| 28,310,494,543
|
IssuesEvent
|
2023-04-10 14:59:55
|
golang/go
|
https://api.github.com/repos/golang/go
|
reopened
|
all: test failures with `device not configured`
|
Builders NeedsInvestigation
|
```
#!watchflakes
post <- test == "" && log ~ `: device not configured`
```
|
1.0
|
|
non_test
|
| 0
|
113,695
| 11,812,250,950
|
IssuesEvent
|
2020-03-19 19:47:25
|
newrelic/developer-toolkit
|
https://api.github.com/repos/newrelic/developer-toolkit
|
closed
|
New Relic CLI Demo Video
|
documentation priority:high size:M
|
Create a quick demo recording of using `newrelic/newrelic-cli` following the [Getting Started](https://github.com/newrelic/newrelic-cli/blob/master/docs/GETTING_STARTED.md) guide.
|
1.0
|
|
non_test
|
| 0
|
505,254
| 14,630,572,222
|
IssuesEvent
|
2020-12-23 17:59:29
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
Category data is changed after saving category link in a loop for different stores
|
Component: Catalog Event: dmcdindia1 Issue: Clear Description Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P3 Progress: ready for dev Reproduced on 2.3.x Severity: S3 Triage: Dev.Experience stale issue
|
<!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Summary (*)
When there is a loop going through a few products and linking those products to categories. Some categories get data copied from another store.
### Examples (*)
It's hard to reproduce, but imagine that we import/update product data for several stores and want to link/unlink products to/from "marketing categories" that have different descriptions or other data per store.
Data from the previous store may be saved in the current store, because `\Magento\Catalog\Model\CategoryLinkRepository` loads categories without specifying a store id, and `\Magento\Catalog\Model\CategoryRepository::get` caches all loaded entities in a class property.
### Proposed solution
Since `\Magento\Catalog\Model\CategoryProductLink` carries no **store_id**, and the table `catalog_category_product` has no store-id column, we can simply load the category from the default store, avoiding the mismatch caused by the **all**-scope cache key in CategoryRepository.
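The behaviour described above can be sketched with a hypothetical repository (illustrative JavaScript, not Magento's actual PHP) whose memoization key omits the store scope:

```javascript
// Illustration of the caching bug described above: a repository that
// memoizes entities by id only, ignoring store scope, returns store-A
// data when the same id is later requested for store B.
class CategoryRepository {
  constructor(loader) {
    this.loader = loader;
    this.cache = new Map();
  }
  // Buggy: the cache key omits storeId, so whichever store loads an
  // id first "wins" for every later store.
  getBuggy(id, storeId) {
    if (!this.cache.has(id)) {
      this.cache.set(id, this.loader(id, storeId));
    }
    return this.cache.get(id);
  }
  // Fixed: scope the cache key by store (or always load one fixed store).
  getFixed(id, storeId) {
    const key = id + ':' + storeId;
    if (!this.cache.has(key)) {
      this.cache.set(key, this.loader(id, storeId));
    }
    return this.cache.get(key);
  }
}

// Fake per-store data source.
const load = (id, storeId) => ({ id, description: 'desc for store ' + storeId });

const repo = new CategoryRepository(load);
console.log(repo.getBuggy(42, 'A').description); // desc for store A
console.log(repo.getBuggy(42, 'B').description); // desc for store A  <-- wrong store
console.log(repo.getFixed(7, 'A').description);  // desc for store A
console.log(repo.getFixed(7, 'B').description);  // desc for store B
```

Loading from a single fixed store, as proposed, makes the store-agnostic cache key harmless because every lookup resolves to the same scope.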
|
1.0
|
|
non_test
|
| 0
|
249,929
| 21,217,898,760
|
IssuesEvent
|
2022-04-11 09:08:17
|
hoppscotch/hoppscotch
|
https://api.github.com/repos/hoppscotch/hoppscotch
|
closed
|
[bug]: GitHub account login failed, prompting some errors
|
bug need testing
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
<img width="935" alt="image" src="https://user-images.githubusercontent.com/41265142/162698070-f3eb8f57-0c0f-424b-a759-3ea9ff1cb9cc.png">
### Steps to reproduce
<img width="935" alt="image" src="https://user-images.githubusercontent.com/41265142/162698142-e23cdc8e-673b-4cc2-9f56-1f832efac44a.png">
### Environment
Release
### Version
Self-hosted
|
1.0
|
[bug]: GitHub account login failed, prompting some errors - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behavior
<img width="935" alt="image" src="https://user-images.githubusercontent.com/41265142/162698070-f3eb8f57-0c0f-424b-a759-3ea9ff1cb9cc.png">
### Steps to reproduce
<img width="935" alt="image" src="https://user-images.githubusercontent.com/41265142/162698142-e23cdc8e-673b-4cc2-9f56-1f832efac44a.png">
### Environment
Release
### Version
Self-hosted
|
test
|
github account login failed prompting some errors is there an existing issue for this i have searched the existing issues current behavior img width alt image src steps to reproduce img width alt image src environment release version self hosted
| 1
|
9,258
| 6,187,706,033
|
IssuesEvent
|
2017-07-04 08:18:23
|
Virtual-Labs/circular-dichronism-spectroscopy-iiith
|
https://api.github.com/repos/Virtual-Labs/circular-dichronism-spectroscopy-iiith
|
closed
|
QA_To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements_Manual_spelling-mistakes
|
Category:Usability Developed by: VLEAD Open-Edx Severity:S3 Status: Differed
|
Defect Description :
Found spelling mistakes in the Manual section of the **To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements** experiment in this lab.
Actual Result :
Found spelling mistakes in the Manual section of the **To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements** experiment in this lab.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers:Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Attachments:

|
True
|
QA_To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements_Manual_spelling-mistakes - Defect Description :
Found spelling mistakes in the Manual section of the **To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements** experiment in this lab.
Actual Result :
Found spelling mistakes in the Manual section of the **To deconvolute the CD spectrum of a given protein solution and to classify it in terms of its secondary structure elements** experiment in this lab.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers:Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Attachments:

|
non_test
|
qa to deconvolute the cd spectrum of a given protein solution and to classify it in terms of its secondary structure elements manual spelling mistakes defect description found spelling mistakes in the manual section of to deconvolute the cd spectrum of a given protein solution and to classify it in terms of its secondary structure elements experiment this lab actual result found spelling mistakes in the manual section of to deconvolute the cd spectrum of a given protein solution and to classify it in terms of its secondary structure elements experiment this lab environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor attachments
| 0
|
1,356
| 2,540,926,634
|
IssuesEvent
|
2015-01-28 01:52:57
|
medic/medic-webapp
|
https://api.github.com/repos/medic/medic-webapp
|
closed
|
User Management: Regressions on save and in user type dropdown
|
3 - Acceptance testing Bug Regression
|
In the new Angular version of the user management screen, I'm seeing a couple of regressions.
**First regression**: I can't get a user to save properly; on both Webkit and Firefox I'm getting the following console error:
Webkit Nightly:
```
Error: undefined is not an object (evaluating 'a.editUserModel.facility.doc._id')
editUser@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:44:40750
e@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16637
$eval@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1083
$apply@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1314
c@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:2:15008
```
Firefox:
```
Error: a.editUserModel.facility.doc is undefined
[46]</</</a.editUser@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:44:21118
nf.prototype.functionCall/<@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:10217
cg[b]</<.compile/</</e@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16636
Vc/this.$get</l.prototype.$eval@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1079
Vc/this.$get</l.prototype.$apply@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1304
cg[b]</<.compile/</<@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16686
Mb/c@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:2:14981
```
**Second regression**: It looks like there's a weird item in the "User Type" dropdown box. It goes away after I change the value of the selection box; the initial markup is:
```
<option value="? string:admin ?"></option>
```
You may already know about one or both of these, but I wanted to get it written down just in case.
|
1.0
|
User Management: Regressions on save and in user type dropdown - In the new Angular version of the user management screen, I'm seeing a couple of regressions.
**First regression**: I can't get a user to save properly; on both Webkit and Firefox I'm getting the following console error:
Webkit Nightly:
```
Error: undefined is not an object (evaluating 'a.editUserModel.facility.doc._id')
editUser@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:44:40750
e@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16637
$eval@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1083
$apply@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1314
c@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:2:15008
```
Firefox:
```
Error: a.editUserModel.facility.doc is undefined
[46]</</</a.editUser@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:44:21118
nf.prototype.functionCall/<@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:10217
cg[b]</<.compile/</</e@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16636
Vc/this.$get</l.prototype.$eval@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1079
Vc/this.$get</l.prototype.$apply@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:4:1304
cg[b]</<.compile/</<@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:5:16686
Mb/c@http://medic.local/medic/_design/medic/_rewrite/static/dist/inbox.js:2:14981
```
**Second regression**: It looks like there's a weird item in the "User Type" dropdown box. It goes away after I change the value of the selection box; the initial markup is:
```
<option value="? string:admin ?"></option>
```
You may already know about one or both of these, but I wanted to get it written down just in case.
|
test
|
user management regressions on save and in user type dropdown in the new angular version of the user management screen i m seeing a couple of regressions first regression i can t get a user to save properly on both webkit and firefox i m getting the following console error webkit nightly error undefined is not an object evaluating a editusermodel facility doc id edituser e eval apply c firefox error a editusermodel facility doc is undefined a edituser nf prototype functioncall cg compile e vc this get l prototype eval vc this get l prototype apply cg compile mb c second regression it looks like there s a weird item in the user type dropdown box it goes away after i change the value of the selection box the initial markup is you may already know about one or both of these but i wanted to get it written down just in case
| 1
|
148,269
| 11,845,339,227
|
IssuesEvent
|
2020-03-24 08:09:34
|
axemclion/IndexedDBShim
|
https://api.github.com/repos/axemclion/IndexedDBShim
|
closed
|
Problem with running individual tests
|
Testing
|
When running the QUnit testing in the browser, if one opts to "Rerun" a particular test, some of these tests do not work properly due to the manner in which the tests were added. It would be nice to work around this issue.
Mocha has this problem as well insofar as running individual tests does not seem to delete the old ones first.
|
1.0
|
Problem with running individual tests - When running the QUnit testing in the browser, if one opts to "Rerun" a particular test, some of these tests do not work properly due to the manner in which the tests were added. It would be nice to work around this issue.
Mocha has this problem as well insofar as running individual tests does not seem to delete the old ones first.
|
test
|
problem with running individual tests when running the qunit testing in the browser if one opts to rerun a particular test some of these tests do not work properly due to the manner in which the tests were added it would be nice to work around this issue mocha has this problem as well insofar as running individual tests does not seem to delete the old ones first
| 1
|
280,333
| 24,295,820,932
|
IssuesEvent
|
2022-09-29 09:57:33
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] InternalCartesianCentroidTests testEqualsAndHashcode failing
|
:Analytics/Geo >test-failure Team:Analytics
|
**Build scan:**
https://gradle-enterprise.elastic.co/s/2douff6uor2wy/tests/:x-pack:plugin:spatial:test/org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests/testEqualsAndHashcode
**Reproduction line:**
`./gradlew ':x-pack:plugin:spatial:test' --tests "org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests.testEqualsAndHashcode" -Dtests.seed=8D68DA4C3F1BC51F -Dtests.locale=tr -Dtests.timezone=Etc/GMT+4 -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests&tests.test=testEqualsAndHashcode
**Failure excerpt:**
```
java.lang.AssertionError: InternalCartesianCentroid mutation should not be equal to original
Expected: not <InternalCentroid{centroid=3.402823466385285E38, 0.0, count=139}>
but: was <InternalCentroid{centroid=3.402823466385285E38, 0.0, count=139}>
at __randomizedtesting.SeedInfo.seed([8D68DA4C3F1BC51F:FC67A281F0FC8C30]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode(EqualsHashCodeTestUtils.java:73)
at org.elasticsearch.test.AbstractWireTestCase.testEqualsAndHashcode(AbstractWireTestCase.java:58)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
|
1.0
|
[CI] InternalCartesianCentroidTests testEqualsAndHashcode failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/2douff6uor2wy/tests/:x-pack:plugin:spatial:test/org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests/testEqualsAndHashcode
**Reproduction line:**
`./gradlew ':x-pack:plugin:spatial:test' --tests "org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests.testEqualsAndHashcode" -Dtests.seed=8D68DA4C3F1BC51F -Dtests.locale=tr -Dtests.timezone=Etc/GMT+4 -Druntime.java=17`
**Applicable branches:**
main
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.spatial.search.aggregations.metrics.InternalCartesianCentroidTests&tests.test=testEqualsAndHashcode
**Failure excerpt:**
```
java.lang.AssertionError: InternalCartesianCentroid mutation should not be equal to original
Expected: not <InternalCentroid{centroid=3.402823466385285E38, 0.0, count=139}>
but: was <InternalCentroid{centroid=3.402823466385285E38, 0.0, count=139}>
at __randomizedtesting.SeedInfo.seed([8D68DA4C3F1BC51F:FC67A281F0FC8C30]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.elasticsearch.test.EqualsHashCodeTestUtils.checkEqualsAndHashCode(EqualsHashCodeTestUtils.java:73)
at org.elasticsearch.test.AbstractWireTestCase.testEqualsAndHashcode(AbstractWireTestCase.java:58)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:833)
```
|
test
|
internalcartesiancentroidtests testequalsandhashcode failing build scan reproduction line gradlew x pack plugin spatial test tests org elasticsearch xpack spatial search aggregations metrics internalcartesiancentroidtests testequalsandhashcode dtests seed dtests locale tr dtests timezone etc gmt druntime java applicable branches main reproduces locally yes failure history failure excerpt java lang assertionerror internalcartesiancentroid mutation should not be equal to original expected not but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org elasticsearch test equalshashcodetestutils checkequalsandhashcode equalshashcodetestutils java at org elasticsearch test abstractwiretestcase testequalsandhashcode abstractwiretestcase java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache 
lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites 
java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 1
|
324,147
| 23,985,646,578
|
IssuesEvent
|
2022-09-13 18:48:15
|
GetStream/stream-chat-react-native
|
https://api.github.com/repos/GetStream/stream-chat-react-native
|
closed
|
Offline Support Docs Inconsistency
|
Documentation released
|
<!--
PLEASE READ THIS BEFORE PROCEEDING:
Did you check docs? - https://getstream.io/chat/docs/sdk/reactnative
If you are looking for an answer to "how to implement/do ... using xx component?" question, please check the Guides section of the docs.
If you can't find an answer there, please create an issue, and we will try to add/include a sample code or example to the Guides section of docs.
This way it can help the other devs who are looking for the same answer.
Also if you have some feedback regarding docs, please don't hesitate to comment there.
Your co-operation is really-really appreciated in this manner. Thanks and happy coding :)
-->
**Describe the bug**
Hi 👋 I was reading the new offline docs and noticed a recent inconsistency with caching of Images.
The section about what is not supported lists caching of Images/Profile Images, but there is now a new section on how to cache images. You may be updating this shortly, but in case it is a mistake, I thought I would raise it.
Please close this if not an actual issue.
|
1.0
|
Offline Support Docs Inconsistency - <!--
PLEASE READ THIS BEFORE PROCEEDING:
Did you check docs? - https://getstream.io/chat/docs/sdk/reactnative
If you are looking for an answer to "how to implement/do ... using xx component?" question, please check the Guides section of the docs.
If you can't find an answer there, please create an issue, and we will try to add/include a sample code or example to the Guides section of docs.
This way it can help the other devs who are looking for the same answer.
Also if you have some feedback regarding docs, please don't hesitate to comment there.
Your co-operation is really-really appreciated in this manner. Thanks and happy coding :)
-->
**Describe the bug**
Hi 👋 I was reading the new offline docs and noticed a recent inconsistency with caching of Images.
The section about what is not supported lists caching of Images/Profile Images, but there is now a new section on how to cache images. You may be updating this shortly, but in case it is a mistake, I thought I would raise it.
Please close this if not an actual issue.
|
non_test
|
offline support docs inconsistency please read this before proceeding did you check docs if you are looking for an answer to how to implement do using xx component question please check the guides section of the docs if you can t find an answer there please create an issue and we will try to add include a sample code or example to the guides section of docs this way it can help the other devs who are looking for the same answer also if you have some feedback regarding docs please don t hesitate to comment there your co operation is really really appreciated in this manner thanks and happy coding describe the bug hi 👋 i was reading the new offline docs and noticed a recent inconsistency with caching of images in the section about what is not supported is caching of images profile images but there is now a new section for how to cache images you may be updating this shortly but just in case it is a mistake i thought i would raise it please close this if not an actual issue
| 0
|
152,339
| 12,102,226,089
|
IssuesEvent
|
2020-04-20 16:21:35
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: transfer-leases/quit failed
|
C-test-failure O-roachtest O-robot branch-release-19.2
|
[(roachtest).transfer-leases/quit failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1862663&tab=buildLog) on [release-19.2@7ff60aac0a394a8d43befc9a0f4e89f911aa1294](https://github.com/cockroachdb/cockroach/commits/7ff60aac0a394a8d43befc9a0f4e89f911aa1294):
```
The test failed on branch=release-19.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20200409-1862663/transfer-leases/quit/run_1
quit.go:59,quit.go:87,quit.go:46,quit.go:343,test_runner.go:753: wo
cluster.go:1420,context.go:135,cluster.go:1409,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1862663-1586414639-17-n3cpu4 --oneshot --ignore-empty-nodes: exit status 1 2: 14044
3: 13896
1: dead
Error: UNCLASSIFIED_PROBLEM:
- 1: dead
main.glob..func13
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129
main.wrap.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800
main.main
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
```
<details><summary>More</summary><p>
Artifacts: [/transfer-leases/quit](https://teamcity.cockroachdb.com/viewLog.html?buildId=1862663&tab=artifacts#/transfer-leases/quit)
Related:
- #47184 roachtest: transfer-leases/quit failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atransfer-leases%2Fquit.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: transfer-leases/quit failed - [(roachtest).transfer-leases/quit failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1862663&tab=buildLog) on [release-19.2@7ff60aac0a394a8d43befc9a0f4e89f911aa1294](https://github.com/cockroachdb/cockroach/commits/7ff60aac0a394a8d43befc9a0f4e89f911aa1294):
```
The test failed on branch=release-19.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20200409-1862663/transfer-leases/quit/run_1
quit.go:59,quit.go:87,quit.go:46,quit.go:343,test_runner.go:753: wo
cluster.go:1420,context.go:135,cluster.go:1409,test_runner.go:825: dead node detection: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod monitor teamcity-1862663-1586414639-17-n3cpu4 --oneshot --ignore-empty-nodes: exit status 1 2: 14044
3: 13896
1: dead
Error: UNCLASSIFIED_PROBLEM:
- 1: dead
main.glob..func13
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1129
main.wrap.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:272
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).execute
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:766
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:852
github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra.(*Command).Execute
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/github.com/spf13/cobra/command.go:800
main.main
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/main.go:1793
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
```
<details><summary>More</summary><p>
Artifacts: [/transfer-leases/quit](https://teamcity.cockroachdb.com/viewLog.html?buildId=1862663&tab=artifacts#/transfer-leases/quit)
Related:
- #47184 roachtest: transfer-leases/quit failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atransfer-leases%2Fquit.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest transfer leases quit failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts transfer leases quit run quit go quit go quit go quit go test runner go wo cluster go context go cluster go test runner go dead node detection home agent work go src github com cockroachdb cockroach bin roachprod monitor teamcity oneshot ignore empty nodes exit status dead error unclassified problem dead main glob home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go main wrap home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command executec home agent work go src github com cockroachdb cockroach vendor github com cobra command go github com cockroachdb cockroach vendor github com cobra command execute home agent work go src github com cockroachdb cockroach vendor github com cobra command go main main home agent work go src github com cockroachdb cockroach pkg cmd roachprod main go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s more artifacts related roachtest transfer leases quit failed powered by
| 1
|
24,038
| 3,901,438,681
|
IssuesEvent
|
2016-04-18 10:51:31
|
bpineau/rcron
|
https://api.github.com/repos/bpineau/rcron
|
closed
|
Info for compilation on Unbreakable Linux 7
|
auto-migrated Priority-Medium Type-Defect
|
```
If anyone runs in similar issues
First I had to follow the hints from "Issue 2: Package dosent compile on
Ubuntu 10.04 LTS"
Download the rcon.8 file from the SVN-repository to the src-folder.
And I had to add "%option noyywrap" to src/rcron_conf-lexer.l
Seems there was a change in flex after version 2.5.4 - at least I found some
similar issues pointing to http://www.campin.net/syslog-ng/faq.html#lex
After that "make" ran through just with a warning. Tests showed that rcon is
running smooth atm.
```
Original issue reported on code.google.com by `e...@tilt-lan.com` on 3 Sep 2014 at 2:16
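For reference, the fix this record describes amounts to one option line near the top of the lexer spec — a minimal sketch only, with the actual rules of `rcron_conf-lexer.l` omitted; `%option noyywrap` tells flex (post-2.5.4) not to require a user-supplied `yywrap()` at link time:

```lex
%option noyywrap
%%
/* ... existing rules from src/rcron_conf-lexer.l ... */
%%
```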
|
1.0
|
Info for compilation on Unbreakable Linux 7 - ```
If anyone runs in similar issues
First I had to follow the hints from "Issue 2: Package dosent compile on
Ubuntu 10.04 LTS"
Download the rcon.8 file from the SVN-repository to the src-folder.
And I had to add "%option noyywrap" to src/rcron_conf-lexer.l
Seems there was a change in flex after version 2.5.4 - at least I found some
similar issues pointing to http://www.campin.net/syslog-ng/faq.html#lex
After that "make" ran through just with a warning. Tests showed that rcon is
running smooth atm.
```
Original issue reported on code.google.com by `e...@tilt-lan.com` on 3 Sep 2014 at 2:16
|
non_test
|
info for compilation on unbreakable linux if anyone runs in similar issues first i had to follow the hints from issue package dosent compile on ubuntu lts download the rcon file from the svn repository to the src folder and i had to add option noyywrap to src rcron conf lexer l seems there was a change in flex after version at least i found some similar issues pointing to after that make ran through just with a warning tests showed that rcon is running smooth atm original issue reported on code google com by e tilt lan com on sep at
| 0
|
427,796
| 12,399,181,786
|
IssuesEvent
|
2020-05-21 04:20:19
|
incognitochain/incognito-chain
|
https://api.github.com/repos/incognitochain/incognito-chain
|
closed
|
[Portal] Implement Incognito Vault
|
Priority: High Type: Feature
|
1. Relaying Bitcoin header chain: relate #697
2. Relaying Binance header chain: relate #696
3. Request becoming custodian (burning collaterals PRV)
4. Request porting from users (pick-up custodians, check porting fee)
5. Verify deposit proof and mint pTokens
6. Request redeem from users (pick-up custodians, check redeem fee)
7. Verify redeem proof and unlock collaterals to custodians
8. Custodian withdraw collateral request
9. Liquidation
10. RPC update/get exchange rate BTC/USDT, BNB/USDT, PRV/USDT
11. RPC push header chain.
12. Rewards, fees for custodians.
|
1.0
|
[Portal] Implement Incognito Vault - 1. Relaying Bitcoin header chain: relate #697
2. Relaying Binance header chain: relate #696
3. Request becoming custodian (burning collaterals PRV)
4. Request porting from users (pick-up custodians, check porting fee)
5. Verify deposit proof and mint pTokens
6. Request redeem from users (pick-up custodians, check redeem fee)
7. Verify redeem proof and unlock collaterals to custodians
8. Custodian withdraw collateral request
9. Liquidation
10. RPC update/get exchange rate BTC/USDT, BNB/USDT, PRV/USDT
11. RPC push header chain.
12. Rewards, fees for custodians.
|
non_test
|
implement incognito vault relaying bitcoin header chain relate relaying binance header chain relate request becoming custodian burning collaterals prv request porting from users pick up custodians check porting fee verify deposit proof and mint ptokens request redeem from users pick up custodians check redeem fee verify redeem proof and unlock collaterals to custodians custodian withdraw collateral request liquidation rpc update get exchange rate btc usdt bnb usdt prv usdt rpc push header chain rewards fees for custodians
| 0
|
178,007
| 13,757,105,188
|
IssuesEvent
|
2020-10-06 21:02:43
|
AllenInstitute/MIES
|
https://api.github.com/repos/AllenInstitute/MIES
|
closed
|
Async frame work needs feature to check if all workloads of a certain type have finished.
|
Testpulse bug
|
Currently the async frame work checks only on ASYNC_Stop if all work loads have finished and were read out.
However if different work load types are sent it would be good to have the option to check if a certain type of workloads completely finished. So one could wait for e.g. the TP processing without stopping the frame work.
The extension would be to implement counting of work loads per type, similar as it is done globally already. If the counted read out equal the number of pushed workloads it is considered finished as in this moment are no workloads of that type in limbo.
|
1.0
|
Async frame work needs feature to check if all workloads of a certain type have finished. - Currently the async frame work checks only on ASYNC_Stop if all work loads have finished and were read out.
However if different work load types are sent it would be good to have the option to check if a certain type of workloads completely finished. So one could wait for e.g. the TP processing without stopping the frame work.
The extension would be to implement counting of work loads per type, similar as it is done globally already. If the counted read out equal the number of pushed workloads it is considered finished as in this moment are no workloads of that type in limbo.
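The counting scheme the record proposes can be sketched in a few lines — this is a hypothetical Python illustration (MIES itself is Igor Pro, and the names here are invented): keep pushed and read-out counters per workload type, and declare a type finished exactly when the two counts match, meaning no workload of that type is in limbo.

```python
from collections import defaultdict

class WorkloadTracker:
    """Per-type workload bookkeeping, mirroring the global check
    described above but restricted to a single workload type."""

    def __init__(self):
        self.pushed = defaultdict(int)    # workloads sent per type
        self.read_out = defaultdict(int)  # workloads consumed per type

    def push(self, wl_type):
        self.pushed[wl_type] += 1

    def mark_read_out(self, wl_type):
        self.read_out[wl_type] += 1

    def finished(self, wl_type):
        # No workload of this type is in limbo when every pushed
        # workload has been read out.
        return self.read_out[wl_type] == self.pushed[wl_type]
```

With this, a caller could wait for e.g. all "TP" workloads without stopping the framework, by polling `finished("TP")`.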
|
test
|
async frame work needs feature to check if all workloads of a certain type have finished currently the async frame work checks only on async stop if all work loads have finished and were read out however if different work load types are sent it would be good to have the option to check if a certain type of workloads completely finished so one could wait for e g the tp processing without stopping the frame work the extension would be to implement counting of work loads per type similar as it is done globally already if the counted read out equal the number of pushed workloads it is considered finished as in this moment are no workloads of that type in limbo
| 1
|
3,456
| 9,648,905,932
|
IssuesEvent
|
2019-05-17 17:35:13
|
mitmedialab/MediaCloud-Web-Tools
|
https://api.github.com/repos/mitmedialab/MediaCloud-Web-Tools
|
closed
|
link to privacy policy
|
architecture enhancement
|
The link is ready, just waiting to finish the the policy and get it posted.
|
1.0
|
link to privacy policy - The link is ready, just waiting to finish the the policy and get it posted.
|
non_test
|
link to privacy policy the link is ready just waiting to finish the the policy and get it posted
| 0
|
7,713
| 18,954,171,722
|
IssuesEvent
|
2021-11-18 18:11:54
|
ohfacts/ficverse-re
|
https://api.github.com/repos/ohfacts/ficverse-re
|
closed
|
objectPickUp
|
gameplay should have architecture
|
build the object pick up system. Use a placeholder object for now to test if it works
|
1.0
|
objectPickUp - build the object pick up system. Use a placeholder object for now to test if it works
|
non_test
|
objectpickup build the object pick up system use a placeholder object for now to test if it works
| 0
|
33,422
| 4,833,501,376
|
IssuesEvent
|
2016-11-08 11:12:38
|
Pliohub/Plio
|
https://api.github.com/repos/Pliohub/Plio
|
closed
|
Reopening the web app on Android doesn't take you to your current workspace
|
bug on hold question testlio
|
When I'm testing on Android, Plio keeps taking me back to an old workspace (Caradon Management plc - rather than the current workspace, Plio Ltd) every time I reopen the Plio web app from an icon on my Android home screen.
|
1.0
|
Reopening the web app on Android doesn't take you to your current workspace - When I'm testing on Android, Plio keeps taking me back to an old workspace (Caradon Management plc - rather than the current workspace, Plio Ltd) every time I reopen the Plio web app from an icon on my Android home screen.
|
test
|
reopening the web app on android doesn t take you to your current workspace when i m testing on android plio keeps taking me back to an old workspace caradon management plc rather than the current workspace plio ltd every time i reopen the plio web app from an icon on my android home screen
| 1
|
221,969
| 17,379,657,826
|
IssuesEvent
|
2021-07-31 12:33:57
|
ColoredCow/portal
|
https://api.github.com/repos/ColoredCow/portal
|
closed
|
Application dashboard tabs resets the job id filter when clicked
|
good first issue module : hr priority : high status : ready to test
|
**Describe the bug**
Currently, users can filter all the applications from the HR > opportunities tab. It takes the user to applications dashboard filtered by job id. By default the "Open" applications tab is active.
However, when you switch the tab to another status "On Hold" or "Closed", the job filter resets and you see applications for all jobs.
**Steps to replicate**
1. Login and click on HR > Recruitment.
2. Click on the `Opportunities` tab.
3. Click on any job which has an application. Click on `view application`.
4. Click on the `Closed` status.
5. You will see the job filter resets and it shows all the closed applications
For more info, check out screenshots from comment below https://github.com/ColoredCow-Portal/portal/issues/133#issuecomment-862952321
|
1.0
|
Application dashboard tabs resets the job id filter when clicked - **Describe the bug**
Currently, users can filter all the applications from the HR > opportunities tab. It takes the user to applications dashboard filtered by job id. By default the "Open" applications tab is active.
However, when you switch the tab to another status "On Hold" or "Closed", the job filter resets and you see applications for all jobs.
**Steps to replicate**
1. Login and click on HR > Recruitment.
2. Click on the `Opportunities` tab.
3. Click on any job which has an application. Click on `view application`.
4. Click on the `Closed` status.
5. You will see the job filter resets and it shows all the closed applications
For more info, check out screenshots from comment below https://github.com/ColoredCow-Portal/portal/issues/133#issuecomment-862952321
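The underlying fix for a bug like this is usually to build each status-tab link from the current URL instead of a bare status link, so existing query filters survive the switch. A language-neutral sketch (parameter names `job` and `status` are assumptions, not the portal's actual ones):

```python
from urllib.parse import urlencode, urlparse, parse_qs, urlunparse

def switch_status_tab(url, new_status):
    """Return a tab URL that changes only the status filter while
    keeping every other query parameter (e.g. the job id) intact."""
    parts = urlparse(url)
    query = {k: v[0] for k, v in parse_qs(parts.query).items()}
    query["status"] = new_status  # hypothetical parameter name
    return urlunparse(parts._replace(query=urlencode(query)))
```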
|
test
|
application dashboard tabs resets the job id filter when clicked describe the bug currently users can filter all the applications from the hr opportunities tab it takes the user to applications dashboard filtered by job id by default the open applications tab is active however when you switch the tab to another status on hold or closed the job filter resets and you see applications for all jobs steps to replicate login and click on hr recruitment click on the opportunities tab click on any job which has an application click on view application click on the closed status you will see the job filter resets and it shows all the closed applications for more info check out screenshots from comment below
| 1
|
131,035
| 10,678,832,196
|
IssuesEvent
|
2019-10-21 18:04:07
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
Storage account name in Update Access Tier dialog is not surrounded by quotes
|
:triumph: easy 🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.10.0
**Build:** [20190916.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3055553)
**Branch:** master
**Platform/OS:** Windows 10/ Centos Linux release_7.6.1810(core)/ macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Right click one StorageV2 account -> Select 'Set Default Access Tier'.
2. Check the description on the 'Update Access Tier' dialog.
**Expect Experience:**
Indicate the storage account in the description on 'Update Access Tier' dialog.
Like 'Which access tier would you like **the account 'myadlsxuanacc'** to have?'
**Actual Experience:**
It doesn't indicate the storage account in the description on 'Update Access Tier' dialog.
Display 'Which access tier would you like **myadlsxuanacc** to have?'

**More Info:**
1. This issue also reproduces for one BlobStorage account.
2. This issue also reproduces for one blob.

|
1.0
|
Storage account name in Update Access Tier dialog is not surrounded by quotes - **Storage Explorer Version:** 1.10.0
**Build:** [20190916.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3055553)
**Branch:** master
**Platform/OS:** Windows 10/ Centos Linux release_7.6.1810(core)/ macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Right click one StorageV2 account -> Select 'Set Default Access Tier'.
2. Check the description on the 'Update Access Tier' dialog.
**Expect Experience:**
Indicate the storage account in the description on 'Update Access Tier' dialog.
Like 'Which access tier would you like **the account 'myadlsxuanacc'** to have?'
**Actual Experience:**
It doesn't indicate the storage account in the description on 'Update Access Tier' dialog.
Display 'Which access tier would you like **myadlsxuanacc** to have?'

**More Info:**
1. This issue also reproduces for one BlobStorage account.
2. This issue also reproduces for one blob.

|
test
|
storage account name in update access tier dialog is not surrounded by quotes storage explorer version build branch master platform os windows centos linux release core macos high sierra architecture regression from not a regression steps to reproduce right click one account select set default access tier check the description on the update access tier dialog expect experience indicate the storage account in the description on update access tier dialog like which access tier would you like the account myadlsxuanacc to have actual experience it doesn t indicate the storage account in the description on update access tier dialog display which access tier would you like myadlsxuanacc to have more info this issue also reproduces for one blobstorage account this issue also reproduces for one blob
| 1
|
18,354
| 25,366,043,078
|
IssuesEvent
|
2022-11-21 06:31:16
|
Automattic/woocommerce-subscriptions-core
|
https://api.github.com/repos/Automattic/woocommerce-subscriptions-core
|
closed
|
With HPOS enabled, tokens don't get copied to the renewal order causing failed payments
|
type: bug compatibility: HPOS
|
## Describe the bug
<!-- A clear and concise description of what the bug is. Please be as descriptive as possible, and include screenshots to illustrate. -->
With HPOS tables enabled (see screenshot below), tokens don't get copied to the renewal order which causes payment gateways like WC Payments to fail the payment attempt.
<img width="757" alt="Screen Shot 2022-11-21 at 9 58 59 am" src="https://user-images.githubusercontent.com/8490476/202933685-ecdc9ae5-e73c-44cf-a434-0d2f6011714e.png">
## To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
1. Enable HPOS tables (see screenshot above)
2. Enable WC Payments + WC Subscriptions (extension)
3. Optional. Enable the early renewal order feature
4. Purchase a subscription product using WC Payments
5. From the **my Account → View Subscription** page click the early renewal button or process a renewal order manually via admin/action scheduler.
7. Notice the error that the payment was unsuccessful.
<img width="1109" alt="Screen Shot 2022-11-21 at 10 12 49 am" src="https://user-images.githubusercontent.com/8490476/202934336-686fb12f-6e5f-458b-b707-9e7d4544eaf2.png">
^ error after attempting to process an early renewal order.
<img width="1711" alt="Screen Shot 2022-11-21 at 10 04 23 am" src="https://user-images.githubusercontent.com/8490476/202933947-6fa7aed3-e10d-4386-95cc-3328f80dded0.png">
^ Early renewal settings.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The tokens should be copied and set on the renewal order, and result in a successful payment.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
The tokens aren't copied and the payment fails.
## Product impact
<!-- What products does this issue affect? -->
- [ ] Does this issue affect WooCommerce Subscriptions? yes/no/tbc, add issue ref
- [ ] Does this issue affect WooCommerce Payments? yes/no/tbc, add issue ref
|
True
|
With HPOS enabled, tokens don't get copied to the renewal order causing failed payments - ## Describe the bug
<!-- A clear and concise description of what the bug is. Please be as descriptive as possible, and include screenshots to illustrate. -->
With HPOS tables enabled (see screenshot below), tokens don't get copied to the renewal order which causes payment gateways like WC Payments to fail the payment attempt.
<img width="757" alt="Screen Shot 2022-11-21 at 9 58 59 am" src="https://user-images.githubusercontent.com/8490476/202933685-ecdc9ae5-e73c-44cf-a434-0d2f6011714e.png">
## To Reproduce
<!-- Describe the steps to reproduce the behavior. -->
1. Enable HPOS tables (see screenshot above)
2. Enable WC Payments + WC Subscriptions (extension)
3. Optional. Enable the early renewal order feature
4. Purchase a subscription product using WC Payments
5. From the **my Account → View Subscription** page click the early renewal button or process a renewal order manually via admin/action scheduler.
7. Notice the error that the payment was unsuccessful.
<img width="1109" alt="Screen Shot 2022-11-21 at 10 12 49 am" src="https://user-images.githubusercontent.com/8490476/202934336-686fb12f-6e5f-458b-b707-9e7d4544eaf2.png">
^ error after attempting to process an early renewal order.
<img width="1711" alt="Screen Shot 2022-11-21 at 10 04 23 am" src="https://user-images.githubusercontent.com/8490476/202933947-6fa7aed3-e10d-4386-95cc-3328f80dded0.png">
^ Early renewal settings.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The tokens should be copied and set on the renewal order, and result in a successful payment.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
The tokens aren't copied and the payment fails.
## Product impact
<!-- What products does this issue affect? -->
- [ ] Does this issue affect WooCommerce Subscriptions? yes/no/tbc, add issue ref
- [ ] Does this issue affect WooCommerce Payments? yes/no/tbc, add issue ref
|
non_test
|
with hpos enabled tokens don t get copied to the renewal order causing failed payments describe the bug with hpos tables enabled see screenshot below tokens don t get copied to the renewal order which causes payment gateways like wc payments to fail the payment attempt img width alt screen shot at am src to reproduce enable hpos tables see screenshot above enable wc payments wc subscriptions extension optional enable the early renewal order feature purchase a subscription product using wc payments from the my account → view subscription page click the early renewal button or process a renewal order manually via admin action scheduler notice the error that the payment was unsuccessful img width alt screen shot at am src error after attempting to process an early renewal order img width alt screen shot at am src early renewal settings expected behavior the tokens should be copied and set on the renewal order and result in a successful payment actual behavior the tokens aren t copied and the payment fails product impact does this issue affect woocommerce subscriptions yes no tbc add issue ref does this issue affect woocommerce payments yes no tbc add issue ref
| 0
|
90,964
| 8,288,027,516
|
IssuesEvent
|
2018-09-19 10:37:23
|
humera987/HumTestData
|
https://api.github.com/repos/humera987/HumTestData
|
closed
|
testing_hums : ApiV1ProjectsProjectidFilesIdGetPathParamNullValueId
|
testing_hums
|
Project : testing_hums
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/projects/{projectId}/files/null
Request :
Response :
Not enough variable values available to expand 'projectId'
Logs :
Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
testing_hums : ApiV1ProjectsProjectidFilesIdGetPathParamNullValueId - Project : testing_hums
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 500
Headers : {}
Endpoint : http://13.56.210.25/api/v1/projects/{projectId}/files/null
Request :
Response :
Not enough variable values available to expand 'projectId'
Logs :
Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]
--- FX Bot ---
|
test
|
testing hums project testing hums job uat env uat region fxlabs us west result fail status code headers endpoint request response not enough variable values available to expand projectid logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 1
|
46,115
| 5,785,062,204
|
IssuesEvent
|
2017-05-01 00:42:07
|
avr-rust/rust
|
https://api.github.com/repos/avr-rust/rust
|
closed
|
Invalid machine code generated for R_AVR_16 fixups
|
A-llvm bug has-llvm-commit has-reduced-testcase
|
GIven the following `main` function:
```rust
static mut PIXELS: &'static mut [u8] = &mut [0b_1010_0110; 10];
#[no_mangle]
pub extern fn main() {
unsafe {
spi::setup();
for &pixel in PIXELS.iter() {
spi::sync(pixel);
}
loop {}
}
}
```
the generated `.elf` file seems to contain invalid machine code at several places, e.g.
```asm
000000b2 <LBB1_5>:
b2: 8d b5 in r24, 0x2d ; 45
b4: 88 23 and r24, r24
b6: ea f7 brpl .-6 ; 0xb2 <LBB1_5>
b8: 8e b5 in r24, 0x2e ; 46
ba: 83 92 .word 0x9283 ; ????
bc: 00 00 nop
be: 8e bd out 0x2e, r24 ; 46
```
If needed, I can try attaching a full project, but it is basically using a trimmed-down version of `libcore`. Full `.ll` listing:
```llvm
; ModuleID = 'hello_avr.cgu-0.rs'
source_filename = "hello_avr.cgu-0.rs"
target datalayout = "e-p:16:16:16-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-n8"
target triple = "avr-atmel-none"
@ref_mut.0 = internal unnamed_addr global [10 x i8] c"\A6\A6\A6\A6\A6\A6\A6\A6\A6\A6", align 1
; Function Attrs: norecurse nounwind readnone uwtable
define void @rust_eh_personality({}* nocapture, {}* nocapture) unnamed_addr #0 {
start:
ret void
}
; Function Attrs: noreturn nounwind uwtable
define void @main() unnamed_addr #1 {
start:
%0 = load volatile i8, i8* inttoptr (i16 37 to i8*), align 1
%1 = or i8 %0, 4
store volatile i8 %1, i8* inttoptr (i16 37 to i8*), align 1
%2 = load volatile i8, i8* inttoptr (i16 36 to i8*), align 4
%3 = or i8 %2, 44
store volatile i8 %3, i8* inttoptr (i16 36 to i8*), align 4
%4 = load volatile i8, i8* inttoptr (i16 36 to i8*), align 4
%5 = and i8 %4, -17
store volatile i8 %5, i8* inttoptr (i16 36 to i8*), align 4
%6 = load volatile i8, i8* inttoptr (i16 76 to i8*), align 4
%7 = or i8 %6, 80
store volatile i8 %7, i8* inttoptr (i16 76 to i8*), align 4
%8 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 0), align 1
store volatile i8 %8, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i
bb2.i: ; preds = %bb2.i, %start
%9 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%10 = icmp sgt i8 %9, -1
br i1 %10, label %bb2.i, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit: ; preds = %bb2.i
%11 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%12 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 1), align 1
store volatile i8 %12, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.1
bb9: ; preds = %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9, %bb9
br label %bb9
bb2.i.1: ; preds = %bb2.i.1, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit
%13 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%14 = icmp sgt i8 %13, -1
br i1 %14, label %bb2.i.1, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1: ; preds = %bb2.i.1
%15 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%16 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 2), align 1
store volatile i8 %16, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.2
bb2.i.2: ; preds = %bb2.i.2, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1
%17 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%18 = icmp sgt i8 %17, -1
br i1 %18, label %bb2.i.2, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2: ; preds = %bb2.i.2
%19 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%20 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 3), align 1
store volatile i8 %20, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.3
bb2.i.3: ; preds = %bb2.i.3, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2
%21 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%22 = icmp sgt i8 %21, -1
br i1 %22, label %bb2.i.3, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3: ; preds = %bb2.i.3
%23 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%24 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 4), align 1
store volatile i8 %24, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.4
bb2.i.4: ; preds = %bb2.i.4, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3
%25 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%26 = icmp sgt i8 %25, -1
br i1 %26, label %bb2.i.4, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4: ; preds = %bb2.i.4
%27 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%28 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 5), align 1
store volatile i8 %28, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.5
bb2.i.5: ; preds = %bb2.i.5, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4
%29 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%30 = icmp sgt i8 %29, -1
br i1 %30, label %bb2.i.5, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5: ; preds = %bb2.i.5
%31 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%32 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 6), align 1
store volatile i8 %32, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.6
bb2.i.6: ; preds = %bb2.i.6, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5
%33 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%34 = icmp sgt i8 %33, -1
br i1 %34, label %bb2.i.6, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6: ; preds = %bb2.i.6
%35 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%36 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 7), align 1
store volatile i8 %36, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.7
bb2.i.7: ; preds = %bb2.i.7, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6
%37 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%38 = icmp sgt i8 %37, -1
br i1 %38, label %bb2.i.7, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7: ; preds = %bb2.i.7
%39 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%40 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 8), align 1
store volatile i8 %40, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.8
bb2.i.8: ; preds = %bb2.i.8, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7
%41 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%42 = icmp sgt i8 %41, -1
br i1 %42, label %bb2.i.8, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8: ; preds = %bb2.i.8
%43 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%44 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 9), align 1
store volatile i8 %44, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.9
bb2.i.9: ; preds = %bb2.i.9, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8
%45 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%46 = icmp sgt i8 %45, -1
br i1 %46, label %bb2.i.9, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9: ; preds = %bb2.i.9
%47 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
br label %bb9
}
attributes #0 = { norecurse nounwind readnone uwtable }
attributes #1 = { noreturn nounwind uwtable }
!llvm.module.flags = !{!0}
!0 = !{i32 1, !"PIE Level", i32 2}
```
|
1.0
|
Invalid machine code generated for R_AVR_16 fixups - Given the following `main` function:
```rust
static mut PIXELS: &'static mut [u8] = &mut [0b_1010_0110; 10];
#[no_mangle]
pub extern fn main() {
unsafe {
spi::setup();
for &pixel in PIXELS.iter() {
spi::sync(pixel);
}
loop {}
}
}
```
the generated `.elf` file seems to contain invalid machine code at several places, e.g.
```asm
000000b2 <LBB1_5>:
b2: 8d b5 in r24, 0x2d ; 45
b4: 88 23 and r24, r24
b6: ea f7 brpl .-6 ; 0xb2 <LBB1_5>
b8: 8e b5 in r24, 0x2e ; 46
ba: 83 92 .word 0x9283 ; ????
bc: 00 00 nop
be: 8e bd out 0x2e, r24 ; 46
```
If needed, I can try attaching a full project, but it is basically using a trimmed-down version of `libcore`. Full `.ll` listing:
```llvm
; ModuleID = 'hello_avr.cgu-0.rs'
source_filename = "hello_avr.cgu-0.rs"
target datalayout = "e-p:16:16:16-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-n8"
target triple = "avr-atmel-none"
@ref_mut.0 = internal unnamed_addr global [10 x i8] c"\A6\A6\A6\A6\A6\A6\A6\A6\A6\A6", align 1
; Function Attrs: norecurse nounwind readnone uwtable
define void @rust_eh_personality({}* nocapture, {}* nocapture) unnamed_addr #0 {
start:
ret void
}
; Function Attrs: noreturn nounwind uwtable
define void @main() unnamed_addr #1 {
start:
%0 = load volatile i8, i8* inttoptr (i16 37 to i8*), align 1
%1 = or i8 %0, 4
store volatile i8 %1, i8* inttoptr (i16 37 to i8*), align 1
%2 = load volatile i8, i8* inttoptr (i16 36 to i8*), align 4
%3 = or i8 %2, 44
store volatile i8 %3, i8* inttoptr (i16 36 to i8*), align 4
%4 = load volatile i8, i8* inttoptr (i16 36 to i8*), align 4
%5 = and i8 %4, -17
store volatile i8 %5, i8* inttoptr (i16 36 to i8*), align 4
%6 = load volatile i8, i8* inttoptr (i16 76 to i8*), align 4
%7 = or i8 %6, 80
store volatile i8 %7, i8* inttoptr (i16 76 to i8*), align 4
%8 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 0), align 1
store volatile i8 %8, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i
bb2.i: ; preds = %bb2.i, %start
%9 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%10 = icmp sgt i8 %9, -1
br i1 %10, label %bb2.i, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit: ; preds = %bb2.i
%11 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%12 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 1), align 1
store volatile i8 %12, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.1
bb9: ; preds = %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9, %bb9
br label %bb9
bb2.i.1: ; preds = %bb2.i.1, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit
%13 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%14 = icmp sgt i8 %13, -1
br i1 %14, label %bb2.i.1, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1: ; preds = %bb2.i.1
%15 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%16 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 2), align 1
store volatile i8 %16, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.2
bb2.i.2: ; preds = %bb2.i.2, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.1
%17 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%18 = icmp sgt i8 %17, -1
br i1 %18, label %bb2.i.2, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2: ; preds = %bb2.i.2
%19 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%20 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 3), align 1
store volatile i8 %20, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.3
bb2.i.3: ; preds = %bb2.i.3, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.2
%21 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%22 = icmp sgt i8 %21, -1
br i1 %22, label %bb2.i.3, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3: ; preds = %bb2.i.3
%23 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%24 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 4), align 1
store volatile i8 %24, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.4
bb2.i.4: ; preds = %bb2.i.4, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.3
%25 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%26 = icmp sgt i8 %25, -1
br i1 %26, label %bb2.i.4, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4: ; preds = %bb2.i.4
%27 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%28 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 5), align 1
store volatile i8 %28, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.5
bb2.i.5: ; preds = %bb2.i.5, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.4
%29 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%30 = icmp sgt i8 %29, -1
br i1 %30, label %bb2.i.5, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5: ; preds = %bb2.i.5
%31 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%32 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 6), align 1
store volatile i8 %32, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.6
bb2.i.6: ; preds = %bb2.i.6, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.5
%33 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%34 = icmp sgt i8 %33, -1
br i1 %34, label %bb2.i.6, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6: ; preds = %bb2.i.6
%35 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%36 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 7), align 1
store volatile i8 %36, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.7
bb2.i.7: ; preds = %bb2.i.7, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.6
%37 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%38 = icmp sgt i8 %37, -1
br i1 %38, label %bb2.i.7, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7: ; preds = %bb2.i.7
%39 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%40 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 8), align 1
store volatile i8 %40, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.8
bb2.i.8: ; preds = %bb2.i.8, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.7
%41 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%42 = icmp sgt i8 %41, -1
br i1 %42, label %bb2.i.8, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8: ; preds = %bb2.i.8
%43 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
%44 = load i8, i8* getelementptr inbounds ([10 x i8], [10 x i8]* @ref_mut.0, i16 0, i16 9), align 1
store volatile i8 %44, i8* inttoptr (i16 78 to i8*), align 2
br label %bb2.i.9
bb2.i.9: ; preds = %bb2.i.9, %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.8
%45 = load volatile i8, i8* inttoptr (i16 77 to i8*), align 1
%46 = icmp sgt i8 %45, -1
br i1 %46, label %bb2.i.9, label %_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9
_ZN9hello_avr3spi4sync17he0f643349c96805cE.exit.9: ; preds = %bb2.i.9
%47 = load volatile i8, i8* inttoptr (i16 78 to i8*), align 2
br label %bb9
}
attributes #0 = { norecurse nounwind readnone uwtable }
attributes #1 = { noreturn nounwind uwtable }
!llvm.module.flags = !{!0}
!0 = !{i32 1, !"PIE Level", i32 2}
```
|
test
|
invalid machine code generated for r avr fixups given the following main function rust static mut pixels static mut mut pub extern fn main unsafe spi setup for pixel in pixels iter spi sync pixel loop the generated elf file seems to contain invalid machine code at several places e g asm in and ea brpl in ba word bc nop be bd out if needed i can try attaching a full project but it is basically using a trimmed down version of libcore full ll listing llvm moduleid hello avr cgu rs source filename hello avr cgu rs target datalayout e p target triple avr atmel none ref mut internal unnamed addr global c align function attrs norecurse nounwind readnone uwtable define void rust eh personality nocapture nocapture unnamed addr start ret void function attrs noreturn nounwind uwtable define void main unnamed addr start load volatile inttoptr to align or store volatile inttoptr to align load volatile inttoptr to align or store volatile inttoptr to align load volatile inttoptr to align and store volatile inttoptr to align load volatile inttoptr to align or store volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i start load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i preds exit br label i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store 
volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align load getelementptr inbounds ref mut align store volatile inttoptr to align br label i i preds i exit load volatile inttoptr to align icmp sgt br label i label exit exit preds i load volatile inttoptr to align br label attributes norecurse nounwind readnone uwtable attributes noreturn nounwind uwtable llvm module flags pie level
| 1
|
406,865
| 27,585,813,725
|
IssuesEvent
|
2023-03-08 19:40:41
|
Vaskivskyi/ha-asusrouter
|
https://api.github.com/repos/Vaskivskyi/ha-asusrouter
|
closed
|
[Device support] Upload/Download values seem to be monthly stats
|
documentation network
|
### Anything you want to mention?
Have a feeling this is due to how the device is reporting the stat, but I am trying to get the daily downloads / upload values out of the "rt_ax95q_wan_download(upload)" sensor. The values reported seem to be monthly, but the daily stats are available on the router itself.
So far this seems to be a PEBKAC issue, and not one with the integration itself.
From the HA side, I cannot seem to find a way to reset this value daily outside of the meters sensor, which this is not.
Thoughts or input would be appreciated.
### Your device model
ZenWifi AX (XT8)
### Firmware type
Stock
### Firmware version
3.0.0.4.388_22525
### Does everything work well?
Yes
|
1.0
|
[Device support] Upload/Download values seem to be monthly stats - ### Anything you want to mention?
Have a feeling this is due to how the device is reporting the stat, but I am trying to get the daily downloads / upload values out of the "rt_ax95q_wan_download(upload)" sensor. The values reported seem to be monthly, but the daily stats are available on the router itself.
So far this seems to be a PEBKAC issue, and not one with the integration itself.
From the HA side, I cannot seem to find a way to reset this value daily outside of the meters sensor, which this is not.
Thoughts or input would be appreciated.
### Your device model
ZenWifi AX (XT8)
### Firmware type
Stock
### Firmware version
3.0.0.4.388_22525
### Does everything work well?
Yes
|
non_test
|
upload download values seem to be monthly stats anything you want to mention have a feeling this is due to how the device is reporting the stat but i am trying to get the daily downloads upload values out of the rt wan download upload sensor the values reported seem to be monthly but the daily stats are available on the router itself so far this seems to be a pebkac issue and not one with the integration itself from the ha side i cannot seem to find a way to reset this value daily outside of the meters sensor which this is not thoughts or input would be appreciated your device model zenwifi ax firmware type stock firmware version does everything work well yes
| 0
|
306,774
| 9,403,863,895
|
IssuesEvent
|
2019-04-09 03:21:01
|
ClangBuiltLinux/linux
|
https://api.github.com/repos/ClangBuiltLinux/linux
|
closed
|
-Wpointer-bool-conversion in include/trace/events/cgroup.h
|
-Wpointer-bool-conversion [ARCH] arm32 [BUG] linux good first issue low priority
|
From Linaro TCWG CI:
```
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:96:
00:00:34 In file included from ./include/trace/trace_events.h:547:
00:00:34 ./include/trace/events/cgroup.h:11:498: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static inline __attribute__((__always_inline__)) __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) int trace_event_get_offsets_cgroup_root( struct trace_event_data_offsets_cgroup_root *__data_offsets, struct cgroup_root *root) { int __data_size = 0; int __attribute__((__unused__)) __item_length; struct trace_event_raw_cgroup_root __attribute__((__unused__)) *entry; __item_length = (strlen((root->name) ? (const char *)(root->name) : "(null)") + 1) * sizeof(char); __data_offsets->name = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->name |= __item_length << 16; __data_size += __item_length;; return __data_size; };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:819: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static inline __attribute__((__always_inline__)) __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) int trace_event_get_offsets_cgroup_migrate( struct trace_event_data_offsets_cgroup_migrate *__data_offsets, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { int __data_size = 0; int __attribute__((__unused__)) __item_length; struct trace_event_raw_cgroup_migrate __attribute__((__unused__)) *entry; __item_length = (strlen((path) ? (const char *)(path) : "(null)") + 1) * sizeof(char); __data_offsets->dst_path = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->dst_path |= __item_length << 16; __data_size += __item_length; __item_length = (strlen((task->comm) ? (const char *)(task->comm) : "(null)") + 1) * sizeof(char); __data_offsets->comm = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->comm |= __item_length << 16; __data_size += __item_length;; return __data_size; };
00:00:34 ~~~~~~^~~~ ~
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:96:
00:00:34 In file included from ./include/trace/trace_events.h:740:
00:00:34 ./include/trace/events/cgroup.h:11:792: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void trace_event_raw_event_cgroup_root(void *__data, struct cgroup_root *root) { struct trace_event_file *trace_file = __data; struct trace_event_data_offsets_cgroup_root __attribute__((__unused__)) __data_offsets; struct trace_event_buffer fbuffer; struct trace_event_raw_cgroup_root *entry; int __data_size; if (trace_trigger_soft_disabled(trace_file)) return; __data_size = trace_event_get_offsets_cgroup_root(&__data_offsets, root); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + __data_size); if (!entry) return; entry->__data_loc_name = __data_offsets.name; { entry->root = root->hierarchy_id; entry->ss_mask = root->subsys_mask; strcpy(((char *)((void *)entry + (entry->__data_loc_name & 0xffff))), (root->name) ? (const char *)(root->name) : "(null)");;; } trace_event_buffer_commit(&fbuffer); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:1134: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void trace_event_raw_event_cgroup_migrate(void *__data, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { struct trace_event_file *trace_file = __data; struct trace_event_data_offsets_cgroup_migrate __attribute__((__unused__)) __data_offsets; struct trace_event_buffer fbuffer; struct trace_event_raw_cgroup_migrate *entry; int __data_size; if (trace_trigger_soft_disabled(trace_file)) return; __data_size = trace_event_get_offsets_cgroup_migrate(&__data_offsets, dst_cgrp, path, task, threadgroup); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + __data_size); if (!entry) return; entry->__data_loc_dst_path = __data_offsets.dst_path; entry->__data_loc_comm = __data_offsets.comm; { entry->dst_root = dst_cgrp->root->hierarchy_id; entry->dst_id = dst_cgrp->id; entry->dst_level = dst_cgrp->level; strcpy(((char *)((void *)entry + (entry->__data_loc_dst_path & 0xffff))), (path) ? (const char *)(path) : "(null)");; entry->pid = task->pid; strcpy(((char *)((void *)entry + (entry->__data_loc_comm & 0xffff))), (task->comm) ? (const char *)(task->comm) : "(null)");;; } trace_event_buffer_commit(&fbuffer); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:97:
00:00:34 In file included from ./include/trace/perf.h:90:
00:00:34 ./include/trace/events/cgroup.h:11:1586: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void perf_trace_cgroup_root(void *__data, struct cgroup_root *root) { struct trace_event_call *event_call = __data; struct trace_event_data_offsets_cgroup_root __attribute__((__unused__)) __data_offsets; struct trace_event_raw_cgroup_root *entry; struct pt_regs *__regs; u64 __count = 1; struct task_struct *__task = ((void *)0); struct hlist_head *head; int __entry_size; int __data_size; int rctx; __data_size = trace_event_get_offsets_cgroup_root(&__data_offsets, root); head = ({ do { const void *__vpp_verify = (typeof((event_call->perf_events) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*(event_call->perf_events)) *)(event_call->perf_events)); (typeof((typeof(*(event_call->perf_events)) *)(event_call->perf_events))) (__ptr + (((__per_cpu_offset[(current_thread_info()->cpu)])))); }); }); if (!bpf_prog_array_valid(event_call) && __builtin_constant_p(!__task) && !__task && hlist_empty(head)) return; __entry_size = ((((__data_size + sizeof(*entry) + sizeof(u32))) + ((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)) & ~((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)); __entry_size -= sizeof(u32); entry = perf_trace_buf_alloc(__entry_size, &__regs, &rctx); if (!entry) return; perf_fetch_caller_regs(__regs); entry->__data_loc_name = __data_offsets.name; { entry->root = root->hierarchy_id; entry->ss_mask = root->subsys_mask; strcpy(((char *)((void *)entry + (entry->__data_loc_name & 0xffff))), (root->name) ? (const char *)(root->name) : "(null)");;; } perf_trace_run_bpf_submit(entry, __entry_size, rctx, event_call, __count, __regs, head, __task); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:1928: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void perf_trace_cgroup_migrate(void *__data, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { struct trace_event_call *event_call = __data; struct trace_event_data_offsets_cgroup_migrate __attribute__((__unused__)) __data_offsets; struct trace_event_raw_cgroup_migrate *entry; struct pt_regs *__regs; u64 __count = 1; struct task_struct *__task = ((void *)0); struct hlist_head *head; int __entry_size; int __data_size; int rctx; __data_size = trace_event_get_offsets_cgroup_migrate(&__data_offsets, dst_cgrp, path, task, threadgroup); head = ({ do { const void *__vpp_verify = (typeof((event_call->perf_events) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*(event_call->perf_events)) *)(event_call->perf_events)); (typeof((typeof(*(event_call->perf_events)) *)(event_call->perf_events))) (__ptr + (((__per_cpu_offset[(current_thread_info()->cpu)])))); }); }); if (!bpf_prog_array_valid(event_call) && __builtin_constant_p(!__task) && !__task && hlist_empty(head)) return; __entry_size = ((((__data_size + sizeof(*entry) + sizeof(u32))) + ((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)) & ~((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)); __entry_size -= sizeof(u32); entry = perf_trace_buf_alloc(__entry_size, &__regs, &rctx); if (!entry) return; perf_fetch_caller_regs(__regs); entry->__data_loc_dst_path = __data_offsets.dst_path; entry->__data_loc_comm = __data_offsets.comm; { entry->dst_root = dst_cgrp->root->hierarchy_id; entry->dst_id = dst_cgrp->id; entry->dst_level = dst_cgrp->level; strcpy(((char *)((void *)entry + (entry->__data_loc_dst_path & 0xffff))), (path) ? (const char *)(path) : "(null)");; entry->pid = task->pid; strcpy(((char *)((void *)entry + (entry->__data_loc_comm & 0xffff))), (task->comm) ? 
(const char *)(task->comm) : "(null)");;; } perf_trace_run_bpf_submit(entry, __entry_size, rctx, event_call, __count, __regs, head, __task); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 6 warnings generated.
```
|
1.0
|
-Wpointer-bool-conversion in include/trace/events/cgroup.h - From Linaro TCWG CI:
```
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:96:
00:00:34 In file included from ./include/trace/trace_events.h:547:
00:00:34 ./include/trace/events/cgroup.h:11:498: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static inline __attribute__((__always_inline__)) __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) int trace_event_get_offsets_cgroup_root( struct trace_event_data_offsets_cgroup_root *__data_offsets, struct cgroup_root *root) { int __data_size = 0; int __attribute__((__unused__)) __item_length; struct trace_event_raw_cgroup_root __attribute__((__unused__)) *entry; __item_length = (strlen((root->name) ? (const char *)(root->name) : "(null)") + 1) * sizeof(char); __data_offsets->name = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->name |= __item_length << 16; __data_size += __item_length;; return __data_size; };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:819: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static inline __attribute__((__always_inline__)) __attribute__((__gnu_inline__)) __attribute__((__unused__)) __attribute__((__no_instrument_function__)) __attribute__((__no_instrument_function__)) int trace_event_get_offsets_cgroup_migrate( struct trace_event_data_offsets_cgroup_migrate *__data_offsets, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { int __data_size = 0; int __attribute__((__unused__)) __item_length; struct trace_event_raw_cgroup_migrate __attribute__((__unused__)) *entry; __item_length = (strlen((path) ? (const char *)(path) : "(null)") + 1) * sizeof(char); __data_offsets->dst_path = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->dst_path |= __item_length << 16; __data_size += __item_length; __item_length = (strlen((task->comm) ? (const char *)(task->comm) : "(null)") + 1) * sizeof(char); __data_offsets->comm = __data_size + __builtin_offsetof(typeof(*entry), __data); __data_offsets->comm |= __item_length << 16; __data_size += __item_length;; return __data_size; };
00:00:34 ~~~~~~^~~~ ~
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:96:
00:00:34 In file included from ./include/trace/trace_events.h:740:
00:00:34 ./include/trace/events/cgroup.h:11:792: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void trace_event_raw_event_cgroup_root(void *__data, struct cgroup_root *root) { struct trace_event_file *trace_file = __data; struct trace_event_data_offsets_cgroup_root __attribute__((__unused__)) __data_offsets; struct trace_event_buffer fbuffer; struct trace_event_raw_cgroup_root *entry; int __data_size; if (trace_trigger_soft_disabled(trace_file)) return; __data_size = trace_event_get_offsets_cgroup_root(&__data_offsets, root); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + __data_size); if (!entry) return; entry->__data_loc_name = __data_offsets.name; { entry->root = root->hierarchy_id; entry->ss_mask = root->subsys_mask; strcpy(((char *)((void *)entry + (entry->__data_loc_name & 0xffff))), (root->name) ? (const char *)(root->name) : "(null)");;; } trace_event_buffer_commit(&fbuffer); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:1134: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void trace_event_raw_event_cgroup_migrate(void *__data, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { struct trace_event_file *trace_file = __data; struct trace_event_data_offsets_cgroup_migrate __attribute__((__unused__)) __data_offsets; struct trace_event_buffer fbuffer; struct trace_event_raw_cgroup_migrate *entry; int __data_size; if (trace_trigger_soft_disabled(trace_file)) return; __data_size = trace_event_get_offsets_cgroup_migrate(&__data_offsets, dst_cgrp, path, task, threadgroup); entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry) + __data_size); if (!entry) return; entry->__data_loc_dst_path = __data_offsets.dst_path; entry->__data_loc_comm = __data_offsets.comm; { entry->dst_root = dst_cgrp->root->hierarchy_id; entry->dst_id = dst_cgrp->id; entry->dst_level = dst_cgrp->level; strcpy(((char *)((void *)entry + (entry->__data_loc_dst_path & 0xffff))), (path) ? (const char *)(path) : "(null)");; entry->pid = task->pid; strcpy(((char *)((void *)entry + (entry->__data_loc_comm & 0xffff))), (task->comm) ? (const char *)(task->comm) : "(null)");;; } trace_event_buffer_commit(&fbuffer); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 In file included from kernel/cgroup/cgroup.c:63:
00:00:34 In file included from ./include/trace/events/cgroup.h:155:
00:00:34 In file included from ./include/trace/define_trace.h:97:
00:00:34 In file included from ./include/trace/perf.h:90:
00:00:34 ./include/trace/events/cgroup.h:11:1586: warning: address of array 'root->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void perf_trace_cgroup_root(void *__data, struct cgroup_root *root) { struct trace_event_call *event_call = __data; struct trace_event_data_offsets_cgroup_root __attribute__((__unused__)) __data_offsets; struct trace_event_raw_cgroup_root *entry; struct pt_regs *__regs; u64 __count = 1; struct task_struct *__task = ((void *)0); struct hlist_head *head; int __entry_size; int __data_size; int rctx; __data_size = trace_event_get_offsets_cgroup_root(&__data_offsets, root); head = ({ do { const void *__vpp_verify = (typeof((event_call->perf_events) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*(event_call->perf_events)) *)(event_call->perf_events)); (typeof((typeof(*(event_call->perf_events)) *)(event_call->perf_events))) (__ptr + (((__per_cpu_offset[(current_thread_info()->cpu)])))); }); }); if (!bpf_prog_array_valid(event_call) && __builtin_constant_p(!__task) && !__task && hlist_empty(head)) return; __entry_size = ((((__data_size + sizeof(*entry) + sizeof(u32))) + ((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)) & ~((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)); __entry_size -= sizeof(u32); entry = perf_trace_buf_alloc(__entry_size, &__regs, &rctx); if (!entry) return; perf_fetch_caller_regs(__regs); entry->__data_loc_name = __data_offsets.name; { entry->root = root->hierarchy_id; entry->ss_mask = root->subsys_mask; strcpy(((char *)((void *)entry + (entry->__data_loc_name & 0xffff))), (root->name) ? (const char *)(root->name) : "(null)");;; } perf_trace_run_bpf_submit(entry, __entry_size, rctx, event_call, __count, __regs, head, __task); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 ./include/trace/events/cgroup.h:106:1928: warning: address of array 'task->comm' will always evaluate to 'true' [-Wpointer-bool-conversion]
00:00:34 static __attribute__((__no_instrument_function__)) void perf_trace_cgroup_migrate(void *__data, struct cgroup *dst_cgrp, const char *path, struct task_struct *task, bool threadgroup) { struct trace_event_call *event_call = __data; struct trace_event_data_offsets_cgroup_migrate __attribute__((__unused__)) __data_offsets; struct trace_event_raw_cgroup_migrate *entry; struct pt_regs *__regs; u64 __count = 1; struct task_struct *__task = ((void *)0); struct hlist_head *head; int __entry_size; int __data_size; int rctx; __data_size = trace_event_get_offsets_cgroup_migrate(&__data_offsets, dst_cgrp, path, task, threadgroup); head = ({ do { const void *__vpp_verify = (typeof((event_call->perf_events) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*(event_call->perf_events)) *)(event_call->perf_events)); (typeof((typeof(*(event_call->perf_events)) *)(event_call->perf_events))) (__ptr + (((__per_cpu_offset[(current_thread_info()->cpu)])))); }); }); if (!bpf_prog_array_valid(event_call) && __builtin_constant_p(!__task) && !__task && hlist_empty(head)) return; __entry_size = ((((__data_size + sizeof(*entry) + sizeof(u32))) + ((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)) & ~((typeof((__data_size + sizeof(*entry) + sizeof(u32))))((sizeof(u64))) - 1)); __entry_size -= sizeof(u32); entry = perf_trace_buf_alloc(__entry_size, &__regs, &rctx); if (!entry) return; perf_fetch_caller_regs(__regs); entry->__data_loc_dst_path = __data_offsets.dst_path; entry->__data_loc_comm = __data_offsets.comm; { entry->dst_root = dst_cgrp->root->hierarchy_id; entry->dst_id = dst_cgrp->id; entry->dst_level = dst_cgrp->level; strcpy(((char *)((void *)entry + (entry->__data_loc_dst_path & 0xffff))), (path) ? (const char *)(path) : "(null)");; entry->pid = task->pid; strcpy(((char *)((void *)entry + (entry->__data_loc_comm & 0xffff))), (task->comm) ? 
(const char *)(task->comm) : "(null)");;; } perf_trace_run_bpf_submit(entry, __entry_size, rctx, event_call, __count, __regs, head, __task); };
00:00:34 ~~~~~~^~~~ ~
00:00:34 6 warnings generated.
```
|
non_test
|
wpointer bool conversion in include trace events cgroup h from linaro tcwg ci in file included from kernel cgroup cgroup c in file included from include trace events cgroup h in file included from include trace define trace h in file included from include trace trace events h include trace events cgroup h warning address of array root name will always evaluate to true static inline attribute always inline attribute gnu inline attribute unused attribute no instrument function attribute no instrument function int trace event get offsets cgroup root struct trace event data offsets cgroup root data offsets struct cgroup root root int data size int attribute unused item length struct trace event raw cgroup root attribute unused entry item length strlen root name const char root name null sizeof char data offsets name data size builtin offsetof typeof entry data data offsets name item length data size item length return data size include trace events cgroup h warning address of array task comm will always evaluate to true static inline attribute always inline attribute gnu inline attribute unused attribute no instrument function attribute no instrument function int trace event get offsets cgroup migrate struct trace event data offsets cgroup migrate data offsets struct cgroup dst cgrp const char path struct task struct task bool threadgroup int data size int attribute unused item length struct trace event raw cgroup migrate attribute unused entry item length strlen path const char path null sizeof char data offsets dst path data size builtin offsetof typeof entry data data offsets dst path item length comm const char task comm null sizeof char data offsets comm data size builtin offsetof typeof entry data data offsets comm item length data size item length return data size in file included from kernel cgroup cgroup c in file included from include trace events cgroup h in file included from include trace define trace h in file included from include trace trace events h 
include trace events cgroup h warning address of array root name will always evaluate to true static attribute no instrument function void trace event raw event cgroup root void data struct cgroup root root struct trace event file trace file data struct trace event data offsets cgroup root attribute unused data offsets struct trace event buffer fbuffer struct trace event raw cgroup root entry int data size if trace trigger soft disabled trace file return data size trace event get offsets cgroup root data offsets root entry trace event buffer reserve fbuffer trace file sizeof entry data size if entry return entry data loc name data offsets name entry root root hierarchy id entry ss mask root subsys mask strcpy char void entry entry data loc name root name const char root name null trace event buffer commit fbuffer include trace events cgroup h warning address of array task comm will always evaluate to true static attribute no instrument function void trace event raw event cgroup migrate void data struct cgroup dst cgrp const char path struct task struct task bool threadgroup struct trace event file trace file data struct trace event data offsets cgroup migrate attribute unused data offsets struct trace event buffer fbuffer struct trace event raw cgroup migrate entry int data size if trace trigger soft disabled trace file return data size trace event get offsets cgroup migrate data offsets dst cgrp path task threadgroup entry trace event buffer reserve fbuffer trace file sizeof entry data size if entry return entry data loc dst path data offsets dst path entry data loc comm data offsets comm entry dst root dst cgrp root hierarchy id entry dst id dst cgrp id entry dst level dst cgrp level strcpy char void entry entry data loc dst path path const char path null entry pid task pid strcpy char void entry entry data loc comm task comm const char task comm null trace event buffer commit fbuffer in file included from kernel cgroup cgroup c in file included from include 
trace events cgroup h in file included from include trace define trace h in file included from include trace perf h include trace events cgroup h warning address of array root name will always evaluate to true static attribute no instrument function void perf trace cgroup root void data struct cgroup root root struct trace event call event call data struct trace event data offsets cgroup root attribute unused data offsets struct trace event raw cgroup root entry struct pt regs regs count struct task struct task void struct hlist head head int entry size int data size int rctx data size trace event get offsets cgroup root data offsets root head do const void vpp verify typeof event call perf events void void vpp verify while unsigned long ptr ptr unsigned long typeof event call perf events event call perf events typeof typeof event call perf events event call perf events ptr per cpu offset if bpf prog array valid event call builtin constant p task task hlist empty head return entry size data size sizeof entry sizeof typeof data size sizeof entry sizeof sizeof typeof data size sizeof entry sizeof sizeof entry size sizeof entry perf trace buf alloc entry size regs rctx if entry return perf fetch caller regs regs entry data loc name data offsets name entry root root hierarchy id entry ss mask root subsys mask strcpy char void entry entry data loc name root name const char root name null perf trace run bpf submit entry entry size rctx event call count regs head task include trace events cgroup h warning address of array task comm will always evaluate to true static attribute no instrument function void perf trace cgroup migrate void data struct cgroup dst cgrp const char path struct task struct task bool threadgroup struct trace event call event call data struct trace event data offsets cgroup migrate attribute unused data offsets struct trace event raw cgroup migrate entry struct pt regs regs count struct task struct task void struct hlist head head int entry size int 
data size int rctx data size trace event get offsets cgroup migrate data offsets dst cgrp path task threadgroup head do const void vpp verify typeof event call perf events void void vpp verify while unsigned long ptr ptr unsigned long typeof event call perf events event call perf events typeof typeof event call perf events event call perf events ptr per cpu offset if bpf prog array valid event call builtin constant p task task hlist empty head return entry size data size sizeof entry sizeof typeof data size sizeof entry sizeof sizeof typeof data size sizeof entry sizeof sizeof entry size sizeof entry perf trace buf alloc entry size regs rctx if entry return perf fetch caller regs regs entry data loc dst path data offsets dst path entry data loc comm data offsets comm entry dst root dst cgrp root hierarchy id entry dst id dst cgrp id entry dst level dst cgrp level strcpy char void entry entry data loc dst path path const char path null entry pid task pid strcpy char void entry entry data loc comm task comm const char task comm null perf trace run bpf submit entry entry size rctx event call count regs head task warnings generated
| 0
|
181,601
| 14,059,472,865
|
IssuesEvent
|
2020-11-03 03:13:00
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
gen_istio_image_list works only on helm yaml files
|
area/test and release
|
**Bug description**
We need to update this script to work with the charts from installer. specifically, when I download the release, I can run the tool and it will parse the yaml files from install/kubernetes/operator/charts to generate the image list.
|
1.0
|
gen_istio_image_list works only on helm yaml files -
**Bug description**
We need to update this script to work with the charts from installer. specifically, when I download the release, I can run the tool and it will parse the yaml files from install/kubernetes/operator/charts to generate the image list.
|
test
|
gen istio image list works only on helm yaml files bug description we need to update this script to work with the charts from installer specifically when i download the release i can run the tool and it will parse the yaml files from install kubernetes operator charts to generate the image list
| 1
|
35,743
| 2,793,030,772
|
IssuesEvent
|
2015-05-11 08:09:27
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
Add an option ScrollToCell
|
Feature Priority: normal
|
When a user clicks on a cell or starts to edit its content, the viewport scrolls to the cell's coordinates.
Sometimes (in my case), this behavior is not suitable.
Could you add an option to handsontable to disable this feature?
Thank you in advance,
|
1.0
|
Add an option ScrollToCell - When a user clicks on a cell or starts to edit its content, the viewport scrolls to the cell's coordinates.
Sometimes (in my case), this behavior is not suitable.
Could you add an option to handsontable to disable this feature?
Thank you in advance,
|
non_test
|
add an option scrolltocell when a user clicks on a cell or starts to edit its content the viewport scrolls to the cell s coordinates sometimes in my case this behavior is not suitable could you add an option to handsontable to disable this feature thank you in advance
| 0
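The behavior described in the record above (the viewport scrolling to a clicked cell's coordinates) is commonly made configurable with a boolean option. The sketch below is a generic, hypothetical illustration of such an opt-out flag; `GridViewport`, `selectCell`, and `scrollToCell` are invented names, not Handsontable's actual API:

```javascript
// Hypothetical sketch: gating "scroll viewport to cell" behind an option.
// Names are illustrative only; this is not Handsontable's implementation.
class GridViewport {
  constructor(options = {}) {
    // scrollToCell defaults to true, matching the behavior the record describes
    this.scrollToCell = options.scrollToCell !== false;
    this.scrolledTo = null; // records the last cell scrolled into view
    this.selection = null;
  }

  // Called when a cell is clicked or its editor opens
  selectCell(row, col) {
    this.selection = { row, col };
    if (this.scrollToCell) {
      this.scrolledTo = { row, col }; // stand-in for the real scroll call
    }
  }
}
```

With `scrollToCell: false`, selection still updates but the viewport stays put, which is the opt-out the record requests.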
|
207,739
| 15,834,252,003
|
IssuesEvent
|
2021-04-06 16:33:20
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
New Smoke Test: Login
|
VSP-testing-team
|
## Issue Description
There is a need for a smoke test that tests standard login on the platform.
---
## Tasks
- [ ] Go to va.gov
- [ ] Click on Sign In
- [ ] Click on the id.me button
- [ ] Enter credentials
## Acceptance Criteria
- [ ] _What will be created or happen as a result of this story?_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
1.0
|
New Smoke Test: Login - ## Issue Description
There is a need for a smoke test that tests standard login on the platform.
---
## Tasks
- [ ] Go to va.gov
- [ ] Click on Sign In
- [ ] Click on the id.me button
- [ ] Enter credentials
## Acceptance Criteria
- [ ] _What will be created or happen as a result of this story?_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
test
|
new smoke test login issue description there is a need for a smoke test that tests standard login on the platform tasks go to va gov click on sign in click on the id me button enter credentials acceptance criteria what will be created or happen as a result of this story how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc
| 1
|
318,413
| 9,692,109,977
|
IssuesEvent
|
2019-05-24 13:03:37
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[NPC/Daily] Daryl Riknussun <Cooking Trainer>-5159- Ironforge (issue 2)
|
Daily Fixed in Dev Low Priority Major City
|
**Links:**
npc - https://www.wowhead.com/npc=5159/daryl-riknussun
daily list - https://www.wowhead.com/npc=5159/daryl-riknussun#starts
**What is happening:**
Noticed over the last few days that the daily reset time is different on this daily than with the fishing daily in Ironforge
**cooking reset timer is different than the fishing (only sometimes)
could be due to broken / missing dailies**
note:same thing happens in Stormwind for daily there too
**What should happen:**
All dailies should reset same time (from when you did them)
@Rushor - check all dailies are there from dailies list
https://www.wowhead.com/npc=5159/daryl-riknussun#starts
Noticed that is one requires lvl 35 - https://cata-twinhead.twinstar.cz/?quest=6612
could be worlth checking that dailies are scaling to lvl too please
|
1.0
|
[NPC/Daily] Daryl Riknussun <Cooking Trainer>-5159- Ironforge (issue 2) - **Links:**
npc - https://www.wowhead.com/npc=5159/daryl-riknussun
daily list - https://www.wowhead.com/npc=5159/daryl-riknussun#starts
**What is happening:**
Noticed over the last few days that the daily reset time is different on this daily than with the fishing daily in Ironforge
**cooking reset timer is different than the fishing (only sometimes)
could be due to broken / missing dailies**
note:same thing happens in Stormwind for daily there too
**What should happen:**
All dailies should reset same time (from when you did them)
@Rushor - check all dailies are there from dailies list
https://www.wowhead.com/npc=5159/daryl-riknussun#starts
Noticed that is one requires lvl 35 - https://cata-twinhead.twinstar.cz/?quest=6612
could be worlth checking that dailies are scaling to lvl too please
|
non_test
|
daryl riknussun ironforge issue links npc daily list what is happening noticed over the last few days that the daily reset time is different on this daily than with the fishing daily in ironforge cooking reset timer is different than the fishing only sometimes could be due to broken missing dailies note same thing happens in stormwind for daily there too what should happen all dailies should reset same time from when you did them rushor check all dailies are there from dailies list noticed that is one requires lvl could be worlth checking that dailies are scaling to lvl too please
| 0
|
183,927
| 14,263,262,302
|
IssuesEvent
|
2020-11-20 14:12:15
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Add a test to catch native editor crash happening on https://github.com/WordPress/gutenberg/pull/24711#issuecomment-707058795
|
Automated Testing [Type] Enhancement
|
Looks like https://github.com/WordPress/gutenberg/pull/24711 was leading to a native editor crash and while we're in the process of temporarily reverting it, it'd be good to add a test in the mobile testsuite here in the Gutenberg repo to catch a crash in a very basic flow of using the Columns block.
|
1.0
|
Add a test to catch native editor crash happening on https://github.com/WordPress/gutenberg/pull/24711#issuecomment-707058795 - Looks like https://github.com/WordPress/gutenberg/pull/24711 was leading to a native editor crash and while we're in the process of temporarily reverting it, it'd be good to add a test in the mobile testsuite here in the Gutenberg repo to catch a crash in a very basic flow of using the Columns block.
|
test
|
add a test to catch native editor crash happening on looks like was leading to a native editor crash and while we re in the process of temporarily reverting it it d be good to add a test in the mobile testsuite here in the gutenberg repo to catch a crash in a very basic flow of using the columns block
| 1
|
306,334
| 26,458,927,607
|
IssuesEvent
|
2023-01-16 16:01:16
|
getsentry/sentry-javascript
|
https://api.github.com/repos/getsentry/sentry-javascript
|
closed
|
Fix flaky doubleEndMethodOnVercel test
|
Meta: Help Wanted Package: nextjs Status: Backlog Type: Tests
|
### Problem Statement
The `doubleEndMethodOnVercel` test in the NextJS integration tests are flaky. See an example here: https://github.com/getsentry/sentry-javascript/actions/runs/3930047407/jobs/6719941317
This test was introduced here: https://github.com/getsentry/sentry-javascript/pull/6674
```
[nextjs@10 | webpack@5] Running server tests with options: --silent
X Scenario failed: doubleEndMethodOnVercel.js (line: 9)
'' == '{"success":true}'
✓ Scenario succeded: errorApiEndpoint.js
✓ Scenario succeded: errorServerSideProps.js
✓ Scenario succeded: excludedApiEndpoints.js
✓ Scenario succeded: tracing200.js
✓ Scenario succeded: tracing500.js
✓ Scenario succeded: tracingServerGetInitialProps.js
✓ Scenario succeded: tracingServerGetServerSideProps.js
✓ Scenario succeded: tracingServerGetServerSidePropsCustomPageExtension.js
✓ Scenario succeded: tracingWithSentryAPI.js
✓ Scenario succeded: tracingHttp.js
✓ Scenario succeded: cjsApiEndpoints.js
X Some scenarios failed
[nextjs@10 | webpack@5] Server integration tests failed
[nextjs] Cleaning up...
[nextjs] Test run complete
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Error: Process completed with exit code 1.
```
### Solution Brainstorm
This might be indicative of a deeper issue here, that the fix in https://github.com/getsentry/sentry-javascript/pull/6674 did not work. As such, it might require some more investigation into alternative wrapping strategies for `res.end`.
|
1.0
|
Fix flaky doubleEndMethodOnVercel test - ### Problem Statement
The `doubleEndMethodOnVercel` test in the NextJS integration tests are flaky. See an example here: https://github.com/getsentry/sentry-javascript/actions/runs/3930047407/jobs/6719941317
This test was introduced here: https://github.com/getsentry/sentry-javascript/pull/6674
```
[nextjs@10 | webpack@5] Running server tests with options: --silent
X Scenario failed: doubleEndMethodOnVercel.js (line: 9)
'' == '{"success":true}'
✓ Scenario succeded: errorApiEndpoint.js
✓ Scenario succeded: errorServerSideProps.js
✓ Scenario succeded: excludedApiEndpoints.js
✓ Scenario succeded: tracing200.js
✓ Scenario succeded: tracing500.js
✓ Scenario succeded: tracingServerGetInitialProps.js
✓ Scenario succeded: tracingServerGetServerSideProps.js
✓ Scenario succeded: tracingServerGetServerSidePropsCustomPageExtension.js
✓ Scenario succeded: tracingWithSentryAPI.js
✓ Scenario succeded: tracingHttp.js
✓ Scenario succeded: cjsApiEndpoints.js
X Some scenarios failed
[nextjs@10 | webpack@5] Server integration tests failed
[nextjs] Cleaning up...
[nextjs] Test run complete
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Error: Process completed with exit code 1.
```
### Solution Brainstorm
This might be indicative of a deeper issue here, that the fix in https://github.com/getsentry/sentry-javascript/pull/6674 did not work. As such, it might require some more investigation into alternative wrapping strategies for `res.end`.
|
test
|
fix flaky doubleendmethodonvercel test problem statement the doubleendmethodonvercel test in the nextjs integration tests are flaky see an example here this test was introduced here running server tests with options silent x scenario failed doubleendmethodonvercel js line success true ✓ scenario succeded errorapiendpoint js ✓ scenario succeded errorserversideprops js ✓ scenario succeded excludedapiendpoints js ✓ scenario succeded js ✓ scenario succeded js ✓ scenario succeded tracingservergetinitialprops js ✓ scenario succeded tracingservergetserversideprops js ✓ scenario succeded tracingservergetserversidepropscustompageextension js ✓ scenario succeded tracingwithsentryapi js ✓ scenario succeded tracinghttp js ✓ scenario succeded cjsapiendpoints js x some scenarios failed server integration tests failed cleaning up test run complete error command failed with exit code info visit for documentation about this command error process completed with exit code solution brainstorm this might be indicative of a deeper issue here that the fix in did not work as such it might require some more investigation into alternative wrapping strategies for res end
| 1
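The record above mentions investigating "alternative wrapping strategies for `res.end`". One generic strategy is to wrap the method so that instrumentation fires exactly once even when `end` is called twice, while every call still reaches the original method. The sketch below uses invented names (`wrapEndOnce`, `onFirstEnd`) and is not Sentry's actual fix:

```javascript
// Sketch of one wrapping strategy for res.end (illustrative only):
// run the instrumentation callback on the first call, and pass every
// call (including repeats) through to the original method.
function wrapEndOnce(res, onFirstEnd) {
  const originalEnd = res.end;
  let called = false;
  res.end = function (...args) {
    if (!called) {
      called = true;
      onFirstEnd(); // e.g. finish a transaction exactly once
    }
    return originalEnd.apply(this, args);
  };
  return res;
}
```

The guard flag lives in the wrapper's closure, so a double `end()` (as in the failing scenario) cannot re-trigger the instrumentation.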
|
116,489
| 9,853,887,712
|
IssuesEvent
|
2019-06-19 15:38:08
|
dojot/dojot
|
https://api.github.com/repos/dojot/dojot
|
closed
|
[GUI] Maintain the same pattern and style on pages
|
Priority:Low Status:ToTest Team:Frontend Type:Enhancement
|
- DISCARD and SAVE buttons:



- invert the position of buttons to keep the same pattern as other pages

|
1.0
|
[GUI] Maintain the same pattern and style on pages - - DISCARD and SAVE buttons:



- invert the position of buttons to keep the same pattern as other pages

|
test
|
maintain the same pattern and style on pages discard and save buttons invert the position of buttons to keep the same pattern as other pages
| 1
|
218,902
| 17,029,435,930
|
IssuesEvent
|
2021-07-04 08:49:35
|
dchocoboo/notarise-devops
|
https://api.github.com/repos/dchocoboo/notarise-devops
|
opened
|
npm audit found vulnerabilities
|
test vulnerability
|
```
=== npm audit security report ===
┌──────────────────────────────────────────────────────────────────────────────┐
│ Manual Review │
│ Some vulnerabilities require your attention to resolve │
│ │
│ Visit https://go.npm.me/audit-guide for additional guidance │
└──────────────────────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ node.extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=1.1.7 <2.0.0 || >= 2.0.1 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ serverless-dynamodb-local [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ serverless-dynamodb-local > dynamodb-localhost > rmdir > │
│ │ node.flow > node.extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/781 │
└───────────────┴──────────────────────────────────────────────────────────────┘
found 1 moderate severity vulnerability in 1246 scanned packages
1 vulnerability requires manual review. See the full report for details.
```
|
1.0
|
npm audit found vulnerabilities - ```
=== npm audit security report ===
┌──────────────────────────────────────────────────────────────────────────────┐
│ Manual Review │
│ Some vulnerabilities require your attention to resolve │
│ │
│ Visit https://go.npm.me/audit-guide for additional guidance │
└──────────────────────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ Moderate │ Prototype Pollution │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ node.extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=1.1.7 <2.0.0 || >= 2.0.1 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ serverless-dynamodb-local [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ serverless-dynamodb-local > dynamodb-localhost > rmdir > │
│ │ node.flow > node.extend │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/781 │
└───────────────┴──────────────────────────────────────────────────────────────┘
found 1 moderate severity vulnerability in 1246 scanned packages
1 vulnerability requires manual review. See the full report for details.
```
|
test
|
npm audit found vulnerabilities npm audit security report ┌──────────────────────────────────────────────────────────────────────────────┐ │ manual review │ │ some vulnerabilities require your attention to resolve │ │ │ │ visit for additional guidance │ └──────────────────────────────────────────────────────────────────────────────┘ ┌───────────────┬──────────────────────────────────────────────────────────────┐ │ moderate │ prototype pollution │ ├───────────────┼──────────────────────────────────────────────────────────────┤ │ package │ node extend │ ├───────────────┼──────────────────────────────────────────────────────────────┤ │ patched in │ │ ├───────────────┼──────────────────────────────────────────────────────────────┤ │ dependency of │ serverless dynamodb local │ ├───────────────┼──────────────────────────────────────────────────────────────┤ │ path │ serverless dynamodb local dynamodb localhost rmdir │ │ │ node flow node extend │ ├───────────────┼──────────────────────────────────────────────────────────────┤ │ more info │ │ └───────────────┴──────────────────────────────────────────────────────────────┘ found moderate severity vulnerability in scanned packages vulnerability requires manual review see the full report for details
| 1
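The advisory in the record above is a Prototype Pollution finding in `node.extend`. The general bug class can be demonstrated with a deliberately vulnerable deep-merge: the code below is a minimal hypothetical illustration of the pattern, not `node.extend`'s source:

```javascript
// Deliberately vulnerable deep-merge (hypothetical, for illustration).
// Because it recurses into a "__proto__" key without filtering it,
// attacker-controlled input can write onto Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === 'object') {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = {};
      }
      naiveMerge(target[key], value); // no guard for "__proto__" keys
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

A safe merge skips `__proto__`, `constructor`, and `prototype` keys (or merges into `Object.create(null)` targets), which is the behavior the patched versions listed in the table provide.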
|
7,040
| 24,165,060,559
|
IssuesEvent
|
2022-09-22 14:28:06
|
mozilla-mobile/firefox-ios
|
https://api.github.com/repos/mozilla-mobile/firefox-ios
|
closed
|
[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown
|
eng:automation
|
There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different number of tabs, from 1 to 1200.
To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5).
Unfortunately after this commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator.
I see changes in the `TabManagerStore` which will be likely the cause of this.
@lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :(
Also @rpappalax as I remember you creating those .archive files, we may need your help too
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4963)
|
1.0
|
[XCTests] Tabs Perf tests: Pre-loaded tabs are not shown - There is the [TabsPerformanceTest](https://github.com/mozilla-mobile/firefox-ios/blob/main/Tests/XCUITests/TabsPerformanceTests.swift) suite where we are launching the app and the tab tray with different number of tabs, from 1 to 1200.
To run these tests we created some `.archive`[ files containing the tabs](https://github.com/mozilla-mobile/firefox-ios/blob/2887a4eede37591f5448c0030462ffbdc3f4c513/Tests/XCUITests/TabsPerformanceTests.swift#L5).
Unfortunately after this commit 6a0b5048121eb1690d82fa2b13235af817857e67 the tabs are not shown in the simulator.
I see changes in the `TabManagerStore` which will be likely the cause of this.
@lmarceau (as author of that commit ;)) I'm afraid we need your help to fix this :(
Also @rpappalax as I remember you creating those .archive files, we may need your help too
┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FXIOS-4963)
|
non_test
|
tabs perf tests pre loaded tabs are not shown there is the suite where we are launching the app and the tab tray with different number of tabs from to to run these tests we created some archive unfortunately after this commit the tabs are not shown in the simulator i see changes in the tabmanagerstore which will be likely the cause of this lmarceau as author of that commit i m afraid we need your help to fix this also rpappalax as i remember you creating those archive files we may need your help too ┆issue is synchronized with this
| 0
|
95,023
| 8,528,583,703
|
IssuesEvent
|
2018-11-03 01:04:42
|
rust-lang/cargo
|
https://api.github.com/repos/rust-lang/cargo
|
closed
|
Timing errors on some cargo tests on macOS
|
A-testing-cargo-itself
|
Due to the mtime change in #5919, the following tests fail frequently (some are ~50% on my system) on macOS:
* `bench::bench_twice_with_build_cmd`
* `bench::pass_through_command_line`
* `build_script::flags_go_into_tests`
* `build_script::fresh_builds_possible_with_link_libs`
* `build_script::fresh_builds_possible_with_multiple_metadata_overrides`
* `build_script::optional_build_dep_and_required_normal_dep`
* `build_script::rebuild_only_on_explicit_paths`
* `freshness::changing_bin_paths_common_target_features_caches_targets`
* `freshness::changing_lib_features_caches_targets`
* `freshness::changing_profiles_caches_targets`
* `freshness::dont_rebuild_based_on_plugins`
* `freshness::no_rebuild_transitive_target_deps`
* `freshness::no_rebuild_when_rename_dir`
* `freshness::path_dev_dep_registry_updates`
* `freshness::reuse_workspace_lib`
* `freshness::same_build_dir_cached_packages`
* `freshness::unused_optional_dep`
* `install::do_not_rebuilds_on_local_install`
* `install::workspace_uses_workspace_target_dir`
* `local_registry::simple`
* `metabuild::metabuild_fresh`
* `patch::add_ignored_patch`
* `patch::add_patch`
* `patch::new_major`
* `patch::nonexistent`
* `patch::patch_depends_on_another_patch`
* `patch::patch_git`
* `patch::patch_in_virtual`
* `patch::replace`
* `patch::unused`
* `patch::unused_git`
* `path::custom_target_no_rebuild`
* `path::deep_dependencies_trigger_rebuild`
* `path::nested_deps_recompile`
* `path::no_rebuild_dependency`
* `profile_targets::profile_selection_bench`
* `profile_targets::profile_selection_build`
* `profile_targets::profile_selection_build_all_targets`
* `profile_targets::profile_selection_build_all_targets_release`
* `profile_targets::profile_selection_build_release`
* `profile_targets::profile_selection_check_all_targets`
* `profile_targets::profile_selection_check_all_targets_release`
* `profile_targets::profile_selection_check_all_targets_test`
* `profile_targets::profile_selection_test`
* `profile_targets::profile_selection_test_release`
* `registry::bump_version_dont_update_registry`
* `required_features::bench_default_features`
* `run::run_from_executable_folder`
* `run::specify_name`
* `rustc::rustc_fingerprint`
* `rustflags::cfg_rustflags_normal_source`
* `rustflags::cfg_rustflags_precedence`
* `test::bin_does_not_rebuild_tests`
* `test::pass_through_command_line`
* `test::selective_testing`
* `tool_paths::custom_runner`
* `workspaces::dep_used_with_separate_features`
I have a fix for some of them, I'll try to finish the rest tomorrow.
EDIT: Looks like it happens to a lot more than I first anticipated.
|
1.0
|
Timing errors on some cargo tests on macOS - Due to the mtime change in #5919, the following tests fail frequently (some are ~50% on my system) on macOS:
* `bench::bench_twice_with_build_cmd`
* `bench::pass_through_command_line`
* `build_script::flags_go_into_tests`
* `build_script::fresh_builds_possible_with_link_libs`
* `build_script::fresh_builds_possible_with_multiple_metadata_overrides`
* `build_script::optional_build_dep_and_required_normal_dep`
* `build_script::rebuild_only_on_explicit_paths`
* `freshness::changing_bin_paths_common_target_features_caches_targets`
* `freshness::changing_lib_features_caches_targets`
* `freshness::changing_profiles_caches_targets`
* `freshness::dont_rebuild_based_on_plugins`
* `freshness::no_rebuild_transitive_target_deps`
* `freshness::no_rebuild_when_rename_dir`
* `freshness::path_dev_dep_registry_updates`
* `freshness::reuse_workspace_lib`
* `freshness::same_build_dir_cached_packages`
* `freshness::unused_optional_dep`
* `install::do_not_rebuilds_on_local_install`
* `install::workspace_uses_workspace_target_dir`
* `local_registry::simple`
* `metabuild::metabuild_fresh`
* `patch::add_ignored_patch`
* `patch::add_patch`
* `patch::new_major`
* `patch::nonexistent`
* `patch::patch_depends_on_another_patch`
* `patch::patch_git`
* `patch::patch_in_virtual`
* `patch::replace`
* `patch::unused`
* `patch::unused_git`
* `path::custom_target_no_rebuild`
* `path::deep_dependencies_trigger_rebuild`
* `path::nested_deps_recompile`
* `path::no_rebuild_dependency`
* `profile_targets::profile_selection_bench`
* `profile_targets::profile_selection_build`
* `profile_targets::profile_selection_build_all_targets`
* `profile_targets::profile_selection_build_all_targets_release`
* `profile_targets::profile_selection_build_release`
* `profile_targets::profile_selection_check_all_targets`
* `profile_targets::profile_selection_check_all_targets_release`
* `profile_targets::profile_selection_check_all_targets_test`
* `profile_targets::profile_selection_test`
* `profile_targets::profile_selection_test_release`
* `registry::bump_version_dont_update_registry`
* `required_features::bench_default_features`
* `run::run_from_executable_folder`
* `run::specify_name`
* `rustc::rustc_fingerprint`
* `rustflags::cfg_rustflags_normal_source`
* `rustflags::cfg_rustflags_precedence`
* `test::bin_does_not_rebuild_tests`
* `test::pass_through_command_line`
* `test::selective_testing`
* `tool_paths::custom_runner`
* `workspaces::dep_used_with_separate_features`
I have a fix for some of them, I'll try to finish the rest tomorrow.
EDIT: Looks like it happens to a lot more than I first anticipated.
|
test
|
timing errors on some cargo tests on macos due to the mtime change in the following tests fail frequently some are on my system on macos bench bench twice with build cmd bench pass through command line build script flags go into tests build script fresh builds possible with link libs build script fresh builds possible with multiple metadata overrides build script optional build dep and required normal dep build script rebuild only on explicit paths freshness changing bin paths common target features caches targets freshness changing lib features caches targets freshness changing profiles caches targets freshness dont rebuild based on plugins freshness no rebuild transitive target deps freshness no rebuild when rename dir freshness path dev dep registry updates freshness reuse workspace lib freshness same build dir cached packages freshness unused optional dep install do not rebuilds on local install install workspace uses workspace target dir local registry simple metabuild metabuild fresh patch add ignored patch patch add patch patch new major patch nonexistent patch patch depends on another patch patch patch git patch patch in virtual patch replace patch unused patch unused git path custom target no rebuild path deep dependencies trigger rebuild path nested deps recompile path no rebuild dependency profile targets profile selection bench profile targets profile selection build profile targets profile selection build all targets profile targets profile selection build all targets release profile targets profile selection build release profile targets profile selection check all targets profile targets profile selection check all targets release profile targets profile selection check all targets test profile targets profile selection test profile targets profile selection test release registry bump version dont update registry required features bench default features run run from executable folder run specify name rustc rustc fingerprint rustflags cfg rustflags 
normal source rustflags cfg rustflags precedence test bin does not rebuild tests test pass through command line test selective testing tool paths custom runner workspaces dep used with separate features i have a fix for some of them i ll try to finish the rest tomorrow edit looks like it happens to a lot more than i first anticipated
| 1
|
123,277
| 10,261,689,094
|
IssuesEvent
|
2019-08-22 10:33:22
|
viszerale-therapie/vt.at-drupal
|
https://api.github.com/repos/viszerale-therapie/vt.at-drupal
|
closed
|
Menüveränderungen css
|
ready for test
|
*Sent by Ursula Feuerherdt. Created by [fire](https://fire.fundersclub.com/).*
---
Dear Andreas,
we have now also planned a few usability optimizations.
Could you please take care of them as per the attached screenshots.
If anything is unclear, please get in touch.
Thank you very much!
Best regards,
Ursula

|
1.0
|
Menüveränderungen css - *Sent by Ursula Feuerherdt. Created by [fire](https://fire.fundersclub.com/).*
---
Dear Andreas,
we have now also planned a few usability optimizations.
Could you please take care of them as per the attached screenshots.
If anything is unclear, please get in touch.
Thank you very much!
Best regards,
Ursula

|
test
|
menüveränderungen css sent by ursula feuerherdt created by lieber andreas wir haben jetzt auch gleich noch ein paar usability optimierungen geplant könntest du die bitte erledigen laut screenshots anbei wenn was unklar ist melde dich bitte vielen lieben dank liebe grüße ursula
| 1
|
39,317
| 6,734,810,702
|
IssuesEvent
|
2017-10-18 19:23:05
|
numpy/numpy
|
https://api.github.com/repos/numpy/numpy
|
closed
|
SVD // return value V //better documentation to avoid confusion
|
03 - Documentation component: numpy.linalg
|
In the documentation @ page:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html
In most of the bibliography it can be found that svd works like this:


Most users will assume that the return value of np.linalg.svd() will be V, but it is actually V transposed. This behaviour can be noticed from the example "Reconstruction based on reduced SVD:"
but it is not mentioned explicitly in the documentation
|
1.0
|
SVD // return value V //better documentation to avoid confusion - In the documentation @ page:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html
In most of the bibliography it can be found that svd works like this:


Most users will assume that the return value of np.linalg.svd() will be V, but it is actually V transposed. This behaviour can be noticed from the example "Reconstruction based on reduced SVD:"
but it is not mentioned explicitly in the documentation
|
non_test
|
svd return value v better documentation to avoid confusion in the documentation page in most of the bibliografy it can be found that svd works like that most users will assume that the return value of np linalg svd will be v but actually is v transposed this behaviour can be noticed from the example reconstruction based on reduced svd but it is not mentioned explicity in the documentation
| 0
|
319,699
| 27,395,538,375
|
IssuesEvent
|
2023-02-28 19:23:32
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: schemachange/mixed-versions-compat failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-sql-schema
|
roachtest.schemachange/mixed-versions-compat [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8803571?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8803571?buildTab=artifacts#/schemachange/mixed-versions-compat) on master @ [e028ce5b14505dfd17ef8b13001c0ab8ac811e3c](https://github.com/cockroachdb/cockroach/commits/e028ce5b14505dfd17ef8b13001c0ab8ac811e3c):
```
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE sc2.typ_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t, t_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t1_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t2_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t3_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t4_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP VIEW v_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE a CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE defaultdb CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE test CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP TABLE a CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW testuser1 CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW testuser3_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW x, y_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW y, x_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_1
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_10
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_11
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_12
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_13
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_14
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_15
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_16
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_17
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_18
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_2
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_3
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_4
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_5
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_6
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_7
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_8
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_9
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE db_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE defaultdb CASCADE_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE test CASCADE_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP TABLE db.kv_0, W230223 16:55:24.581810 1 sql/schemachanger/scplan/plan.go:122 [-] 1 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.588003 1 sql/schemachanger/scplan/plan.go:122 [-] 2 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.593982 1 sql/schemachanger/scplan/plan.go:122 [-] 3 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.599962 1 sql/schemachanger/scplan/plan.go:122 [-] 4 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.605882 1 sql/schemachanger/scplan/plan.go:122 [-] 5 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.611867 1 sql/schemachanger/scplan/plan.go:122 [-] 6 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.617807 1 sql/schemachanger/scplan/plan.go:122 [-] 7 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.623878 1 sql/schemachanger/scplan/plan.go:122 [-] 8 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.629952 1 sql/schemachanger/scplan/plan.go:122 [-] 9 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.635978 1 sql/schemachanger/scplan/plan.go:122 [-] 10 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
ERROR: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
Failed running "debug declarative-corpus-validate"
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/mixed-versions-compat.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-24767
|
2.0
|
roachtest: schemachange/mixed-versions-compat failed - roachtest.schemachange/mixed-versions-compat [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8803571?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8803571?buildTab=artifacts#/schemachange/mixed-versions-compat) on master @ [e028ce5b14505dfd17ef8b13001c0ab8ac811e3c](https://github.com/cockroachdb/cockroach/commits/e028ce5b14505dfd17ef8b13001c0ab8ac811e3c):
```
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE sc2.typ_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t, t_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t1_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t2_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t3_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t4_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP TYPE t_0
validated TestLogic_drop_type_TestLogic_drop_type_DROP VIEW v_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE a CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE defaultdb CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP DATABASE test CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP TABLE a CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW testuser1 CASCADE_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW testuser3_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW x, y_0
validated TestLogic_drop_view_TestLogic_drop_view_DROP VIEW y, x_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_1
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_10
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_11
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_12
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_13
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_14
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_15
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_16
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_17
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_18
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_2
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_3
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_4
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_5
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_6
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_7
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_8
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_ALTER TABLE db.kv ALTER PRIMARY KEY USING COLUMNS (k)_9
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE db_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE defaultdb CASCADE_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP DATABASE test CASCADE_0
validated TestLogic_gc_job_mixed_TestLogic_gc_job_mixed_DROP TABLE db.kv_0, W230223 16:55:24.581810 1 sql/schemachanger/scplan/plan.go:122 [-] 1 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.588003 1 sql/schemachanger/scplan/plan.go:122 [-] 2 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.593982 1 sql/schemachanger/scplan/plan.go:122 [-] 3 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.599962 1 sql/schemachanger/scplan/plan.go:122 [-] 4 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.605882 1 sql/schemachanger/scplan/plan.go:122 [-] 5 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.611867 1 sql/schemachanger/scplan/plan.go:122 [-] 6 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.617807 1 sql/schemachanger/scplan/plan.go:122 [-] 7 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.623878 1 sql/schemachanger/scplan/plan.go:122 [-] 8 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.629952 1 sql/schemachanger/scplan/plan.go:122 [-] 9 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
W230223 16:55:24.635978 1 sql/schemachanger/scplan/plan.go:122 [-] 10 failed building declarative schema changer plan in PostCommitNonRevertiblePhase (rollback=false) for ALTER TABLE with error: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
ERROR: invalid execution plan: final status is VALIDATED instead of ABSENT at index 15 for adding ForeignKeyConstraint:{DescID: 110, ConstraintID: 2, ReferencedDescID: 107}
Failed running "debug declarative-corpus-validate"
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/mixed-versions-compat.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-24767
|
test
|
roachtest schemachange mixed versions compat failed roachtest schemachange mixed versions compat with on master validated testlogic drop type testlogic drop type drop type typ validated testlogic drop type testlogic drop type drop type t t validated testlogic drop type testlogic drop type drop type validated testlogic drop type testlogic drop type drop type validated testlogic drop type testlogic drop type drop type validated testlogic drop type testlogic drop type drop type validated testlogic drop type testlogic drop type drop type t validated testlogic drop type testlogic drop type drop view v validated testlogic drop view testlogic drop view drop database a cascade validated testlogic drop view testlogic drop view drop database defaultdb cascade validated testlogic drop view testlogic drop view drop database test cascade validated testlogic drop view testlogic drop view drop table a cascade validated testlogic drop view testlogic drop view drop view cascade validated testlogic drop view testlogic drop view drop view validated testlogic drop view testlogic drop view drop view x y validated testlogic drop view testlogic drop view drop view y x validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter 
primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed alter table db kv alter primary key using columns k validated testlogic gc job mixed testlogic gc job mixed drop database db validated testlogic gc job mixed testlogic gc job mixed drop database defaultdb cascade validated testlogic gc job mixed testlogic gc job mixed drop database test cascade validated testlogic gc job mixed testlogic gc job mixed drop table db kv sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid 
execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative 
schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid sql schemachanger scplan plan go failed building declarative schema changer plan in postcommitnonrevertiblephase rollback false for alter table with error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid error invalid execution plan final status is validated instead of absent at index for adding foreignkeyconstraint descid constraintid referenceddescid failed running debug declarative corpus validate parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb sql schema jira issue crdb
| 1
|
82,195
| 7,833,812,764
|
IssuesEvent
|
2018-06-16 03:49:08
|
ibm-functions/shell
|
https://api.github.com/repos/ibm-functions/shell
|
closed
|
move composer tests and test inputs to composer plugin repo
|
tests
|
as part of the continuing work to componentize the plugins, we need to shift the composer tests and test inputs out of the main repo and into the shell-composer-plugin repo.
this seems like it will require the test rig to support tests located in the plugin directories, rather than in the global tests directory?
see https://github.com/ibm-functions/shell-composer-plugin/issues/9
|
1.0
|
move composer tests and test inputs to composer plugin repo - as part of the continuing work to componentize the plugins, we need to shift the composer tests and test inputs out of the main repo and into the shell-composer-plugin repo.
this seems like it will require the test rig to support tests located in the plugin directories, rather than in the global tests directory?
see https://github.com/ibm-functions/shell-composer-plugin/issues/9
|
test
|
move composer tests and test inputs to composer plugin repo as part of the continuing work to componentize the plugins we need to shift the composer tests and test inputs out of the main repo and into the shell composer plugin repo this seems like it will require the test rig to support tests located in the plugin directories rather than in the global tests directory see
| 1
|
35,519
| 4,995,255,423
|
IssuesEvent
|
2016-12-09 09:30:11
|
zetkin/call.zetk.in
|
https://api.github.com/repos/zetkin/call.zetk.in
|
opened
|
LaneSwitch tooltip misinterpreted
|
user test
|
Test subjects generally notice the `LaneSwitch` tooltip at some point before needing the feature, but not always. Some subjects close it after reading it, while others don't seem to notice it at all, leaving it open throughout the session.
To those who do notice it, the text does not seem to properly explain the feature, and the phrasing (Swedish) makes it a bit difficult to read. Still, after some reasoning and trial and error, people who read the tooltip understand it.
_This was observed (noted as signal 17) during the december 2016 user tests._
|
1.0
|
LaneSwitch tooltip misinterpreted - Test subjects generally notice the `LaneSwitch` tooltip at some point before needing the feature, but not always. Some subjects close it after reading it, while others don't seem to notice it at all, leaving it open throughout the session.
To those who do notice it, the text does not seem to properly explain the feature, and the phrasing (Swedish) makes it a bit difficult to read. Still, after some reasoning and trial and error, people who read the tooltip understand it.
_This was observed (noted as signal 17) during the december 2016 user tests._
|
test
|
laneswitch tooltip misinterpreted test subjects generally notice the laneswitch tooltip at some point before needing the feature but not always some subjects close it after reading it while others don t seem to notice it at all leaving it open throughout the session to those who do notice it the text does not seem to properly explain the feature and the phrasing swedish makes it a bit difficult to read still after some reasoning and trial and error people who read the tooltip understand it this was observed noted as signal during the december user tests
| 1
|
197,471
| 14,926,649,295
|
IssuesEvent
|
2021-01-24 12:25:01
|
bauersebastian/termplanner
|
https://api.github.com/repos/bauersebastian/termplanner
|
closed
|
Automated tests for newly implemented functions
|
testing
|
**_As a tester, I want the code to be tested automatically during the build, so that I can detect errors caused by changes as quickly as possible._**
___________________
**_Acceptance criteria:_**
- Implement automated tests for newly implemented functions (see links)
|
1.0
|
Automated tests for newly implemented functions - **_As a tester, I want the code to be tested automatically during the build, so that I can detect errors caused by changes as quickly as possible._**
___________________
**_Acceptance criteria:_**
- Implement automated tests for newly implemented functions (see links)
|
test
|
automated tests for newly implemented functions as a tester i want the code to be tested automatically during the build so that i can detect errors caused by changes as quickly as possible acceptance criteria implement automated tests for newly implemented functions see links
| 1
|
140,631
| 11,353,858,081
|
IssuesEvent
|
2020-01-24 16:22:16
|
qri-io/desktop
|
https://api.github.com/repos/qri-io/desktop
|
closed
|
Need qri docker image!
|
blocked test
|
In order to have the e2e tests run on circleci, we need a docker image that has both `qri` and `node` (at least 10.13.0)
|
1.0
|
Need qri docker image! - In order to have the e2e tests run on circleci, we need a docker image that has both `qri` and `node` (at least 10.13.0)
|
test
|
need qri docker image in order to have the tests run on circleci we need a docker image that has both qri and node at least
| 1
|
344,326
| 30,734,744,204
|
IssuesEvent
|
2023-07-28 06:37:35
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed
|
C-test-failure O-robot O-roachtest branch-master T-sql-queries
|
roachtest.sqlsmith/setup=tpch-sf1/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11094252?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11094252?buildTab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations) on master @ [51aa6257c063f60fb76044f9cccc47e259930d00](https://github.com/cockroachdb/cockroach/commits/51aa6257c063f60fb76044f9cccc47e259930d00):
```
(sqlsmith.go:296).func3: ping node 1: dial tcp 34.74.197.211:26257: connect: connection refused
HINT: node likely crashed, check logs in artifacts > logs/1.unredacted
previous sql:
WITH
"with139" ("
co l1144")
AS (
SELECT
*
FROM
(VALUES ((-1.2741097211837769):::FLOAT8), ((-0.6566488146781921):::FLOAT8)) AS tab448 ("
co l1144")
),
"wi'tH140" ("co\\xc4l1145")
AS (
SELECT
*
FROM
(
VALUES
(e'\x00':::STRING),
(e'6,O]b\fw':::STRING),
(NULL),
('':::STRING),
(
(NULL::BOX2D || st_asencodedpolyline(geomfromewkb(bytes('01070000A0E61000000300000001060000A0E6100000040000000103000080010000000B0000006F9AF3F7D81863C057397A0B3ACD42C006ED79D13D2DF24130E1CBF62A4A45C01B6DC0D279F452C022F8C672D2C1F9C140DB9D23050141C0B53FBA6E396656C0C0A1842E2B53EC413A8DC94C568D61401BEF0352091651C05C681DE175D2FD41A048C859DF7C5440C00E327AC97FF63F68BBA8144D0EF8416E6042F340DD6040805791CE56602740DC6C3922676401C29C6D4009B1334E40625E855D7A7E5140D4241F7D86D2D5C14B1E4BCA077A58C07806BF9D4D084D4074E74BE4DCAA00C2C11D020045A051C0D81F8E3DD0AE3740B8315FF7F63AF4C1BAD97494BA9860C07411EFAF0A6C3D4038E26DE7A194F4C16F9AF3F7D81863C057397A0B3ACD42C006ED79D13D2DF2410103000080010000000B000000610D90AE22A55FC0B8265672F7562940CE3E91EB7A320042CC96C05617B649C085229B1A09E153C0D84FA7998764EB4190FEB5A7C71B2DC03EBBCB6B588F4DC02891E1ED4A7DE6417C0F7BBC14C75840E08D4D210E2F21C04180E453EDEDFFC112E57185B2435240F0D20FC2D7C11640F8BF2E3AB98DD2C1D0A9A94D6EA14140FCABD1E5BF074B400010C2F75AA16F4166BC301BE6FB55C002F9F182DE3D53400038B551BC0495C199463681FA6E5CC0184D3E0AE179554030323F0F2525F4415791FBE04FEA61C0361E8733C77A5240C84AEBEAD466EEC1D5012A75E04F63C0A8BF403F623D4240182CFEFA9071E0C1610D90AE22A55FC0B8265672F7562940CE3E91EB7A3200420103000080010000000C000000004CADA6B302E0BF3EE5FA281ACE3BC0E051ABD6580901429B65A22CCBDC53C003DEBC27260E4AC07E45691D3E7AF541408E6FAB9A1008C016E33E2631FD40C0EEF80F0FF3E8F341A8C5EAA8B0C836409E85B8193CE952C0F4448FFF9B31F24190F9A9FCD2AC604004F99C51ED4356C0903A1F652902D4C1C01B737BD389554080315427550738C05C51AE075475F64124F6F92382235C40148AB5FED33D38C048960D3AC6D7E8412C2E310EABA5514058EA10C427562EC0A82D4D1A69D7FE41621FC77070C26540FE84B78FE5BE51400962385E53FAFEC1483899F770B14DC020BD4BBBAB604D401C08306EA143E84140337962764325C0002F089A1DF3F83FAC9B66F32C2BE841004CADA6B302E0BF3EE5FA281ACE3BC0E051ABD65809014201030000800100000006000000D03FF366A2C95C40E59BFCECDD7553C0D028B038BB63D041A40AEF5CB0195840006FB928107F084020CA0A39A4AED34100F09D4126336140E8AC65DAC4954C4024406D4040D1EE41F04B3F3DA6C
75740106594C0ADCE4440A498B6372039FD4155D08A1E777452C084CC004170E949409C87DF53EB9AFDC1D03FF366A2C95C40E59BFCECDD7553C0D028B038BB63D04101020000A0E61000000200000074F55D784E5F50408F7DE14351CD41C0FCCCE7B1B166F541583C392FCD7554409022EAF3EFCC39C03CDE1C0E29A7EB4101040000A0E6100000060000000101000080E8801DFC8A2130C04401B4AE182F50C0BC13340EB7E2E1C1010100008010DC3F808D824D406C9927CDAE5144409A464D93BBD7F94101010000806F0682A50FE253C030E1964D47F74BC00E752ECB60B0F441010100008014F5242FC2C444C01257DB8E51FA51404449E2FB7D2AFFC1010100008069945BA1FD2E50C0501CA74B7A661C40CB83CB9A4894F1C10101000080C0BBBC345A4B51C0E08F6EBB2F250E400681A78F4FFCF741':::GEOGRAPHY::GEOGRAPHY)::BYTES::BYTES)::GEOMETRY::GEOMETRY)::STRING::STRING)::STRING
)
)
AS "t'ab449" ("co\\xc4l1145")
)
SELECT
'\xe88545dd22bb34ad6d':::BYTES AS col1146;ping node 1: dial tcp 34.74.197.211:26257: connect: connection refused
HINT: node likely crashed, check logs in artifacts > logs/1.unredacted
(test_runner.go:1122).func1: 1 dead node(s) detected
test artifacts and logs in: /artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpch-sf1/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: sqlsmith/setup=tpch-sf1/setting=no-mutations failed - roachtest.sqlsmith/setup=tpch-sf1/setting=no-mutations [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11094252?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11094252?buildTab=artifacts#/sqlsmith/setup=tpch-sf1/setting=no-mutations) on master @ [51aa6257c063f60fb76044f9cccc47e259930d00](https://github.com/cockroachdb/cockroach/commits/51aa6257c063f60fb76044f9cccc47e259930d00):
```
(sqlsmith.go:296).func3: ping node 1: dial tcp 34.74.197.211:26257: connect: connection refused
HINT: node likely crashed, check logs in artifacts > logs/1.unredacted
previous sql:
WITH
"with139" ("
co l1144")
AS (
SELECT
*
FROM
(VALUES ((-1.2741097211837769):::FLOAT8), ((-0.6566488146781921):::FLOAT8)) AS tab448 ("
co l1144")
),
"wi'tH140" ("co\\xc4l1145")
AS (
SELECT
*
FROM
(
VALUES
(e'\x00':::STRING),
(e'6,O]b\fw':::STRING),
(NULL),
('':::STRING),
(
(NULL::BOX2D || st_asencodedpolyline(geomfromewkb(bytes('01070000A0E61000000300000001060000A0E6100000040000000103000080010000000B0000006F9AF3F7D81863C057397A0B3ACD42C006ED79D13D2DF24130E1CBF62A4A45C01B6DC0D279F452C022F8C672D2C1F9C140DB9D23050141C0B53FBA6E396656C0C0A1842E2B53EC413A8DC94C568D61401BEF0352091651C05C681DE175D2FD41A048C859DF7C5440C00E327AC97FF63F68BBA8144D0EF8416E6042F340DD6040805791CE56602740DC6C3922676401C29C6D4009B1334E40625E855D7A7E5140D4241F7D86D2D5C14B1E4BCA077A58C07806BF9D4D084D4074E74BE4DCAA00C2C11D020045A051C0D81F8E3DD0AE3740B8315FF7F63AF4C1BAD97494BA9860C07411EFAF0A6C3D4038E26DE7A194F4C16F9AF3F7D81863C057397A0B3ACD42C006ED79D13D2DF2410103000080010000000B000000610D90AE22A55FC0B8265672F7562940CE3E91EB7A320042CC96C05617B649C085229B1A09E153C0D84FA7998764EB4190FEB5A7C71B2DC03EBBCB6B588F4DC02891E1ED4A7DE6417C0F7BBC14C75840E08D4D210E2F21C04180E453EDEDFFC112E57185B2435240F0D20FC2D7C11640F8BF2E3AB98DD2C1D0A9A94D6EA14140FCABD1E5BF074B400010C2F75AA16F4166BC301BE6FB55C002F9F182DE3D53400038B551BC0495C199463681FA6E5CC0184D3E0AE179554030323F0F2525F4415791FBE04FEA61C0361E8733C77A5240C84AEBEAD466EEC1D5012A75E04F63C0A8BF403F623D4240182CFEFA9071E0C1610D90AE22A55FC0B8265672F7562940CE3E91EB7A3200420103000080010000000C000000004CADA6B302E0BF3EE5FA281ACE3BC0E051ABD6580901429B65A22CCBDC53C003DEBC27260E4AC07E45691D3E7AF541408E6FAB9A1008C016E33E2631FD40C0EEF80F0FF3E8F341A8C5EAA8B0C836409E85B8193CE952C0F4448FFF9B31F24190F9A9FCD2AC604004F99C51ED4356C0903A1F652902D4C1C01B737BD389554080315427550738C05C51AE075475F64124F6F92382235C40148AB5FED33D38C048960D3AC6D7E8412C2E310EABA5514058EA10C427562EC0A82D4D1A69D7FE41621FC77070C26540FE84B78FE5BE51400962385E53FAFEC1483899F770B14DC020BD4BBBAB604D401C08306EA143E84140337962764325C0002F089A1DF3F83FAC9B66F32C2BE841004CADA6B302E0BF3EE5FA281ACE3BC0E051ABD65809014201030000800100000006000000D03FF366A2C95C40E59BFCECDD7553C0D028B038BB63D041A40AEF5CB0195840006FB928107F084020CA0A39A4AED34100F09D4126336140E8AC65DAC4954C4024406D4040D1EE41F04B3F3DA6C
75740106594C0ADCE4440A498B6372039FD4155D08A1E777452C084CC004170E949409C87DF53EB9AFDC1D03FF366A2C95C40E59BFCECDD7553C0D028B038BB63D04101020000A0E61000000200000074F55D784E5F50408F7DE14351CD41C0FCCCE7B1B166F541583C392FCD7554409022EAF3EFCC39C03CDE1C0E29A7EB4101040000A0E6100000060000000101000080E8801DFC8A2130C04401B4AE182F50C0BC13340EB7E2E1C1010100008010DC3F808D824D406C9927CDAE5144409A464D93BBD7F94101010000806F0682A50FE253C030E1964D47F74BC00E752ECB60B0F441010100008014F5242FC2C444C01257DB8E51FA51404449E2FB7D2AFFC1010100008069945BA1FD2E50C0501CA74B7A661C40CB83CB9A4894F1C10101000080C0BBBC345A4B51C0E08F6EBB2F250E400681A78F4FFCF741':::GEOGRAPHY::GEOGRAPHY)::BYTES::BYTES)::GEOMETRY::GEOMETRY)::STRING::STRING)::STRING
)
)
AS "t'ab449" ("co\\xc4l1145")
)
SELECT
'\xe88545dd22bb34ad6d':::BYTES AS col1146;ping node 1: dial tcp 34.74.197.211:26257: connect: connection refused
HINT: node likely crashed, check logs in artifacts > logs/1.unredacted
(test_runner.go:1122).func1: 1 dead node(s) detected
test artifacts and logs in: /artifacts/sqlsmith/setup=tpch-sf1/setting=no-mutations/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=tpch-sf1/setting=no-mutations.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest sqlsmith setup tpch setting no mutations failed roachtest sqlsmith setup tpch setting no mutations with on master sqlsmith go ping node dial tcp connect connection refused hint node likely crashed check logs in artifacts logs unredacted previous sql with co as select from values as co wi co as select from values e string e o b fw string null string null st asencodedpolyline geomfromewkb bytes geography geography bytes bytes geometry geometry string string string as t co select bytes as ping node dial tcp connect connection refused hint node likely crashed check logs in artifacts logs unredacted test runner go dead node s detected test artifacts and logs in artifacts sqlsmith setup tpch setting no mutations run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql queries
| 1
|
226,851
| 18,044,977,178
|
IssuesEvent
|
2021-09-18 18:38:34
|
skyjake/lagrange
|
https://api.github.com/repos/skyjake/lagrange
|
closed
|
Lagrange v1.4.0 on Ubuntu 20.04 - occasionally segfaults when opening a site/capsule
|
bug needs testing
|
Mostly it's fine, but it (v1.4) has died twice previously. This has only happened since upgrading to v1.4 - running the 64-bit AppImage.
From syslog:
kernel: [243993.057154] AppRun[67996]:
segfault at d400000002
ip 000000d400000002
sp 00007fff35cf06f8 error 14 in lagrange[55bdfb70f000+106000]
Nothing unexpected about the intended site.
|
1.0
|
Lagrange v1.4.0 on Ubuntu 20.04 - occasionally segfaults when opening a site/capsule - Mostly it's fine, but it (v1.4) has died twice previously. This has only happened since upgrading to v1.4 - running the 64-bit AppImage.
From syslog:
kernel: [243993.057154] AppRun[67996]:
segfault at d400000002
ip 000000d400000002
sp 00007fff35cf06f8 error 14 in lagrange[55bdfb70f000+106000]
Nothing unexpected about the intended site.
|
test
|
lagrange on ubuntu occasionally segfaults when opening a site capsule mostly it s fine but it has died twice previously this has only happened since upgrading to running the bit appimage from syslog kernel apprun segfault at ip sp error in lagrange nothing unexpected about the intended site
| 1
|
449,364
| 12,968,054,075
|
IssuesEvent
|
2020-07-21 04:47:10
|
dgraph-io/ristretto
|
https://api.github.com/repos/dgraph-io/ristretto
|
closed
|
cmSketch not benefitting from four rows
|
optimization priority/P1 status/accepted
|
I believe the commit f823dc4a broke the purpose of the cm sketch to have four rows. The bitwise-xor and mask are not creating the mix of indexes expected by the design. The same Increment/Estimate results, the same values, are achieved from a single row with or without the bitwise-xor. The earlier implementation seems to have been a good and very fast approximation to a distinct hash for each row except when the high 32 bits were all zero. One solution to fixing the earlier version could be to bitwise-or a single bit into the top 32-bit half. I can provide my testing and benchmarking on this offline if interested.
For small values of row length as used in the current unit tests, this doesn't matter. As the row length gets larger, the gap between the earlier algorithm and the current one widens and I believe becomes significant.
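For context, the four-row design works like this: each row derives its own index by mixing the key's hash with a per-row seed, so a collision in one row is unlikely to repeat in the others, and the estimate takes the minimum across rows. A minimal illustrative sketch of that general technique (an assumption-laden example, NOT ristretto's actual code; `mix` and the seed values are invented for demonstration):
```js
// Cheap 32-bit avalanche: xor the hash with a per-row seed, then scramble,
// so each row sees an effectively independent index for the same key.
function mix(h, seed) {
  let x = (h ^ seed) >>> 0;
  x ^= x >>> 16;
  x = Math.imul(x, 0x45d9f3b) >>> 0;
  x ^= x >>> 16;
  return x >>> 0;
}

class CmSketch {
  constructor(width, seeds) {
    this.width = width;
    this.seeds = seeds; // one distinct seed per row
    this.rows = seeds.map(() => new Uint8Array(width));
  }
  increment(h) {
    // Bump one counter per row; saturate rather than overflow.
    for (let i = 0; i < this.rows.length; i++) {
      const idx = mix(h, this.seeds[i]) % this.width;
      if (this.rows[i][idx] < 255) this.rows[i][idx]++;
    }
  }
  estimate(h) {
    // The minimum over rows upper-bounds the true count; more independent
    // rows make a collision-inflated estimate less likely.
    let min = Infinity;
    for (let i = 0; i < this.rows.length; i++) {
      const idx = mix(h, this.seeds[i]) % this.width;
      min = Math.min(min, this.rows[i][idx]);
    }
    return min;
  }
}

const s = new CmSketch(64, [11, 22, 33, 44]);
s.increment(12345);
s.increment(12345);
s.increment(12345);
console.log(s.estimate(12345)); // exact here, since only one key was added
```
If all four rows computed the same index (the failure mode described above), the four rows would add memory without adding any collision resistance.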
|
1.0
|
cmSketch not benefitting from four rows - I believe the commit f823dc4a broke the purpose of the cm sketch to have four rows. The bitwise-xor and mask are not creating the mix of indexes expected by the design. The same Increment/Estimate results, the same values, are achieved from a single row with or without the bitwise-xor. The earlier implementation seems to have been a good and very fast approximation to a distinct hash for each row except when the high 32 bits were all zero. One solution to fixing the earlier version could be to bitwise-or a single bit into the top 32-bit half. I can provide my testing and benchmarking on this offline if interested.
For small values of row length as used in the current unit tests, this doesn't matter. As the row length gets larger, the gap between the earlier algorithm and the current one widens and I believe becomes significant.
|
non_test
|
cmsketch not benefitting from four rows i believe the commit broke the purpose of the cm sketch to have four rows the bitwise xor and mask are not creating the mix of indexes expected by the design the same increment estimate results the same values are achieved from a single row with or without the bitwise xor the earlier implementation seems to have been a good and very fast approximation to a distinct hash for each row except when the high bits were all zero one solution to fixing the earlier version could be to bitwise or a single bit into the top half i can provide my testing and benchmarking on this offline if interested for small values of row length as used in the current unit tests this doesn t matter as the row length gets larger the gap between the earlier algorithm and the current one widens and i believe becomes significant
| 0
|
87,572
| 8,100,994,608
|
IssuesEvent
|
2018-08-12 07:46:56
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
"NoSuchMethodError: The getter 'pathSegments' was called on null." in service_extensions_test.dart
|
a: tests team team: flakes
|
Saw this flake on the linux infrabot:
```
03:53 +344: /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart: Service extensions - posttest
unhandled error during test:
/b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart
NoSuchMethodError: The getter 'pathSegments' was called on null.
Receiver: null
Tried calling: pathSegments
#0 Object.noSuchMethod (dart:core-patch/dart:core/object_patch.dart:46)
#1 collect (package:coverage/src/collect.dart:16)
<asynchronous suspension>
#2 CoverageCollector.collectCoverage (package:flutter_tools/src/test/coverage_collector.dart:59)
<asynchronous suspension>
#3 CoverageCollector.onFinishedTest (package:flutter_tools/src/test/coverage_collector.dart:27)
<asynchronous suspension>
#4 _FlutterPlatform._startTest (package:flutter_tools/src/test/flutter_platform.dart:611)
<asynchronous suspension>
#5 _FlutterPlatform.loadChannel (package:flutter_tools/src/test/flutter_platform.dart:367)
#6 PlatformPlugin.load (package:test/src/runner/plugin/platform.dart:65)
<asynchronous suspension>
#7 Loader.loadFile.<anonymous closure> (package:test/src/runner/loader.dart:248)
<asynchronous suspension>
#8 new LoadSuite.<anonymous closure>.<anonymous closure> (package:test/src/runner/load_suite.dart:89)
<asynchronous suspension>
#9 invoke (package:test/src/utils.dart:242)
#10 new LoadSuite.<anonymous closure> (package:test/src/runner/load_suite.dart:88)
#11 Invoker._onRun.<anonymous closure>.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:test/src/backend/invoker.dart:403)
<asynchronous suspension>
#12 new Future.<anonymous closure> (dart:async/future.dart:174)
#13 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209)
#14 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119)
#15 _rootRun (dart:async/zone.dart:1122)
#16 _CustomZone.run (dart:async/zone.dart:1023)
#17 _CustomZone.runGuarded (dart:async/zone.dart:925)
#18 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:965)
#19 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209)
#20 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119)
#21 _rootRun (dart:async/zone.dart:1126)
#22 _CustomZone.run (dart:async/zone.dart:1023)
#23 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:949)
#24 Timer._createTimer.<anonymous closure> (dart:async-patch/dart:async/timer_patch.dart:21)
#25 _Timer._runTimers (dart:isolate-patch/dart:isolate/timer_impl.dart:382)
#26 _Timer._handleMessage (dart:isolate-patch/dart:isolate/timer_impl.dart:416)
#27 _RawReceivePortImpl._handleMessage (dart:isolate-patch/dart:isolate/isolate_patch.dart:165)
03:53 +345: /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart: Service extensions - posttest
03:53 +345 -1: loading /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart [E]
NoSuchMethodError: The getter 'pathSegments' was called on null.
Receiver: null
Tried calling: pathSegments
package:flutter_tools/src/test/flutter_platform.dart 652 _FlutterPlatform._startTest
===== asynchronous gap ===========================
dart:async/future_impl.dart 22 _Completer.completeError
dart:async-patch/dart:async/async_patch.dart 40 _AsyncAwaitCompleter.completeError
package:flutter_tools/src/test/flutter_platform.dart _FlutterPlatform._startTest
===== asynchronous gap ===========================
dart:async/zone.dart 1055 _CustomZone.registerUnaryCallback
dart:async-patch/dart:async/async_patch.dart 77 _asyncThenWrapperHelper
package:flutter_tools/src/test/flutter_platform.dart 374 _FlutterPlatform._startTest
package:flutter_tools/src/test/flutter_platform.dart 367 _FlutterPlatform.loadChannel
package:test/src/runner/plugin/platform.dart 65 PlatformPlugin.load
===== asynchronous gap ===========================
dart:async/zone.dart 1055 _CustomZone.registerUnaryCallback
dart:async-patch/dart:async/async_patch.dart 77 _asyncThenWrapperHelper
package:test/src/runner/loader.dart 242 Loader.loadFile.<fn>
package:test/src/runner/load_suite.dart 89 new LoadSuite.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1047 _CustomZone.registerCallback
dart:async/zone.dart 964 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52 new Timer
dart:async/timer.dart 87 Timer.run
dart:async/future.dart 172 new Future
package:test/src/backend/invoker.dart 402 Invoker._onRun.<fn>.<fn>.<fn>
```
|
1.0
|
"NoSuchMethodError: The getter 'pathSegments' was called on null." in service_extensions_test.dart - Saw this flake on the linux infrabot:
```
03:53 +344: /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart: Service extensions - posttest
unhandled error during test:
/b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart
NoSuchMethodError: The getter 'pathSegments' was called on null.
Receiver: null
Tried calling: pathSegments
#0 Object.noSuchMethod (dart:core-patch/dart:core/object_patch.dart:46)
#1 collect (package:coverage/src/collect.dart:16)
<asynchronous suspension>
#2 CoverageCollector.collectCoverage (package:flutter_tools/src/test/coverage_collector.dart:59)
<asynchronous suspension>
#3 CoverageCollector.onFinishedTest (package:flutter_tools/src/test/coverage_collector.dart:27)
<asynchronous suspension>
#4 _FlutterPlatform._startTest (package:flutter_tools/src/test/flutter_platform.dart:611)
<asynchronous suspension>
#5 _FlutterPlatform.loadChannel (package:flutter_tools/src/test/flutter_platform.dart:367)
#6 PlatformPlugin.load (package:test/src/runner/plugin/platform.dart:65)
<asynchronous suspension>
#7 Loader.loadFile.<anonymous closure> (package:test/src/runner/loader.dart:248)
<asynchronous suspension>
#8 new LoadSuite.<anonymous closure>.<anonymous closure> (package:test/src/runner/load_suite.dart:89)
<asynchronous suspension>
#9 invoke (package:test/src/utils.dart:242)
#10 new LoadSuite.<anonymous closure> (package:test/src/runner/load_suite.dart:88)
#11 Invoker._onRun.<anonymous closure>.<anonymous closure>.<anonymous closure>.<anonymous closure> (package:test/src/backend/invoker.dart:403)
<asynchronous suspension>
#12 new Future.<anonymous closure> (dart:async/future.dart:174)
#13 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209)
#14 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119)
#15 _rootRun (dart:async/zone.dart:1122)
#16 _CustomZone.run (dart:async/zone.dart:1023)
#17 _CustomZone.runGuarded (dart:async/zone.dart:925)
#18 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:965)
#19 StackZoneSpecification._run (package:stack_trace/src/stack_zone_specification.dart:209)
#20 StackZoneSpecification._registerCallback.<anonymous closure> (package:stack_trace/src/stack_zone_specification.dart:119)
#21 _rootRun (dart:async/zone.dart:1126)
#22 _CustomZone.run (dart:async/zone.dart:1023)
#23 _CustomZone.bindCallback.<anonymous closure> (dart:async/zone.dart:949)
#24 Timer._createTimer.<anonymous closure> (dart:async-patch/dart:async/timer_patch.dart:21)
#25 _Timer._runTimers (dart:isolate-patch/dart:isolate/timer_impl.dart:382)
#26 _Timer._handleMessage (dart:isolate-patch/dart:isolate/timer_impl.dart:416)
#27 _RawReceivePortImpl._handleMessage (dart:isolate-patch/dart:isolate/isolate_patch.dart:165)
03:53 +345: /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart: Service extensions - posttest
03:53 +345 -1: loading /b/build/slave/Linux/build/flutter/packages/flutter/test/foundation/service_extensions_test.dart [E]
NoSuchMethodError: The getter 'pathSegments' was called on null.
Receiver: null
Tried calling: pathSegments
package:flutter_tools/src/test/flutter_platform.dart 652 _FlutterPlatform._startTest
===== asynchronous gap ===========================
dart:async/future_impl.dart 22 _Completer.completeError
dart:async-patch/dart:async/async_patch.dart 40 _AsyncAwaitCompleter.completeError
package:flutter_tools/src/test/flutter_platform.dart _FlutterPlatform._startTest
===== asynchronous gap ===========================
dart:async/zone.dart 1055 _CustomZone.registerUnaryCallback
dart:async-patch/dart:async/async_patch.dart 77 _asyncThenWrapperHelper
package:flutter_tools/src/test/flutter_platform.dart 374 _FlutterPlatform._startTest
package:flutter_tools/src/test/flutter_platform.dart 367 _FlutterPlatform.loadChannel
package:test/src/runner/plugin/platform.dart 65 PlatformPlugin.load
===== asynchronous gap ===========================
dart:async/zone.dart 1055 _CustomZone.registerUnaryCallback
dart:async-patch/dart:async/async_patch.dart 77 _asyncThenWrapperHelper
package:test/src/runner/loader.dart 242 Loader.loadFile.<fn>
package:test/src/runner/load_suite.dart 89 new LoadSuite.<fn>.<fn>
===== asynchronous gap ===========================
dart:async/zone.dart 1047 _CustomZone.registerCallback
dart:async/zone.dart 964 _CustomZone.bindCallbackGuarded
dart:async/timer.dart 52 new Timer
dart:async/timer.dart 87 Timer.run
dart:async/future.dart 172 new Future
package:test/src/backend/invoker.dart 402 Invoker._onRun.<fn>.<fn>.<fn>
```
|
test
|
nosuchmethoderror the getter pathsegments was called on null in service extensions test dart saw this flake on the linux infrabot b build slave linux build flutter packages flutter test foundation service extensions test dart service extensions posttest unhandled error during test b build slave linux build flutter packages flutter test foundation service extensions test dart nosuchmethoderror the getter pathsegments was called on null receiver null tried calling pathsegments object nosuchmethod dart core patch dart core object patch dart collect package coverage src collect dart coveragecollector collectcoverage package flutter tools src test coverage collector dart coveragecollector onfinishedtest package flutter tools src test coverage collector dart flutterplatform starttest package flutter tools src test flutter platform dart flutterplatform loadchannel package flutter tools src test flutter platform dart platformplugin load package test src runner plugin platform dart loader loadfile package test src runner loader dart new loadsuite package test src runner load suite dart invoke package test src utils dart new loadsuite package test src runner load suite dart invoker onrun package test src backend invoker dart new future dart async future dart stackzonespecification run package stack trace src stack zone specification dart stackzonespecification registercallback package stack trace src stack zone specification dart rootrun dart async zone dart customzone run dart async zone dart customzone runguarded dart async zone dart customzone bindcallbackguarded dart async zone dart stackzonespecification run package stack trace src stack zone specification dart stackzonespecification registercallback package stack trace src stack zone specification dart rootrun dart async zone dart customzone run dart async zone dart customzone bindcallback dart async zone dart timer createtimer dart async patch dart async timer patch dart timer runtimers dart isolate patch dart isolate 
timer impl dart timer handlemessage dart isolate patch dart isolate timer impl dart rawreceiveportimpl handlemessage dart isolate patch dart isolate isolate patch dart b build slave linux build flutter packages flutter test foundation service extensions test dart service extensions posttest loading b build slave linux build flutter packages flutter test foundation service extensions test dart nosuchmethoderror the getter pathsegments was called on null receiver null tried calling pathsegments package flutter tools src test flutter platform dart flutterplatform starttest asynchronous gap dart async future impl dart completer completeerror dart async patch dart async async patch dart asyncawaitcompleter completeerror package flutter tools src test flutter platform dart flutterplatform starttest asynchronous gap dart async zone dart customzone registerunarycallback dart async patch dart async async patch dart asyncthenwrapperhelper package flutter tools src test flutter platform dart flutterplatform starttest package flutter tools src test flutter platform dart flutterplatform loadchannel package test src runner plugin platform dart platformplugin load asynchronous gap dart async zone dart customzone registerunarycallback dart async patch dart async async patch dart asyncthenwrapperhelper package test src runner loader dart loader loadfile package test src runner load suite dart new loadsuite asynchronous gap dart async zone dart customzone registercallback dart async zone dart customzone bindcallbackguarded dart async timer dart new timer dart async timer dart timer run dart async future dart new future package test src backend invoker dart invoker onrun
| 1
|
149,889
| 11,939,251,368
|
IssuesEvent
|
2020-04-02 14:57:03
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
reopened
|
Do not show progress UI if progress sequence is short
|
debug feature-request on-testplan
|
If I want to use progress events for operations that may take long in some cases, but are mostly short running, I run into the problem that I do not know upfront whether to send progress events or not.
That means I always have to use progress so that progress is displayed if I need them.
This results in flicker in the notification "bell".
To repro replace the "nextRequest" method of mockDebug by the following:
```js
protected nextRequest(response: DebugProtocol.NextResponse, args:
DebugProtocol.NextArguments): void {
const ID = '' + this._progressId++;
this.sendEvent(new ProgressStartEvent(ID, "Step"));
this.sendEvent(new ProgressUpdateEvent(ID, "update"));
this._runtime.step();
this.sendEvent(new ProgressEndEvent(ID, "end"));
this.sendResponse(response);
}
```
If you now step through a Readme.md the little red dot on the bell flashes and if notifications are open I even see a notification popping up and disappearing immediately.
I suggest that we only show the progress UI if the progress runs longer than some threshold (e.g. 500 ms).
|
1.0
|
Do not show progress UI if progress sequence is short - If I want to use progress events for operations that may take long in some cases, but are mostly short running, I run into the problem that I do not know upfront whether to send progress events or not.
That means I always have to use progress so that progress is displayed if I need them.
This results in flicker in the notification "bell".
To repro replace the "nextRequest" method of mockDebug by the following:
```js
protected nextRequest(response: DebugProtocol.NextResponse, args:
DebugProtocol.NextArguments): void {
const ID = '' + this._progressId++;
this.sendEvent(new ProgressStartEvent(ID, "Step"));
this.sendEvent(new ProgressUpdateEvent(ID, "update"));
this._runtime.step();
this.sendEvent(new ProgressEndEvent(ID, "end"));
this.sendResponse(response);
}
```
If you now step through a Readme.md the little red dot on the bell flashes and if notifications are open I even see a notification popping up and disappearing immediately.
I suggest that we only show the progress UI if the progress runs longer than some threshold (e.g. 500 ms).
|
test
|
do not show progress ui if progress sequence is short if i want to use progress events for operations that may take long in some cases but are mostly short running i run into the problem that i do not know upfront whether to send progress events or not that means i always have to use progress so that progress is displayed if i need them this results in flicker in the notification bell to repro replace the nextrequest method of mockdebug by the following js protected nextrequest response debugprotocol nextresponse args debugprotocol nextarguments void const id this progressid this sendevent new progressstartevent id step this sendevent new progressupdateevent id update this runtime step this sendevent new progressendevent id end this sendresponse response if you now step through a readme md the little red dot on the bell flashes and if notifications are open i even see a notification popping up and disappearing immediately i suggest that we only show the progress ui if the progress runs longer than some threshold e g ms
| 1
|
392,738
| 26,956,588,158
|
IssuesEvent
|
2023-02-08 15:17:47
|
CC-in-the-Cloud/General
|
https://api.github.com/repos/CC-in-the-Cloud/General
|
opened
|
Consistency of Comma Usage
|
documentation Trusted Provider/Platform Main Guidance Doc Consistency Editorial
|
Documents should be consistent on use/disuse of oxford comma ("x, y, and z" vs "x, y and z").
|
1.0
|
Consistency of Comma Usage - Documents should be consistent on use/disuse of oxford comma ("x, y, and z" vs "x, y and z").
|
non_test
|
consistency of comma usage documents should be consistent on use disuse of oxford comma x y and z vs x y and z
| 0
|
48,406
| 2,998,140,412
|
IssuesEvent
|
2015-07-23 12:29:49
|
jayway/powermock
|
https://api.github.com/repos/jayway/powermock
|
closed
|
Mocking fails if class is not public or protected when doing signed mocking.
|
bug imported Milestone-Release1.0 Priority-High
|
_From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on October 23, 2008 11:31:21_
When doing signed mocking we change the package name from x.y.z to
powermock.x.y.z. CGLib then fails to create a mock if the class to mock is
neither public or protected. An easy solution would be to change the
visibility of all classes loaded by our classloader to public.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=32_
|
1.0
|
Mocking fails if class is not public or protected when doing signed mocking. - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on October 23, 2008 11:31:21_
When doing signed mocking we change the package name from x.y.z to
powermock.x.y.z. CGLib then fails to create a mock if the class to mock is
neither public or protected. An easy solution would be to change the
visibility of all classes loaded by our classloader to public.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=32_
|
non_test
|
mocking fails if class is not public or protected when doing signed mocking from on october when doing signed mocking we change the package name from x y z to powermock x y z cglib then fails to create a mock if the class to mock is neither public or protected an easy solution would be to change the visibility of all classes loaded by our classloader to public original issue
| 0
|
33,050
| 14,003,674,676
|
IssuesEvent
|
2020-10-28 16:09:41
|
MicrosoftDocs/windowsserverdocs
|
https://api.github.com/repos/MicrosoftDocs/windowsserverdocs
|
reopened
|
No way to remove a URL for a Workspace
|
Pri2 remote-desktop-services/tech windows-server/prod
|
Nor is there a way to change the login credential that you use to connect to a workspace. Please get this functionality put into the client. It's kind of worthless for the RDWEB stuff without the ability to logoff of a workspace and test other user accounts.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0517ea6a-9417-31a8-39a5-f6128387fee5
* Version Independent ID: 7636ec70-e1dd-8c6b-516c-8af324d9f645
* Content: [Get started with the macOS client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac#feedback)
* Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/remote-desktop-mac.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/remote-desktop-mac.md)
* Product: **windows-server**
* Technology: **remote-desktop-services**
* GitHub Login: @lizap
* Microsoft Alias: **elizapo**
|
1.0
|
No way to remove a URL for a Workspace - Nor is there a way to change the login credential that you use to connect to a workspace. Please get this functionality put into the client. It's kind of worthless for the RDWEB stuff without the ability to logoff of a workspace and test other user accounts.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0517ea6a-9417-31a8-39a5-f6128387fee5
* Version Independent ID: 7636ec70-e1dd-8c6b-516c-8af324d9f645
* Content: [Get started with the macOS client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac#feedback)
* Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/remote-desktop-mac.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/remote-desktop-mac.md)
* Product: **windows-server**
* Technology: **remote-desktop-services**
* GitHub Login: @lizap
* Microsoft Alias: **elizapo**
|
non_test
|
no way to remove a url for a workspace nor is there a way to change the login credential that you use to connect to a workspace please get this functionality put into the client it s kind of worthless for the rdweb stuff without the ability to logoff of a workspace and test other user accounts document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product windows server technology remote desktop services github login lizap microsoft alias elizapo
| 0
|
91,900
| 15,856,770,521
|
IssuesEvent
|
2021-04-08 03:08:41
|
venkateshreddypala/AngOCR
|
https://api.github.com/repos/venkateshreddypala/AngOCR
|
opened
|
CVE-2019-10744 (High) detected in lodash-4.17.11.tgz
|
security vulnerability
|
## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: AngOCR/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.1.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10744 (High) detected in lodash-4.17.11.tgz - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: AngOCR/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.1.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file angocr ui package json path to vulnerable library angocr ui node modules lodash package json dependency hierarchy karma tgz root library x lodash tgz vulnerable library vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template step up your open source security game with whitesource
| 0
|
305,297
| 26,376,979,524
|
IssuesEvent
|
2023-01-12 04:02:30
|
risingwavelabs/risingwave
|
https://api.github.com/repos/risingwavelabs/risingwave
|
closed
|
Tracking: support CH-Benchmark for streaming e2e tests
|
component/test
|
- [x] Support streaming also, previously only batch queries
- [x] Support the missing Q5(no data)
- [x] Support the missing Q7(no data)
- [x] Support the missing Q8(no data)
- [x] Support the missing Q19(precision error)
- [x] #6875
~Related: #5190~
|
1.0
|
Tracking: support CH-Benchmark for streaming e2e tests - - [x] Support streaming also, previously only batch queries
- [x] Support the missing Q5(no data)
- [x] Support the missing Q7(no data)
- [x] Support the missing Q8(no data)
- [x] Support the missing Q19(precision error)
- [x] #6875
~Related: #5190~
|
test
|
tracking support ch benchmark for streaming tests support streaming also previously only batch queries support the missing no data support the missing no data support the missing no data support the missing precision error related
| 1
|
334,383
| 29,833,336,285
|
IssuesEvent
|
2023-06-18 14:35:46
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
opened
|
Fix miscellaneous_ops.test_torch_cummax
|
PyTorch Frontend Sub Task Failing Test
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix miscellaneous_ops.test_torch_cummax - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5303948137/jobs/9599918775"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix miscellaneous ops test torch cummax numpy a href src jax a href src torch a href src tensorflow a href src paddle a href src
| 1
|
45,463
| 5,716,610,840
|
IssuesEvent
|
2017-04-19 15:28:42
|
researchstudio-sat/webofneeds
|
https://api.github.com/repos/researchstudio-sat/webofneeds
|
closed
|
wrong use of prefixed uris in owner app
|
bug testing
|
the owner application uses wrong URIs for need descriptions (more specifically, for describing the location)
instead of defining a prefix and using it, the URIs use the string inteded as the prefix as the schema, such as `<s:GeoCoordinates>`
|
1.0
|
wrong use of prefixed uris in owner app - the owner application uses wrong URIs for need descriptions (more specifically, for describing the location)
instead of defining a prefix and using it, the URIs use the string inteded as the prefix as the schema, such as `<s:GeoCoordinates>`
|
test
|
wrong use of prefixed uris in owner app the owner application uses wrong uris for need descriptions more specifically for describing the location instead of defining a prefix and using it the uris use the string inteded as the prefix as the schema such as
| 1
|
150,180
| 11,950,762,553
|
IssuesEvent
|
2020-04-03 15:43:35
|
Cookie-AutoDelete/Cookie-AutoDelete
|
https://api.github.com/repos/Cookie-AutoDelete/Cookie-AutoDelete
|
opened
|
[BUG] High CPU usage when CAD button is clicked, and while popup is displayed
|
untested bug/issue
|
<!-- PLEASE READ THE FAQ AND DOCUMENTATION BEFORE POSTING:
Yes, I have.
**Describe the bug**
High CPU usage on a Chrome's process constantly when the CAD extension button at the top right of the browser window is clicked, and while the popup is displayed.
However, the Chrome's process' CPU usage came back to normal as soon as the popup is closed.
**To Reproduce**
Steps to reproduce the behavior:
1. Open Chrome.
2. Click on the CAD extension button at the top right.
3. Observe the Chrome's process in the Task Manager's "Details" tab.
**Expected behavior**
The Chrome's process' CPU usage should just briefly go up once the CAD button is clicked, and it should come back down.
**Screenshots**
I believe a screenshot is not needed in this case.
But please let me know if you do.
**Your System Info (please complete the following information):**
- OS: Windows 10 (x64)
- Browser Info: Chrome 80.0.3987.163 (x64)
- CookieAutoDelete Version: 3.1.1
**Additional context**
This was tested on a new and clean Chrome installation.
On Vivaldi (2.11.1811.52), the situation is worse. The process' CPU usage never comes back down until the browser is restarted. (Yes, I'm aware that Vivaldi is not officially supported. Just an FYI.)
Thank you.
|
1.0
|
[BUG] High CPU usage when CAD button is clicked, and while popup is displayed - <!-- PLEASE READ THE FAQ AND DOCUMENTATION BEFORE POSTING:
Yes, I have.
**Describe the bug**
High CPU usage on a Chrome's process constantly when the CAD extension button at the top right of the browser window is clicked, and while the popup is displayed.
However, the Chrome's process' CPU usage came back to normal as soon as the popup is closed.
**To Reproduce**
Steps to reproduce the behavior:
1. Open Chrome.
2. Click on the CAD extension button at the top right.
3. Observe the Chrome's process in the Task Manager's "Details" tab.
**Expected behavior**
The Chrome's process' CPU usage should just briefly go up once the CAD button is clicked, and it should come back down.
**Screenshots**
I believe a screenshot is not needed in this case.
But please let me know if you do.
**Your System Info (please complete the following information):**
- OS: Windows 10 (x64)
- Browser Info: Chrome 80.0.3987.163 (x64)
- CookieAutoDelete Version: 3.1.1
**Additional context**
This was tested on a new and clean Chrome installation.
On Vivaldi (2.11.1811.52), the situation is worse. The process' CPU usage never comes back down until the browser is restarted. (Yes, I'm aware that Vivaldi is not officially supported. Just an FYI.)
Thank you.
|
test
|
high cpu usage when cad button is clicked and while popup is displayed please read the faq and documentation before posting yes i have describe the bug high cpu usage on a chrome s process constantly when the cad extension button at the top right of the browser window is clicked and while the popup is displayed however the chrome s process cpu usage came back to normal as soon as the popup is closed to reproduce steps to reproduce the behavior open chrome click on the cad extension button at the top right observe the chrome s process in the task manager s details tab expected behavior the chrome s process cpu usage should just briefly go up once the cad button is clicked and it should come back down screenshots i believe a screenshot is not needed in this case but please let me know if you do your system info please complete the following information os windows browser info chrome cookieautodelete version additional context this was tested on a new and clean chrome installation on vivaldi the situation is worse the process cpu usage never comes back down until the browser is restarted yes i m aware that vivaldi is not officially supported just an fyi thank you
| 1
|