Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
260,731 | 22,644,334,977 | IssuesEvent | 2022-07-01 07:10:25 | zkSNACKs/WalletWasabi | https://api.github.com/repos/zkSNACKs/WalletWasabi | closed | CoinJoinClient double started in Manual mode | ww2 testing | ### General Description
A clear and concise description of what the bug is.
### How To Reproduce?
Let the client run and wait until one successful round in Manual CoinJoin mode.
When the round finished both of these lines will be triggered:
https://github.com/zkSNACKs/WalletWasabi/blob/d634fa58386ec56f92d72f7d65cd019e48998042/WalletWasabi.Fluent/ViewModels/Wallets/CoinJoinStateViewModel.cs#L248
https://github.com/zkSNACKs/WalletWasabi/blob/d634fa58386ec56f92d72f7d65cd019e48998042/WalletWasabi.Fluent/ViewModels/Wallets/CoinJoinStateViewModel.cs#L253
### Logs
```
2022-05-19 15:52:07.461 [54] INFO AliceClient.SignTransactionAsync (237) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2), Alice (cbf126da-df1d-c0e5-10cb-0d5310f40a33): Posted a signature.
2022-05-19 15:52:07.464 [54] DEBUG CoinJoinClient.ProceedWithSigningStateAsync (669) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2): Alices(3) have signed the coinjoin tx.
2022-05-19 15:52:10.203 [49] DEBUG CoinJoinClient.StartRoundAsync (218) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2): Broadcasted. Coinjoin TxId: (c3c82cc4ebe577cf141273a0972d85c70ff218a8863af0749695d7f6375556c7)
2022-05-19 15:52:10.210 [49] DEBUG CoinJoinClient.LogCoinJoinSummary (425) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2):
Input total: +0.00325266 Eff: +0.00299598 NetwFee: +0.00025668 CoordFee: 0.00
Outpu total: +0.00272690 Eff: +0.00245782 NetwFee: +0.00026908
Total diff : +0.00052576
Effec diff : +0.00053816
Total fee : +0.00052576
2022-05-19 15:52:20.628 [46] DEBUG CoinJoinManager.HandleCoinJoinCommandsAsync (144) Wallet (Random Wallet): Cannot start coinjoin, bacause it is already running.
2022-05-19 15:52:20.652 [21] INFO CoinJoinManager.HandleCoinJoinFinalizationAsync (306) Wallet (Random Wallet): CoinJoinClient finished!
2022-05-19 15:52:37.124 [19] DEBUG CoinJoinManager.HandleCoinJoinCommandsAsync (198) Wallet (Random Wallet): Coinjoin client started, auto-coinjoin: 'False' overridePlebStop:'True'.
2022-05-19 15:52:42.586 [6] DEBUG CoinJoinClient.CreateRegisterAndConfirmCoinsAsync (289) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae): Input registration started - it will end in: 00:01:01.
2022-05-19 15:52:44.308 [52] DEBUG WabiSabiHttpApiClient.SendWithRetriesAsync (89) Received a response for RegisterInput in 0.01s.
2022-05-19 15:52:44.326 [52] INFO AliceClient.RegisterInputAsync (98) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae), Alice (313f9064-9ded-0170-d2c6-9312f71d8ffb): Registered 77f39ee5efbc2c935334d02500323f2dcd50693a4827dffc950d89baa1279142-0.
2022-05-19 15:52:46.083 [50] DEBUG WabiSabiHttpApiClient.SendWithRetriesAsync (89) Received a response for RegisterInput in 0.01s.
2022-05-19 15:52:46.103 [50] INFO AliceClient.RegisterInputAsync (98) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae), Alice (da2eba0f-fae2-36e1-aba5-dc6ff21d02c1): Registered deca4c5ba3309a4a19cae20f65fd34a4641db6a730a17271355377c280c2d42c-1.
```
| 1.0 | CoinJoinClient double started in Manual mode - ### General Description
A clear and concise description of what the bug is.
### How To Reproduce?
Let the client run and wait until one successful round in Manual CoinJoin mode.
When the round finished both of these lines will be triggered:
https://github.com/zkSNACKs/WalletWasabi/blob/d634fa58386ec56f92d72f7d65cd019e48998042/WalletWasabi.Fluent/ViewModels/Wallets/CoinJoinStateViewModel.cs#L248
https://github.com/zkSNACKs/WalletWasabi/blob/d634fa58386ec56f92d72f7d65cd019e48998042/WalletWasabi.Fluent/ViewModels/Wallets/CoinJoinStateViewModel.cs#L253
### Logs
```
2022-05-19 15:52:07.461 [54] INFO AliceClient.SignTransactionAsync (237) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2), Alice (cbf126da-df1d-c0e5-10cb-0d5310f40a33): Posted a signature.
2022-05-19 15:52:07.464 [54] DEBUG CoinJoinClient.ProceedWithSigningStateAsync (669) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2): Alices(3) have signed the coinjoin tx.
2022-05-19 15:52:10.203 [49] DEBUG CoinJoinClient.StartRoundAsync (218) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2): Broadcasted. Coinjoin TxId: (c3c82cc4ebe577cf141273a0972d85c70ff218a8863af0749695d7f6375556c7)
2022-05-19 15:52:10.210 [49] DEBUG CoinJoinClient.LogCoinJoinSummary (425) Round (e55e776f07f15550d03f34a75d4b1de59fe363bdf40cf0cdc31094d165dd93c2):
Input total: +0.00325266 Eff: +0.00299598 NetwFee: +0.00025668 CoordFee: 0.00
Outpu total: +0.00272690 Eff: +0.00245782 NetwFee: +0.00026908
Total diff : +0.00052576
Effec diff : +0.00053816
Total fee : +0.00052576
2022-05-19 15:52:20.628 [46] DEBUG CoinJoinManager.HandleCoinJoinCommandsAsync (144) Wallet (Random Wallet): Cannot start coinjoin, bacause it is already running.
2022-05-19 15:52:20.652 [21] INFO CoinJoinManager.HandleCoinJoinFinalizationAsync (306) Wallet (Random Wallet): CoinJoinClient finished!
2022-05-19 15:52:37.124 [19] DEBUG CoinJoinManager.HandleCoinJoinCommandsAsync (198) Wallet (Random Wallet): Coinjoin client started, auto-coinjoin: 'False' overridePlebStop:'True'.
2022-05-19 15:52:42.586 [6] DEBUG CoinJoinClient.CreateRegisterAndConfirmCoinsAsync (289) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae): Input registration started - it will end in: 00:01:01.
2022-05-19 15:52:44.308 [52] DEBUG WabiSabiHttpApiClient.SendWithRetriesAsync (89) Received a response for RegisterInput in 0.01s.
2022-05-19 15:52:44.326 [52] INFO AliceClient.RegisterInputAsync (98) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae), Alice (313f9064-9ded-0170-d2c6-9312f71d8ffb): Registered 77f39ee5efbc2c935334d02500323f2dcd50693a4827dffc950d89baa1279142-0.
2022-05-19 15:52:46.083 [50] DEBUG WabiSabiHttpApiClient.SendWithRetriesAsync (89) Received a response for RegisterInput in 0.01s.
2022-05-19 15:52:46.103 [50] INFO AliceClient.RegisterInputAsync (98) Round (ed826cce3d4b385f847dc12160f32cb46bc9f982ec45c0440fa22ec112d99bae), Alice (da2eba0f-fae2-36e1-aba5-dc6ff21d02c1): Registered deca4c5ba3309a4a19cae20f65fd34a4641db6a730a17271355377c280c2d42c-1.
```
| test | coinjoinclient double started in manual mode general description a clear and concise description of what the bug is how to reproduce let the client run and wait until one successful round in manual coinjoin mode when the round finished both of these lines will be triggered logs info aliceclient signtransactionasync round alice posted a signature debug coinjoinclient proceedwithsigningstateasync round alices have signed the coinjoin tx debug coinjoinclient startroundasync round broadcasted coinjoin txid debug coinjoinclient logcoinjoinsummary round input total eff netwfee coordfee outpu total eff netwfee total diff effec diff total fee debug coinjoinmanager handlecoinjoincommandsasync wallet random wallet cannot start coinjoin bacause it is already running info coinjoinmanager handlecoinjoinfinalizationasync wallet random wallet coinjoinclient finished debug coinjoinmanager handlecoinjoincommandsasync wallet random wallet coinjoin client started auto coinjoin false overrideplebstop true debug coinjoinclient createregisterandconfirmcoinsasync round input registration started it will end in debug wabisabihttpapiclient sendwithretriesasync received a response for registerinput in info aliceclient registerinputasync round alice registered debug wabisabihttpapiclient sendwithretriesasync received a response for registerinput in info aliceclient registerinputasync round alice registered | 1 |
211,617 | 16,329,769,648 | IssuesEvent | 2021-05-12 07:45:48 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: tpcc/headroom/n4cpu16 failed | C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker | [(roachtest).tpcc/headroom/n4cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2949856&tab=buildLog) on [release-20.1@3ae6b59f0ea417486c3c41f85ea5baff3b5a00df](https://github.com/cockroachdb/cockroach/commits/3ae6b59f0ea417486c3c41f85ea5baff3b5a00df):
```
| 2883.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2883.0s 0 0.0 182.0 0.0 0.0 0.0 0.0 payment
| 2883.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2884.0s 0 0.0 181.1 0.0 0.0 0.0 0.0 newOrder
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2884.0s 0 0.0 181.9 0.0 0.0 0.0 0.0 payment
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2885.0s 0 0.0 181.0 0.0 0.0 0.0 0.0 newOrder
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2885.0s 0 0.0 181.8 0.0 0.0 0.0 0.0 payment
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2886.0s 0 0.0 181.0 0.0 0.0 0.0 0.0 newOrder
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2886.0s 0 0.0 181.8 0.0 0.0 0.0 0.0 payment
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2629,tpcc.go:174,tpcc.go:238,test_runner.go:749: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2617
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2625
| main.runTPCC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:174
| main.registerTPCC.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:238
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2673
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2587
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpcc/headroom/n4cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2949856&tab=artifacts#/tpcc/headroom/n4cpu16)
Related:
- #64624 roachtest: tpcc/headroom/n4cpu16 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fheadroom%2Fn4cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: tpcc/headroom/n4cpu16 failed - [(roachtest).tpcc/headroom/n4cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2949856&tab=buildLog) on [release-20.1@3ae6b59f0ea417486c3c41f85ea5baff3b5a00df](https://github.com/cockroachdb/cockroach/commits/3ae6b59f0ea417486c3c41f85ea5baff3b5a00df):
```
| 2883.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2883.0s 0 0.0 182.0 0.0 0.0 0.0 0.0 payment
| 2883.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2884.0s 0 0.0 181.1 0.0 0.0 0.0 0.0 newOrder
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2884.0s 0 0.0 181.9 0.0 0.0 0.0 0.0 payment
| 2884.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2885.0s 0 0.0 181.0 0.0 0.0 0.0 0.0 newOrder
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2885.0s 0 0.0 181.8 0.0 0.0 0.0 0.0 payment
| 2885.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 delivery
| 2886.0s 0 0.0 181.0 0.0 0.0 0.0 0.0 newOrder
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 orderStatus
| 2886.0s 0 0.0 181.8 0.0 0.0 0.0 0.0 payment
| 2886.0s 0 0.0 18.2 0.0 0.0 0.0 0.0 stockLevel
Wraps: (5) exit status 20
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2629,tpcc.go:174,tpcc.go:238,test_runner.go:749: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2617
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2625
| main.runTPCC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:174
| main.registerTPCC.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:238
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:749
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2673
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2587
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpcc/headroom/n4cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2949856&tab=artifacts#/tpcc/headroom/n4cpu16)
Related:
- #64624 roachtest: tpcc/headroom/n4cpu16 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fheadroom%2Fn4cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest tpcc headroom failed on orderstatus payment stocklevel delivery neworder orderstatus payment stocklevel elapsed errors ops sec inst ops sec cum ms ms ms pmax ms delivery neworder orderstatus payment stocklevel delivery neworder orderstatus payment stocklevel wraps exit status error types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror cluster go tpcc go tpcc go test runner go monitor failure monitor task failed t fatal was called attached stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runtpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main registertpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withmessage withstack withstack errutil withmessage withstack withstack errors errorstring more artifacts related roachtest tpcc headroom failed powered by | 1 |
668,643 | 22,592,327,609 | IssuesEvent | 2022-06-28 21:14:12 | Ijwu/Archipelago.HollowKnight | https://api.github.com/repos/Ijwu/Archipelago.HollowKnight | closed | Stag stations don't cost Geo, but should | enhancement mid priority | StagStation placement handler should place vanilla cost for the Stag station and force the player to have to buy it to unlock it. | 1.0 | Stag stations don't cost Geo, but should - StagStation placement handler should place vanilla cost for the Stag station and force the player to have to buy it to unlock it. | non_test | stag stations don t cost geo but should stagstation placement handler should place vanilla cost for the stag station and force the player to have to buy it to unlock it | 0 |
27,533 | 4,320,627,453 | IssuesEvent | 2016-07-25 06:30:37 | fossasia/engelsystem | https://api.github.com/repos/fossasia/engelsystem | opened | Writing test for Travis-CI | testing | We need to set up Travis to run tests for our system. We need to write test codes for model and view, that Travis will run, and script that will allow travis to set up database and run other commands for tests | 1.0 | Writing test for Travis-CI - We need to set up Travis to run tests for our system. We need to write test codes for model and view, that Travis will run, and script that will allow travis to set up database and run other commands for tests | test | writing test for travis ci we need to set up travis to run tests for our system we need to write test codes for model and view that travis will run and script that will allow travis to set up database and run other commands for tests | 1 |
270,016 | 23,484,582,541 | IssuesEvent | 2022-08-17 13:29:36 | instadeepai/Mava | https://api.github.com/repos/instadeepai/Mava | closed | [TEST] Jax separate networks step | test | ### What do you want to test?
Unit test for the `MAPGWithTrustRegionStepSeparateNetworks` component of the Jax PPO implementation that makes use of separate critic and policy networks.
### Outline of test structure
* Unit tests
* Test components and hooks
### Definition of done
Passing checks, cover all hooks, edge cases considered
### Mandatory checklist before making a PR
* [ ] The success criteria laid down in “Definition of done” are met.
* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.
* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation. | 1.0 | [TEST] Jax separate networks step - ### What do you want to test?
Unit test for the `MAPGWithTrustRegionStepSeparateNetworks` component of the Jax PPO implementation that makes use of separate critic and policy networks.
### Outline of test structure
* Unit tests
* Test components and hooks
### Definition of done
Passing checks, cover all hooks, edge cases considered
### Mandatory checklist before making a PR
* [ ] The success criteria laid down in “Definition of done” are met.
* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.
* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation. | test | jax separate networks step what do you want to test unit test for the mapgwithtrustregionstepseparatenetworks component of the jax ppo implementation that makes use of separate critic and policy netowrks outline of test structure unit tests test components and hooks definition of done passing checks cover all hooks edge cases considered mandatory checklist before making a pr the success criteria laid down in “definition of done” are met test code is documented docstrings for methods and classes static types for arguments documentation is updated readme contributing or other documentation | 1 |
9,625 | 8,056,467,992 | IssuesEvent | 2018-08-02 12:49:29 | eslint/eslint | https://api.github.com/repos/eslint/eslint | opened | Internal consistent-docs-url rule crashes if meta.docs isn't present | accepted bug infrastructure rule | <!--
ESLint adheres to the [JS Foundation Code of Conduct](https://js.foundation/community/code-of-conduct).
This template is for bug reports. If you are here for another reason, please see below:
1. To propose a new rule: https://eslint.org/docs/developer-guide/contributing/new-rules
2. To request a rule change: https://eslint.org/docs/developer-guide/contributing/rule-changes
3. To request a change that is not a bug fix, rule change, or new rule: https://eslint.org/docs/developer-guide/contributing/changes
4. If you have any questions, please stop by our chatroom: https://gitter.im/eslint/eslint
Note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue.
-->
**Tell us about your environment**
* **ESLint Version:** master
* **Node Version:** n/a
* **npm Version:** n/a
**What parser (default, Babel-ESLint, etc.) are you using?**
Default
**Please show your full configuration:**
According to the internal ESLint config around rules:
```yml
rules:
rulesdir/consistent-docs-url: "error"
```
**What did you do? Please include the actual source code causing the issue, as well as the command that you used to run ESLint.**
<!-- Paste the source code below: -->
```js
module.exports = {
meta: {}
};
```
<!-- Paste the command you used to run ESLint: -->
```bash
npm run lint
```
**What did you expect to happen?**
Either a lint error about the lack of `meta.docs` (or `meta.docs.url`), or nothing if that is covered by another rule.
**What actually happened? Please include the actual, raw output from ESLint.**
ESLint crashes, with an error message from `context.report` that the node location must be passed if the node is not passed.
--------
I believe the problem is in [this area](https://github.com/eslint/eslint/blob/master/tools/internal-rules/consistent-docs-url.js#L48-L57), where we check for the absence of `metaDocsUrl` but don't check that `metaDocs` is non-null before sending it to `context.report`. | 1.0 | Internal consistent-docs-url rule crashes if meta.docs isn't present - <!--
ESLint adheres to the [JS Foundation Code of Conduct](https://js.foundation/community/code-of-conduct).
This template is for bug reports. If you are here for another reason, please see below:
1. To propose a new rule: https://eslint.org/docs/developer-guide/contributing/new-rules
2. To request a rule change: https://eslint.org/docs/developer-guide/contributing/rule-changes
3. To request a change that is not a bug fix, rule change, or new rule: https://eslint.org/docs/developer-guide/contributing/changes
4. If you have any questions, please stop by our chatroom: https://gitter.im/eslint/eslint
Note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue.
-->
**Tell us about your environment**
* **ESLint Version:** master
* **Node Version:** n/a
* **npm Version:** n/a
**What parser (default, Babel-ESLint, etc.) are you using?**
Default
**Please show your full configuration:**
According to the internal ESLint config around rules:
```yml
rules:
rulesdir/consistent-docs-url: "error"
```
**What did you do? Please include the actual source code causing the issue, as well as the command that you used to run ESLint.**
<!-- Paste the source code below: -->
```js
module.exports = {
meta: {}
};
```
<!-- Paste the command you used to run ESLint: -->
```bash
npm run lint
```
**What did you expect to happen?**
Either a lint error about the lack of `meta.docs` (or `meta.docs.url`), or nothing if that is covered by another rule.
**What actually happened? Please include the actual, raw output from ESLint.**
ESLint crashes, with an error message from `context.report` that the node location must be passed if the node is not passed.
--------
I believe the problem is in [this area](https://github.com/eslint/eslint/blob/master/tools/internal-rules/consistent-docs-url.js#L48-L57), where we check for the absence of `metaDocsUrl` but don't check that `metaDocs` is non-null before sending it to `context.report`. | non_test | internal consistent docs url rule crashes if meta docs isn t present eslint adheres to the this template is for bug reports if you are here for another reason please see below to propose a new rule to request a rule change to request a change that is not a bug fix rule change or new rule if you have any questions please stop by our chatroom note that leaving sections blank will make it difficult for us to troubleshoot and we may have to close the issue tell us about your environment eslint version master node version n a npm version n a what parser default babel eslint etc are you using default please show your full configuration according to the internal eslint config around rules yml rules rulesdir consistent docs url error what did you do please include the actual source code causing the issue as well as the command that you used to run eslint js module exports meta bash npm run lint what did you expect to happen either a lint error about the lack of meta docs or meta docs url or nothing if that is covered by another rule what actually happened please include the actual raw output from eslint eslint crashes with an error message from context report that the node location must be passed if the node is not passed i believe the problem is in where we check for the absence of metadocsurl but don t check that metadocs is non null before sending it to context report | 0 |
75,577 | 7,477,582,706 | IssuesEvent | 2018-04-04 08:47:23 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | [cluster] MembershipFailureTest_withTCP.secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted | Team: Core Type: Test-Failure | ```
java.lang.AssertionError: expected:<[127.0.0.1]:5702> but was:<[127.0.0.1]:5701>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at com.hazelcast.test.HazelcastTestSupport.assertMasterAddress(HazelcastTestSupport.java:924)
at com.hazelcast.test.HazelcastTestSupport$14.run(HazelcastTestSupport.java:934)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1066)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1083)
at com.hazelcast.test.HazelcastTestSupport.assertMasterAddressEventually(HazelcastTestSupport.java:929)
at com.hazelcast.internal.cluster.impl.MembershipFailureTest.secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted(MembershipFailureTest.java:642)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:105)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:97)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Sonar/job/Hazelcast-3.x-sonar/com.hazelcast$hazelcast/1563/testReport/junit/com.hazelcast.internal.cluster.impl/MembershipFailureTest_withTCP/secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted/ | 1.0 | [cluster] MembershipFailureTest_withTCP.secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted - ```
java.lang.AssertionError: expected:<[127.0.0.1]:5702> but was:<[127.0.0.1]:5701>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at com.hazelcast.test.HazelcastTestSupport.assertMasterAddress(HazelcastTestSupport.java:924)
at com.hazelcast.test.HazelcastTestSupport$14.run(HazelcastTestSupport.java:934)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1066)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1083)
at com.hazelcast.test.HazelcastTestSupport.assertMasterAddressEventually(HazelcastTestSupport.java:929)
at com.hazelcast.internal.cluster.impl.MembershipFailureTest.secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted(MembershipFailureTest.java:642)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:105)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:97)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
```
https://hazelcast-l337.ci.cloudbees.com/view/Sonar/job/Hazelcast-3.x-sonar/com.hazelcast$hazelcast/1563/testReport/junit/com.hazelcast.internal.cluster.impl/MembershipFailureTest_withTCP/secondMastershipClaimByYounger_shouldRetry_when_firstMastershipClaimByElder_accepted/ | test | membershipfailuretest withtcp secondmastershipclaimbyyounger shouldretry when firstmastershipclaimbyelder accepted java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast test hazelcasttestsupport assertmasteraddress hazelcasttestsupport java at com hazelcast test hazelcasttestsupport run hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport assertmasteraddresseventually hazelcasttestsupport java at com hazelcast internal cluster impl membershipfailuretest secondmastershipclaimbyyounger shouldretry when firstmastershipclaimbyelder accepted membershipfailuretest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java | 1 |
61,569 | 17,023,728,265 | IssuesEvent | 2021-07-03 03:31:27 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Oauth Authorize access page doesn't give the currently logged in user | Component: website Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 5.29pm, Friday, 1st July 2011]**
It would be useful to give the name of the currently logged in user to the Oauth auth screen, so that you can confirm that you are giving access to the correct account, just in case the wrong user happens to be logged in, for example if someone else has used your machine or you have multiple accounts. | 1.0 | Oauth Authorize access page doesn't give the currently logged in user - **[Submitted to the original trac issue database at 5.29pm, Friday, 1st July 2011]**
It would be useful to give the name of the currently logged in user to the Oauth auth screen, so that you can confirm that you are giving access to the correct account, just in case the wrong user happens to be logged in, for example someone else has used your machine or you have multiple account. | non_test | oauth authorize access page doesn t give the currently logged in user it would be useful to give the name of the currently logged in user to the oauth auth screen so that you can confirm that you are giving access to the correct account just in case the wrong user happens to be logged in for example someone else has used your machine or you have multiple account | 0 |
324,787 | 24,016,854,327 | IssuesEvent | 2022-09-15 02:10:55 | DanCampos12/DivDados | https://api.github.com/repos/DanCampos12/DivDados | closed | 3º versão - Proposta de Solução de Software. | documentation enhancement | Correção da Proposta de Solução de Software com base na última revisão. | 1.0 | 3º versão - Proposta de Solução de Software. - Correção da Proposta de Solução de Software com base na última revisão. | non_test | versão proposta de solução de software correção da proposta de solução de software com base na última revisão | 0 |
29,513 | 4,506,189,454 | IssuesEvent | 2016-09-02 02:08:34 | raveloda/Coatl-Aerospace | https://api.github.com/repos/raveloda/Coatl-Aerospace | closed | Test RemoteTech antenna Configs | testing work in progress | See #5
General Balancing pass on the antenna stats based on stock and RT configs. This should include consideration for the new antennas being added.
- [x] Test komodo's configs (screwed up by Github formatting) | 1.0 | Test RemoteTech antenna Configs - See #5
General Balancing pass on the antenna stats based on stock and RT configs. This should include consideration for the new antennas being added.
- [x] Test komodo's configs (screwed up by Github formatting) | test | test remotetech antenna configs see general balancing pass on the antenna stats based on stock and rt configs this should include consideration for the new antenna beings added test komodo s configs screwed up by github formatting | 1 |
710,972 | 24,445,791,830 | IssuesEvent | 2022-10-06 17:50:46 | MCFabian/social-media-widget | https://api.github.com/repos/MCFabian/social-media-widget | closed | Bug report: hover style for phone does not work -> wrong id in new ui (gen.js/style.css) | bug high priority | Since the new ui update and the new gen.js, the dynamic id from the gen.js script as changed from telefon to phone.
so change the css (all styles) or change the id given by the gen.js script

| 1.0 | Bug report: hover style for phone does not work -> wrong id in new ui (gen.js/style.css) - Since the new ui update and the new gen.js, the dynamic id from the gen.js script as changed from telefon to phone.
so change the css (all styles) or change the id given by the gen.js script

| non_test | bug report hover style for phone does not work wrong id in new ui gen js style css since the new ui update and the new gen js the dynamic id from the gen js script as changed from telefon to phone so change the css all styles or change the give id from the gen js script | 0 |
785,757 | 27,624,232,421 | IssuesEvent | 2023-03-10 04:36:48 | AY2223S2-CS2113-W12-1/tp | https://api.github.com/repos/AY2223S2-CS2113-W12-1/tp | opened | Create a remove appointment feature. | type.Story priority.High | As a user, I am able to remove appointments if necessary so that the appointment list is not clogged up. | 1.0 | Create a remove appointment feature. - As a user, I am able to remove appointments if necessary so that the appointment list is not clogged up. | non_test | create a remove appointment feature as a user i am able to remove appointments if necessary so that the appointment list is not clogged up | 0 |
805,554 | 29,524,826,258 | IssuesEvent | 2023-06-05 06:51:38 | telerik/kendo-react | https://api.github.com/repos/telerik/kendo-react | opened | Heatmap renders wrong colors if there are negative values | bug pkg:charts Priority 1 SEV: Medium | When the Heatmap has negative values, the default logic for generating the colors fails and it is using the range from 0 to the max positive value:
- https://stackblitz.com/edit/react-n6g4fp?file=app%2Fmain.jsx
The expected behavior is to have the colors range from the min value to the max value.
A temporary workaround is to define a custom "color" for the series:
- https://stackblitz.com/edit/react-c5hpwv?file=app%2Fmake-data-objects.js,app%2Fmain.jsx | 1.0 | Heatmap renders wrong colors if there are negative values - When the Heatmap has negative values, the default logic for generating the colors fails and it is using the range from 0 to the max positive value:
- https://stackblitz.com/edit/react-n6g4fp?file=app%2Fmain.jsx
The expected behavior is to have the colors range from the min value to the max value.
A temporary workaround is to define a custom "color" for the series:
- https://stackblitz.com/edit/react-c5hpwv?file=app%2Fmake-data-objects.js,app%2Fmain.jsx | non_test | heatmap renders wrong colors if there are negative values when the heatmap has negative values the default logic for generating the colors fails and it is using the range from to the max positive value the expected behavior is to have the colors range from the min value to the max value a temporary workaround is to define a custom color for the series | 0 |
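The record above expects heatmap colors to span the full [min, max] range rather than [0, max]. A minimal, hedged Python sketch of that normalization (illustrative only, not the KendoReact implementation; all names are hypothetical):

```python
def normalize(values):
    """Map each value to [0, 1] over the full observed range,
    so negative values participate instead of being clamped at 0."""
    lo, hi = min(values), max(values)
    if hi == lo:  # degenerate range: every cell gets the midpoint
        return [0.5 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def lerp_color(t, start=(255, 255, 255), end=(0, 0, 139)):
    """Linearly interpolate an RGB triple for a normalized value t."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))
```

Defining a custom series color, as in the workaround, amounts to supplying `start`/`end` (and implicitly the range) yourself instead of relying on the 0-based default.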
646,134 | 21,038,523,355 | IssuesEvent | 2022-03-31 10:04:42 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | reopened | appengine.standard_python3.warmup.main_test: test_index failed | priority: p1 type: bug api: appengine samples flakybot: issue flakybot: flaky | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 3b6d797140afa6ca900a1e38a554795bfe800368
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7e7321f5-302f-441b-b01b-0047fd05b625), [Sponge](http://sponge2/7e7321f5-302f-441b-b01b-0047fd05b625)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/appengine/standard_python3/warmup/main_test.py", line 22, in test_index
r = client.get('/')
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/werkzeug/test.py", line 1134, in get
return self.open(*args, **kw)
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/flask/testing.py", line 220, in open
follow_redirects=follow_redirects,
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/werkzeug/test.py", line 1081, in open
builder = EnvironBuilder(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'as_tuple'</pre></details> | 1.0 | appengine.standard_python3.warmup.main_test: test_index failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 3b6d797140afa6ca900a1e38a554795bfe800368
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7e7321f5-302f-441b-b01b-0047fd05b625), [Sponge](http://sponge2/7e7321f5-302f-441b-b01b-0047fd05b625)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/appengine/standard_python3/warmup/main_test.py", line 22, in test_index
r = client.get('/')
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/werkzeug/test.py", line 1134, in get
return self.open(*args, **kw)
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/flask/testing.py", line 220, in open
follow_redirects=follow_redirects,
File "/workspace/appengine/standard_python3/warmup/.nox/py-3-7/lib/python3.7/site-packages/werkzeug/test.py", line 1081, in open
builder = EnvironBuilder(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'as_tuple'</pre></details> | non_test | appengine standard warmup main test test index failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace appengine standard warmup main test py line in test index r client get file workspace appengine standard warmup nox py lib site packages werkzeug test py line in get return self open args kw file workspace appengine standard warmup nox py lib site packages flask testing py line in open follow redirects follow redirects file workspace appengine standard warmup nox py lib site packages werkzeug test py line in open builder environbuilder args kwargs typeerror init got an unexpected keyword argument as tuple | 0 |
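The TypeError in the record above is the classic shape of a dependency removing a keyword argument (Werkzeug dropped `as_tuple`, while Flask's test client still forwarded it). In the actual incident the fix is aligning dependency versions; as a hedged illustration of the failure class, a caller can also filter kwargs against the callee's real signature (names below are hypothetical):

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    """Forward only the keyword arguments func actually accepts,
    dropping the rest instead of raising TypeError."""
    params = inspect.signature(func).parameters
    # If func takes **kwargs itself, everything is supported.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return func(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **supported)

def new_open(path, follow_redirects=False):  # 'as_tuple' was removed here
    return (path, follow_redirects)
```

Calling `call_with_supported_kwargs(new_open, "/", as_tuple=True)` silently drops the stale kwarg instead of crashing the test suite.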
29,772 | 5,887,120,004 | IssuesEvent | 2017-05-17 06:16:15 | catmaid/CATMAID | https://api.github.com/repos/catmaid/CATMAID | closed | Node radius can only be edited in XY view | status: done type: defect | In orthogonal views the mouse movements don't register properly. | 1.0 | Node radius can only be edited in XY view - In orthogonal views the mouse movements don't register properly. | non_test | node radius can only be edited in xy view in orthogonal views the mouse movements don t register properly | 0 |
280,234 | 24,286,131,074 | IssuesEvent | 2022-09-28 22:27:01 | nrwl/nx | https://api.github.com/repos/nrwl/nx | closed | Jest resolves the whole tree structure of all exported elements in a index.ts file | type: bug scope: testing tools | ## Current Behavior
We have created a local module in the nx libs folder and import two elements from this lib in one of our test files. When we run the test, Jest resolves the whole tree structure of all exported elements in the lib's index.ts file.
## Expected Behavior
Jest should not resolve the whole tree structure of all exported elements, but only resolve it for elements imported in the test file.
## Steps to Reproduce
Here is a github [repository](https://github.com/jcabannes/nx-angular-ngxs-jest) which reproduces the bug if you run `npm test` command
### Failure Logs
```
FAIL my-app apps/my-app/src/app/tests/test-app.spec.ts
● Test suite failed to run
TypeError: Cannot read properties of undefined (reading 'child')
8 | imports: [
9 | TestModule.forRoot(
> 10 | EnvServiceProvider.useFactory().parent.child
| ^
11 | ? {
12 | id: 'test',
13 | }
at Object.<anonymous> (../../libs/features/src/app/shared/auth/auth.module.ts:10:45)
at Object.<anonymous> (../../libs/features/src/app/core/core.module.ts:4:1)
at Object.<anonymous> (../../libs/features/src/app/app.module.ts:6:1)
at Object.<anonymous> (../../libs/features/src/index.ts:6:1)
at Object.<anonymous> (src/app/tests/test-app.spec.ts:2:1)
```
Error comes from the metadata of an ngModule imported by an exported module in our lib.
### Environment
```
Node : 16.15.0
OS : linux x64
yarn : 1.22.19
nx : 14.1.7
@nrwl/angular : 14.1.7
@nrwl/cypress : 14.1.7
@nrwl/detox : Not Found
@nrwl/devkit : 14.1.7
@nrwl/eslint-plugin-nx : 14.1.7
@nrwl/express : Not Found
@nrwl/jest : 14.1.7
@nrwl/js : Not Found
@nrwl/linter : 14.1.7
@nrwl/nest : Not Found
@nrwl/next : Not Found
@nrwl/node : Not Found
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : Not Found
@nrwl/react : Not Found
@nrwl/react-native : Not Found
@nrwl/schematics : Not Found
@nrwl/storybook : 14.1.7
@nrwl/web : Not Found
@nrwl/workspace : 14.1.7
typescript : 4.6.4
rxjs : 7.5.5
---------------------------------------
Community plugins:
```
| 1.0 | Jest resolves the whole tree structure of all exported elements in a index.ts file - ## Current Behavior
We have created a local module in the nx libs folder and import two elements from this lib in one of our test files. When we run the test, Jest resolves the whole tree structure of all exported elements in the lib's index.ts file.
## Expected Behavior
Jest should not resolve the whole tree structure of all exported elements, but only resolve it for elements imported in the test file.
## Steps to Reproduce
Here is a github [repository](https://github.com/jcabannes/nx-angular-ngxs-jest) which reproduces the bug if you run `npm test` command
### Failure Logs
```
FAIL my-app apps/my-app/src/app/tests/test-app.spec.ts
● Test suite failed to run
TypeError: Cannot read properties of undefined (reading 'child')
8 | imports: [
9 | TestModule.forRoot(
> 10 | EnvServiceProvider.useFactory().parent.child
| ^
11 | ? {
12 | id: 'test',
13 | }
at Object.<anonymous> (../../libs/features/src/app/shared/auth/auth.module.ts:10:45)
at Object.<anonymous> (../../libs/features/src/app/core/core.module.ts:4:1)
at Object.<anonymous> (../../libs/features/src/app/app.module.ts:6:1)
at Object.<anonymous> (../../libs/features/src/index.ts:6:1)
at Object.<anonymous> (src/app/tests/test-app.spec.ts:2:1)
```
Error comes from the metadata of an ngModule imported by an exported module in our lib.
### Environment
```
Node : 16.15.0
OS : linux x64
yarn : 1.22.19
nx : 14.1.7
@nrwl/angular : 14.1.7
@nrwl/cypress : 14.1.7
@nrwl/detox : Not Found
@nrwl/devkit : 14.1.7
@nrwl/eslint-plugin-nx : 14.1.7
@nrwl/express : Not Found
@nrwl/jest : 14.1.7
@nrwl/js : Not Found
@nrwl/linter : 14.1.7
@nrwl/nest : Not Found
@nrwl/next : Not Found
@nrwl/node : Not Found
@nrwl/nx-cloud : Not Found
@nrwl/nx-plugin : Not Found
@nrwl/react : Not Found
@nrwl/react-native : Not Found
@nrwl/schematics : Not Found
@nrwl/storybook : 14.1.7
@nrwl/web : Not Found
@nrwl/workspace : 14.1.7
typescript : 4.6.4
rxjs : 7.5.5
---------------------------------------
Community plugins:
```
| test | jest resolves the whole tree structure of all exported elements in a index ts file current behavior we have created a local module in nx libs folder we import two elements from this lib in one of our test file when we run the test jest is resolving the whole tree structure of all exported elements in the lib index ts file expected behavior jest should not resolve the whole tree structure of all exported elements but only resolve it for elements imported in the test file steps to reproduce here is a github which reproduces the bug if you run npm test command failure logs fail my app apps my app src app tests test app spec ts ● test suite failed to run typeerror cannot read properties of undefined reading child imports testmodule forroot envserviceprovider usefactory parent child id test at object libs features src app shared auth auth module ts at object libs features src app core core module ts at object libs features src app app module ts at object libs features src index ts at object src app tests test app spec ts error comes from the metadata of an ngmodule imported by an exported module in our lib environment node os linux yarn nx nrwl angular nrwl cypress nrwl detox not found nrwl devkit nrwl eslint plugin nx nrwl express not found nrwl jest nrwl js not found nrwl linter nrwl nest not found nrwl next not found nrwl node not found nrwl nx cloud not found nrwl nx plugin not found nrwl react not found nrwl react native not found nrwl schematics not found nrwl storybook nrwl web not found nrwl workspace typescript rxjs community plugins | 1 |
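The failure in the record above is the barrel-file pattern: importing anything via `index.ts` evaluates every re-export, including modules with import-time side effects. Python packages behave analogously, which this self-contained sketch demonstrates by building two throwaway packages on the fly (all names are hypothetical; this is an analogy, not the nx/Jest fix):

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()

def make_pkg(name, init_body):
    """Write a tiny package: __init__.py plus safe.py and heavy.py."""
    pkg = os.path.join(root, name)
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(init_body)
    with open(os.path.join(pkg, "safe.py"), "w") as f:
        f.write("value = 1\n")
    with open(os.path.join(pkg, "heavy.py"), "w") as f:
        # Import-time side effect, recorded on sys as a global counter.
        f.write("import sys\n"
                "sys._heavy_ran = getattr(sys, '_heavy_ran', 0) + 1\n"
                "heavy_value = 2\n")

# barrelpkg re-exports everything eagerly, like an index.ts barrel file.
make_pkg("barrelpkg", "from .safe import value\nfrom .heavy import heavy_value\n")
# leanpkg keeps __init__ empty, so consumers import submodules directly.
make_pkg("leanpkg", "")

sys.path.insert(0, root)
importlib.import_module("leanpkg.safe")    # heavy.py never executes
lean_ran = getattr(sys, "_heavy_ran", 0)
importlib.import_module("barrelpkg.safe")  # __init__ drags heavy.py in
barrel_ran = getattr(sys, "_heavy_ran", 0)
```

The same reasoning suggests why deep imports (or trimming the barrel) sidestep the reported crash: the broken module is simply never evaluated.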
84,608 | 7,928,729,769 | IssuesEvent | 2018-07-06 12:48:34 | ArkEcosystem/core | https://api.github.com/repos/ArkEcosystem/core | opened | Unify tests | development tests | Currently we don't have a unified way to test all the packages: some of them are using `core-test-utils` or have received more love than others.
- [ ] Unify tests (share more tools between packages and use a standard way of mocking, expecting, etc.) | 1.0 | Unify tests - Currently we don't have a unified way to test all the packages: some of them are using `core-test-utils` or have received more love than others.
- [ ] Unify tests (share more tools between packages and use a standard way of mocking, expecting, etc.) | test | unify tests currently we don t have a unified way to test all the packages some of them are using core test utils or have received more love than others unify tests share more tools between packages and use a standard way of mocking expecting etc | 1 |
130,992 | 10,677,680,524 | IssuesEvent | 2019-10-21 15:50:09 | cigumo/krli | https://api.github.com/repos/cigumo/krli | closed | Armor buff aura and Dante's Relics of Power | kr2 needs testing | In the clip, Dante got Relics of Power lvl 3.
https://streamable.com/cb42p
the buff aura is off and the armor/resistance is lowered 100% by dante. However, the enemy still has armor/resistance when it had none before the buff.
| 1.0 | Armor buff aura and Dante's Relics of Power - In the clip, Dante got Relics of Power lvl 3.
https://streamable.com/cb42p
the buff aura is off and the armor/resistance is lowered 100% by dante. However, the enemy still has armor/resistance when it had none before the buff.
| test | armor buff aura and dante s relics of power in the clip dante got relics of power lvl the buff aura is off and the armor resistance is lowered by dante however the enemy still has armor resistance when it had none before the buff | 1 |
163,861 | 12,748,503,798 | IssuesEvent | 2020-06-26 20:16:44 | QuantConnect/Lean | https://api.github.com/repos/QuantConnect/Lean | closed | Add regression algorithm trading during extended market hours | good first issue testing up for grabs | <!--- This template provides sections for bugs and features. Please delete any irrelevant sections before submitting -->
#### Expected Behavior
<!--- Required. Describe the behavior you expect to see for your case. -->
- Lean has a regression test algorithm that explicitly and actively trades during extended market hours, with the objective of testing extended market hours data and Lean's behavior using it.
#### Actual Behavior
<!--- Required. Describe the actual behavior for your case. -->
- There is no such regression algorithm
#### Potential Solution
<!--- Optional. Describe any potential solutions and/or thoughts as to what may be causing the difference between expected and actual behavior. -->
- Implement C#/Py
- Algorithm should use data already present in the repo. See https://github.com/QuantConnect/Lean/tree/master/Data/equity/usa/minute
#### Reproducing the Problem
<!--- Required for Bugs. Describe how to reproduce the problem. This can be via a failing unit test or a simplified algorithm that reliably demonstrates this issue. -->
N/A
#### System Information
<!--- Required for Bugs. Include any system specific information, such as OS. -->
N/A
#### Checklist
<!--- Confirm that you've provided all the required information. -->
<!--- Required fields --->
- [x] I have completely filled out this template
- [x] I have confirmed that this issue exists on the current `master` branch
- [x] I have confirmed that this is not a duplicate issue by searching [issues](https://github.com/QuantConnect/Lean/issues)
<!--- Required for Bugs, feature request can delete the line below. -->
- [x] I have provided detailed steps to reproduce the issue
<!--- Template inspired by https://github.com/stevemao/github-issue-templates --> | 1.0 | Add regression algorithm trading during extended market hours - <!--- This template provides sections for bugs and features. Please delete any irrelevant sections before submitting -->
#### Expected Behavior
<!--- Required. Describe the behavior you expect to see for your case. -->
- Lean has a regression test algorithm that explicitly and actively trades during extended market hours, with the objective of testing extended market hours data and Lean's behavior using it.
#### Actual Behavior
<!--- Required. Describe the actual behavior for your case. -->
- There is no such regression algorithm
#### Potential Solution
<!--- Optional. Describe any potential solutions and/or thoughts as to what may be causing the difference between expected and actual behavior. -->
- Implement C#/Py
- Algorithm should use data already present in the repo. See https://github.com/QuantConnect/Lean/tree/master/Data/equity/usa/minute
#### Reproducing the Problem
<!--- Required for Bugs. Describe how to reproduce the problem. This can be via a failing unit test or a simplified algorithm that reliably demonstrates this issue. -->
N/A
#### System Information
<!--- Required for Bugs. Include any system specific information, such as OS. -->
N/A
#### Checklist
<!--- Confirm that you've provided all the required information. -->
<!--- Required fields --->
- [x] I have completely filled out this template
- [x] I have confirmed that this issue exists on the current `master` branch
- [x] I have confirmed that this is not a duplicate issue by searching [issues](https://github.com/QuantConnect/Lean/issues)
<!--- Required for Bugs, feature request can delete the line below. -->
- [x] I have provided detailed steps to reproduce the issue
<!--- Template inspired by https://github.com/stevemao/github-issue-templates --> | test | add regression algorithm trading during extended market hours expected behavior lean has a regression test algorithm that explicitly and actively trades during extended market hours with the objective of testing extended market hours data and leans behavior using it actual behavior there is no such regression algorithm potential solution implement c py algorithm should use data already present in the repo see reproducing the problem n a system information n a checklist i have completely filled out this template i have confirmed that this issue exists on the current master branch i have confirmed that this is not a duplicate issue by searching i have provided detailed steps to reproduce the issue | 1 |
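A regression algorithm like the one requested above hinges on classifying bars as regular vs. extended session. As a hedged, self-contained illustration (not Lean's actual exchange-hours API; the US-equity boundaries below are the commonly quoted ones and are an assumption here):

```python
from datetime import time

# Illustrative US-equity session boundaries, exchange local time.
PRE_MARKET_OPEN = time(4, 0)
REGULAR_OPEN = time(9, 30)
REGULAR_CLOSE = time(16, 0)
POST_MARKET_CLOSE = time(20, 0)

def is_extended_hours(t):
    """True for pre-market [4:00, 9:30) and post-market [16:00, 20:00)."""
    pre = PRE_MARKET_OPEN <= t < REGULAR_OPEN
    post = REGULAR_CLOSE <= t < POST_MARKET_CLOSE
    return pre or post

def is_regular_hours(t):
    """True inside the regular session [9:30, 16:00)."""
    return REGULAR_OPEN <= t < REGULAR_CLOSE
```

A regression test would then assert that orders placed while `is_extended_hours` holds are filled (or rejected) exactly as the engine's documented extended-hours behavior prescribes.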
7,774 | 5,198,951,570 | IssuesEvent | 2017-01-23 19:32:37 | Starcounter/Starcounter | https://api.github.com/repos/Starcounter/Starcounter | closed | Using shortnames in queries: problem with ambiguous names and mapped database views | usability | When we are now moving to having several smaller apps running together that have their own mapped views of the database, the likelyhood of having more then one databasetype with the same shortname increases.
This poses a problem since it is really easy to break an working app, by starting another one that happen to have the same shortname for a dabasetype in their mapped model.
This has been mentioned before, but only briefly in shorter discussions, so I add it as an issue so we can discuss and come up with a solution.
As far as I can see there is not too many different options to solve this. I can come up with two:
- The shortnames are resolved by application, so that each app has its own local database views usable with shortnames. Not sure if this is even possible to implement.
- Require all database types to be specified with a fully qualified name in queries. This will be the simplest solution and will make it certain that all queries will work. The downside is of course that you have to write the full name, which can be quite long.
@malx122, @warpech, @miyconst, @dan31, @k-rus, @Starcounter-Jack
| True | Using shortnames in queries: problem with ambiguous names and mapped database views - When we are now moving to having several smaller apps running together that have their own mapped views of the database, the likelyhood of having more then one databasetype with the same shortname increases.
This poses a problem since it is really easy to break an working app, by starting another one that happen to have the same shortname for a dabasetype in their mapped model.
This has been mentioned before, but only briefly in shorter discussions, so I add it as an issue so we can discuss and come up with a solution.
As far as I can see there is not too many different options to solve this. I can come up with two:
- The shortnames are resolved by application, so that each app has its own local database views usable with shortnames. Not sure if this is even possible to implement.
- Require all database types to be specified with a fully qualified name in queries. This will be the simplest solution and will make it certain that all queries will work. The downside is of course that you have to write the full name, which can be quite long.
@malx122, @warpech, @miyconst, @dan31, @k-rus, @Starcounter-Jack
| non_test | using shortnames in queries problem with ambiguous names and mapped database views when we are now moving to having several smaller apps running together that have their own mapped views of the database the likelyhood of having more then one databasetype with the same shortname increases this poses a problem since it is really easy to break an working app by starting another one that happen to have the same shortname for a dabasetype in their mapped model this has been mentioned before but only briefly in shorter discussions so i add it as an issue so we can discuss and come up with a solution as far as i can see there is not too many different options to solve this i can come up with two the shortnames are resolved by application so that each app have there local database views usable with shortnames not sure if this is even possible to implement require all database types to be specified with fully qualified name in queries this will be the simplest solution and will make it certain that all queries will work the downside is of course that you have to write the fullname which can be quite long warpech miyconst k rus starcounter jack | 0 |
40,091 | 10,450,689,814 | IssuesEvent | 2019-09-19 11:08:48 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Support the Ubuntu Cross Toolchain | area: Build System feature request | **Is your feature request related to a problem? Please describe.**
It would be helpful for many Ubuntu users, and also for continuous integration (CI), to be able to use Ubuntu's `gcc-arm-none-eabi` cross-compiler for building zephyr for arm microcontrollers.
In particular, many CI loops use Ubuntu's cross-compiler because it is so easy to install from Ubuntu's default package repository in many containerized and non-containerized build environments.
It's as easy as `apt-get update && apt install -y gcc-arm-none-eabi`.
Currently, I believe that only `gnuarmemb` and `crosstool-ng` toolchains are supported. In both cases, it's possible that downloading & installing them might require significantly more time and bandwidth than installing the Ubuntu toolchain. In particular, Travis CI hosts their own Ubuntu package repository, and so it would be very quick to install from that source.
**Describe the solution you'd like**
Support for Ubuntu's toolchain along with a small bit of documentation on how to install it in the "3rd Party Toolchains" section.
**Describe alternatives you've considered**
I've used the gnuarmemb toolchain, but it does add some overhead to my CI process. | 1.0 | Support the Ubuntu Cross Toolchain - **Is your feature request related to a problem? Please describe.**
It would be helpful for many Ubuntu users, and also for continuous integration (CI), to be able to use Ubuntu's `gcc-arm-none-eabi` cross-compiler for building zephyr for arm microcontrollers.
In particular, many CI loops use Ubuntu's cross-compiler because it is so easy to install from Ubuntu's default package repository in many containerized and non-containerized build environments.
It's as easy as `apt-get update && apt install -y gcc-arm-none-eabi`.
Currently, I believe that only `gnuarmemb` and `crosstool-ng` toolchains are supported. In both cases, it's possible that downloading & installing them might require significantly more time and bandwidth than installing the Ubuntu toolchain. In particular, Travis CI hosts their own Ubuntu package repository, and so it would be very quick to install from that source.
**Describe the solution you'd like**
Support for Ubuntu's toolchain along with a small bit of documentation on how to install it in the "3rd Party Toolchains" section.
**Describe alternatives you've considered**
I've used the gnuarmemb toolchain, but it does add some overhead to my CI process. | non_test | support the ubuntu cross toolchain is your feature request related to a problem please describe it would be helpful for many ubuntu users and also for continuous integration ci to be able to use ubuntu s gcc arm none eabi cross compiler for building zephyr for arm microcontrollers in particular many ci loops use ubuntu s cross compiler because it is so easy to install from ubuntu s default package repository in many containerized and non containerized build environments it s as easy as apt get update apt install y gcc arm none eabi currently i believe that only gnuarmemb and crosstool ng toolchains are supported in both cases it s possible that downloading installing them might require significantly more time and bandwidth than installing the ubuntu toolchain in particular travis ci hosts their own ubuntu package repository and so it would be very quick to install from that source describe the solution you d like support for ubuntu s toolchain along with a small bit of documentation on how to install it in the party toolchains section describe alternatives you ve considered i ve used the gnuarmemb toolchain but it does add some overhead to my ci process | 0 |
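Supporting a distro-packaged cross toolchain as requested above usually starts with locating the prefixed binaries on PATH. A minimal, hedged Python sketch of that detection step (not Zephyr's actual toolchain logic; the lookup is injectable so it can be tested without a compiler installed):

```python
import shutil

def find_cross_gcc(prefixes=("arm-none-eabi-",), which=shutil.which):
    """Return (prefix, path) for the first '<prefix>gcc' found on PATH,
    or None if no candidate cross-compiler is installed."""
    for prefix in prefixes:
        path = which(prefix + "gcc")
        if path is not None:
            return prefix, path
    return None
```

On Ubuntu, `apt install gcc-arm-none-eabi` places `arm-none-eabi-gcc` in `/usr/bin`, so a detection routine along these lines would find it without any separate toolchain download.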
278,067 | 24,121,807,008 | IssuesEvent | 2022-09-20 19:26:27 | mozilla-mobile/mobile-test-eng | https://api.github.com/repos/mozilla-mobile/mobile-test-eng | closed | [META] Deprecate old GCP / Firebase resources | infra:ui-test maintenance META | DESCRIPTION
our team should take responsibility for deprecating and decommissioning mobile GCP projects.
NOTES
- This would likely only be former focus-android project, any projects used for Rocket (Firefox Lite) and Amazon
- Re-confirm w/ Stefan before decommissioning Rocket projects
PROJECTS-TO-BE-DEL
1. moz-fx-mobile-firebase-testlab
2. moz-demo-project
3. moz-demo-project2
4. moz-firefox-ios?
5. moz-l10n-screenshots
6. moz-firefox-tv
MISC
1. Remove all old keys
2. Set (and document!) data retention threshold for Focus-android
3. Revisit 9 mo data retention for fenix - is 3? 6? mos of Firebase data enough? | 1.0 | [META] Deprecate old GCP / Firebase resources - DESCRIPTION
our team should take responsibility for deprecating and decommissioning mobile GCP projects.
NOTES
- This would likely only be the former focus-android project, plus any projects used for Rocket (Firefox Lite) and Amazon
- Re-confirm w/ Stefan before decommissioning Rocket projects
PROJECTS-TO-BE-DEL
1. moz-fx-mobile-firebase-testlab
2. moz-demo-project
3. moz-demo-project2
4. moz-firefox-ios?
5. moz-l10n-screenshots
6. moz-firefox-tv
MISC
1. Remove all old keys
2. Set (and document!) data retention threshold for Focus-android
3. Revisit 9 mo data retention for fenix - is 3? 6? mos of Firebase data enough? | test | deprecate old gcp firebase resources description our team should take responsibility for deprecating and mobile gcp projects notes this would likely only be former focus android project any projects used for rocket firefox lite and amazon re confirm w stefan before decommissioning rocket projects projects to be del moz fx mobile firebase testlab moz demo project moz demo moz firefox ios moz screenshots moz firefox tv misc remove all old keys set and document data retention threshold for focus android revisit mo data retention for fenix is mos of firebase data enough | 1 |
123,315 | 10,263,919,298 | IssuesEvent | 2019-08-22 15:16:19 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | closed | TestJcmd_0 | comp:vm test failure | All extended testing.
```
FAILED: testJcmdHelps
java.lang.AssertionError: Help text corrupt: [Usage : jcmd <vmid> <arguments>, , -J : supply arguments to the Java VM running jcmd, -l : list JVM processes on the local machine, -h : print this help message, , <vmid> : Attach API VM ID as shown in jps or other Attach API-based tools, , arguments:, help : print the list of diagnostic commands, help <command> : print help for the specific command, <command> [command arguments] : command from the list returned by "help", , list JVM processes on the local machine. Default behavior when no options are specified., , NOTE: this utility may significantly affect the performance of the target JVM., The available diagnostic commands are determined by, the target VM and may vary between VMs.] expected [true] but found [false]
at org.testng.Assert.fail(Assert.java:96)
at org.testng.Assert.failNotEquals(Assert.java:776)
at org.testng.Assert.assertTrue(Assert.java:44)
at org.openj9.test.attachAPI.TestJcmd.testJcmdHelps(TestJcmd.java:87)
``` | 1.0 | TestJcmd_0 - All extended testing.
```
FAILED: testJcmdHelps
java.lang.AssertionError: Help text corrupt: [Usage : jcmd <vmid> <arguments>, , -J : supply arguments to the Java VM running jcmd, -l : list JVM processes on the local machine, -h : print this help message, , <vmid> : Attach API VM ID as shown in jps or other Attach API-based tools, , arguments:, help : print the list of diagnostic commands, help <command> : print help for the specific command, <command> [command arguments] : command from the list returned by "help", , list JVM processes on the local machine. Default behavior when no options are specified., , NOTE: this utility may significantly affect the performance of the target JVM., The available diagnostic commands are determined by, the target VM and may vary between VMs.] expected [true] but found [false]
at org.testng.Assert.fail(Assert.java:96)
at org.testng.Assert.failNotEquals(Assert.java:776)
at org.testng.Assert.assertTrue(Assert.java:44)
at org.openj9.test.attachAPI.TestJcmd.testJcmdHelps(TestJcmd.java:87)
``` | test | testjcmd all extended testing failed testjcmdhelps java lang assertionerror help text corrupt command from the list returned by help list jvm processes on the local machine default behavior when no options are specified note this utility may significantly affect the performance of the target jvm the available diagnostic commands are determined by the target vm and may vary between vms expected but found at org testng assert fail assert java at org testng assert failnotequals assert java at org testng assert asserttrue assert java at org test attachapi testjcmd testjcmdhelps testjcmd java | 1 |
48,174 | 5,948,045,218 | IssuesEvent | 2017-05-26 10:07:58 | LDMW/app | https://api.github.com/repos/LDMW/app | closed | Feedback page inputs go into spreadsheet | please-test T1d T4h technical | As a London Minds admin, I would like to be able to see the feedback on the application (captured in the Feedback Page #131) in a quick and easy way so that I can digest the information.
## Acceptance Criteria
+ [x] When a user enters data in the feedback page #131, upon pressing the submit button, that data is visible in a google spreadsheet. | 1.0 | Feedback page inputs go into spreadsheet - As a London Minds admin, I would like to be able to see the feedback on the application (captured in the Feedback Page #131) in a quick and easy way so that I can digest the information.
## Acceptance Criteria
+ [x] When a user enters data in the feedback page #131, upon pressing the submit button, that data is visible in a google spreadsheet. | test | feedback page inputs go into spreadsheet as a london minds admin i would like to be able to see the feedback on the application captured in the feedback page in a quick and easy way so that i can digest the information acceptance criteria when a user enters data in the feedback page upon pressing the submit button that data is visible in a google spreadsheet | 1 |
57,561 | 6,550,730,953 | IssuesEvent | 2017-09-05 12:21:06 | EyeSeeTea/QAApp | https://api.github.com/repos/EyeSeeTea/QAApp | closed | bb and regular version of HNQIS on same device | testing | It seems that it is no longer possible to keep both apps (bb and regular version as downloaded from gPlay) on a device any more. When we have downloaded bb#59, we have been asked to remove the other version of HNQIS from the device.
This wasn't happening before. Reporting this in case this is something you want to fix (?). | 1.0 | bb and regular version of HNQIS on same device - It seems that it is no longer possible to keep both apps (bb and regular version as downloaded from gPlay) on a device any more. When we have downloaded bb#59, we have been asked to remove the other version of HNQIS from the device.
This wasn't happening before. Reporting this in case this is something you want to fix (?). | test | bb and regular version of hnqis on same device it seems that it is no longer possible to keep both apps bb and regular version as downloaded from gplay on a device any more when we have downloaded bb we have been asked to remove the other version of hnqis from the device this wasn t happening before reporting this in case this is something you want to fix | 1 |
361,058 | 25,323,319,251 | IssuesEvent | 2022-11-18 06:53:31 | andrewcargill/johans_eco_timber | https://api.github.com/repos/andrewcargill/johans_eco_timber | opened | Optimal image sizes | documentation | As a Owner I want the site to perform well so that users are not distracted by slow loading pages
Criteria:
All oversized images should be resized
All images should use a good compression format
All images should show on all screen sizes
| 1.0 | Optimal image sizes - As a Owner I want the site to perform well so that users are not distracted by slow loading pages
Criteria:
All oversized images should be resized
All images should use a good compression format
All images should show on all screen sizes
| non_test | optimal image sizes as a owner i want the site to perform well so that users are not distracted by slow loading pages criteria all oversized images should be resized all images should use a good compression format all images should show on all screen sizes | 0 |
342,393 | 30,619,424,117 | IssuesEvent | 2023-07-24 07:09:47 | iamlogand/republic-of-rome-online | https://api.github.com/repos/iamlogand/republic-of-rome-online | closed | Unit test for an API call | Testing | Add a unit test to the backend using Django’s test-execution framework and REST framework's test classes. | 1.0 | Unit test for an API call - Add a unit test to the backend using Django’s test-execution framework and REST framework's test classes. | test | unit test for an api call add a unit test to the backend using django’s test execution framework and rest framework s test classes | 1 |
609,869 | 18,889,625,055 | IssuesEvent | 2021-11-15 11:45:16 | enviroCar/enviroCar-app | https://api.github.com/repos/enviroCar/enviroCar-app | closed | Toggle button in Adapter Selection does not change state on deny | bug 3 - Done Priority - 3 - Low | **Describe the bug**
On trying to enable bluetooth from the app the toggle button does not change it state to off if we click on deny on anywhere on screen.
**To Reproduce**
Steps to reproduce the behavior:
1. Click on OBD selection in Dasboard fragment.
2. Click on toggle button.
3. Click on deny or anywhere on screen.
4. Toggle button switches to ON state rather it should be in off state if deny or nothing is selected.
**Expected behavior**
Reset toggle button to off state if no selection is made or deny is clicked. Also we can add ON /OFF instead of simple toggle button.
| 1.0 | Toggle button in Adapter Selection does not change state on deny - **Describe the bug**
On trying to enable bluetooth from the app the toggle button does not change it state to off if we click on deny on anywhere on screen.
**To Reproduce**
Steps to reproduce the behavior:
1. Click on OBD selection in Dasboard fragment.
2. Click on toggle button.
3. Click on deny or anywhere on screen.
4. Toggle button switches to ON state rather it should be in off state if deny or nothing is selected.
**Expected behavior**
Reset toggle button to off state if no selection is made or deny is clicked. Also we can add ON /OFF instead of simple toggle button.
| non_test | toggle button in adapter selection does not change state on deny describe the bug on trying to enable bluetooth from the app the toggle button does not change it state to off if we click on deny on anywhere on screen to reproduce steps to reproduce the behavior click on obd selection in dasboard fragment click on toggle button click on deny or anywhere on screen toggle button switches to on state rather it should be in off state if deny or nothing is selected expected behavior reset toggle button to off state if no selection is made or deny is clicked also we can add on off instead of simple toggle button | 0 |
323,886 | 23,971,747,744 | IssuesEvent | 2022-09-13 08:20:39 | wirDesign-communication-AG/wirHub-doc | https://api.github.com/repos/wirDesign-communication-AG/wirHub-doc | closed | v2.4.2 | documentation | Voraussichtlicher Release 12.09.
## Bug
- [x] #336
- [x] #338
- [x] #339
- [x] #341
- [x] #343
- [x] #345
## Enhancement
- [x] #340
| 1.0 | v2.4.2 - Voraussichtlicher Release 12.09.
## Bug
- [x] #336
- [x] #338
- [x] #339
- [x] #341
- [x] #343
- [x] #345
## Enhancement
- [x] #340
| non_test | voraussichtlicher release bug enhancement | 0 |
194,014 | 14,667,245,319 | IssuesEvent | 2020-12-29 18:08:52 | Thy-Vipe/BeastsOfBermuda-issues | https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues | closed | [Bug] Sliding on land after darting | Animation Fixed! Potential fix bug tester-team | _Originally written by **TripTrap | 76561198378851871**_
Game Version: 1.1.1076
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070 Ti
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **Lurdu and Ichthy**
Map: Rival_Shores
*Expected Results:* Animation replicating properly to other clients.
*Actual Results:* Lurdu slide backwards while turning on land after darting.
*Replication:* Go ichthy or lurdu, two players. Now watch one turn on land, note that it will replicate the turn properly. Now dart in water onto land and turn on land again. Note that the creature will now be sliding around and sliding backwards. | 1.0 | [Bug] Sliding on land after darting - _Originally written by **TripTrap | 76561198378851871**_
Game Version: 1.1.1076
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070 Ti
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **Lurdu and Ichthy**
Map: Rival_Shores
*Expected Results:* Animation replicating properly to other clients.
*Actual Results:* Lurdu slide backwards while turning on land after darting.
*Replication:* Go ichthy or lurdu, two players. Now watch one turn on land, note that it will replicate the turn properly. Now dart in water onto land and turn on land again. Note that the creature will now be sliding around and sliding backwards. | test | sliding on land after darting originally written by triptrap game version system specs cpu brand intel r core tm cpu vendor genuineintel gpu brand nvidia geforce gtx ti gpu driver info unknown num cpu cores context lurdu and ichthy map rival shores expected results animation replicating properly to other clients actual results lurdu slide backwards while turning on land after darting replication go ichthy or lurdu two players now watch one turn on land note that it will replicate the turn properly now dart in water onto land and turn on land again note that the creature will now be sliding around and sliding backwards | 1 |
53,534 | 6,334,029,482 | IssuesEvent | 2017-07-26 15:50:01 | wordpress-mobile/WordPress-iOS | https://api.github.com/repos/wordpress-mobile/WordPress-iOS | closed | Improve the PostServiceRemote tests | Network/Sync Testing [Pri] Low | Let's add some test coverage and stub out the network calls for the following endpoints:
For REST:
- `sites/$site/posts`
- `sites/$site/posts/new?context=edit"`
- `sites/$site/posts/$post?context=edit`
- `sites/$site/posts/$post/delete`
- `sites/$site/posts/$post/restore`
For XMLRPC (methods):
- `wp.getPost`
- `wp.getPosts`
- `metaWeblog.newPost`
- `metaWeblog.editPost`
- `wp.deletePost`
| 1.0 | Improve the PostServiceRemote tests - Let's add some test coverage and stub out the network calls for the following endpoints:
For REST:
- `sites/$site/posts`
- `sites/$site/posts/new?context=edit"`
- `sites/$site/posts/$post?context=edit`
- `sites/$site/posts/$post/delete`
- `sites/$site/posts/$post/restore`
For XMLRPC (methods):
- `wp.getPost`
- `wp.getPosts`
- `metaWeblog.newPost`
- `metaWeblog.editPost`
- `wp.deletePost`
| test | improve the postserviceremote tests let s add some test coverage and stub out the network calls for the following endpoints for rest sites site posts sites site posts new context edit sites site posts post context edit sites site posts post delete sites site posts post restore for xmlrpc methods wp getpost wp getposts metaweblog newpost metaweblog editpost wp deletepost | 1 |
23,573 | 4,028,430,647 | IssuesEvent | 2016-05-18 06:16:51 | nir0s/distro | https://api.github.com/repos/nir0s/distro | closed | Testcase for Scientific Linux is missing | area: test help wanted | Scientific Linux (ID: `scientific`) is one of the distros with a reliable ID. A testcase should be added.
If you want to help, it is sufficient to post to this issue:
* The content of the `/etc/os-release` file, if any.
* The file names and content of the `/etc/*release` and `/etc/*version` files, if any.
* The output of the command: `lsb_release -a`, if available. | 1.0 | Testcase for Scientific Linux is missing - Scientific Linux (ID: `scientific`) is one of the distros with a reliable ID. A testcase should be added.
If you want to help, it is sufficient to post to this issue:
* The content of the `/etc/os-release` file, if any.
* The file names and content of the `/etc/*release` and `/etc/*version` files, if any.
* The output of the command: `lsb_release -a`, if available. | test | testcase for scientific linux is missing scientific linux id scientific is one of the distros with a reliable id a testcase should be added if you want to help it is sufficient to post to this issue the content of the etc os release file if any the file names and content of the etc release and etc version files if any the output of the command lsb release a if available | 1 |
176,710 | 6,564,280,378 | IssuesEvent | 2017-09-08 00:21:22 | OpenBazaar/openbazaar-desktop | https://api.github.com/repos/OpenBazaar/openbazaar-desktop | closed | Free shipping filter not working | bug Medium Priority | Looks like the client might not be detecting ALL in the free shipping field in the index | 1.0 | Free shipping filter not working - Looks like the client might not be detecting ALL in the free shipping field in the index | non_test | free shipping filter not working looks like the client might not be detecting all in the free shipping field in the index | 0 |
342,359 | 30,617,123,035 | IssuesEvent | 2023-07-24 04:51:49 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix general.test_clip_matrix_norm | Sub Task Failing Test | | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix general.test_clip_matrix_norm - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5640828524/job/15277993042"><img src=https://img.shields.io/badge/-success-success></a>
| test | fix general test clip matrix norm jax a href src numpy a href src torch a href src tensorflow a href src paddle a href src | 1 |
192,656 | 15,355,224,352 | IssuesEvent | 2021-03-01 10:48:30 | 99003578/mahavira_team1_calculator | https://api.github.com/repos/99003578/mahavira_team1_calculator | closed | Issues in Low level Requirements_by_3581 | documentation medium | ADDITION:
Didn't mention which type of inputs(integers or floating point numbers ) should be given in the low level requirements.
SUBTRACTION:
Didn't mention which type of inputs(integers or floating point numbers ) should be given in the low level requirements.
MULTIPLICATION:
Didn't mention which type of inputs(integers or floating point numbers) should be given for multiplication operation in the low level requirements.
DIVISION:
No clear mentioning of input types(integers or floating point numbers) for division operation in the low level requirements.
Nth Power:
The type of input for N is not mentioned
The type of number for which the Nth power is calculated is not mentioned
SQUARE:
Didn't mention which type of input(integer or floating point number ) should be given for square operation in the low level requirements.
| 1.0 | Issues in Low level Requirements_by_3581 - ADDITION:
Didn't mention which type of inputs(integers or floating point numbers ) should be given in the low level requirements.
SUBTRACTION:
Didn't mention which type of inputs(integers or floating point numbers ) should be given in the low level requirements.
MULTIPLICATION:
Didn't mention which type of inputs(integers or floating point numbers) should be given for multiplication operation in the low level requirements.
DIVISION:
No clear mentioning of input types(integers or floating point numbers) for division operation in the low level requirements.
Nth Power:
The type of input for N is not mentioned
The type of number for which the Nth power is calculated is not mentioned
SQUARE:
Didn't mention which type of input(integer or floating point number ) should be given for square operation in the low level requirements.
| non_test | issues in low level requirements by addition didn t mention which type of inputs integers or floating point numbers should be given in the low level requirements subtraction didn t mention which type of inputs integers or floating point numbers should be given in the low level requirements multiplication didn t mention which type of inputs integers or floating point numbers should be given for multiplication operation in the low level requirements division no clear mentioning of input types integers or floating point numbers for division operation in the low level requirements nth power the type of input for n is not mentioned the type of number for which the nth power is calculated is not mentioned square didn t mention which type of input integer or floating point number should be given for square operation in the low level requirements | 0 |
4,897 | 25,139,035,265 | IssuesEvent | 2022-11-09 21:16:51 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Opening the table inspector should not clear the cell selection | type: bug work: frontend status: ready restricted: maintainers | ## Steps to reproduce
1. Open a table page.
1. Close the table inspector.
1. Select one cell.
1. Open the table inspector.
1. Expect the cell to remain selected once the table inspector is opened.
1. Instead, observe that opening the table inspector has cleared the cell selection.
CC: @rajatvijay
| True | Opening the table inspector should not clear the cell selection - ## Steps to reproduce
1. Open a table page.
1. Close the table inspector.
1. Select one cell.
1. Open the table inspector.
1. Expect the cell to remain selected once the table inspector is opened.
1. Instead, observe that opening the table inspector has cleared the cell selection.
CC: @rajatvijay
| non_test | opening the table inspector should not clear the cell selection steps to reproduce open a table page close the table inspector select one cell open the table inspector expect the cell to remain selected once the table inspector is opened instead observe that opening the table inspector has cleared the cell selection cc rajatvijay | 0 |
70,003 | 7,168,577,444 | IssuesEvent | 2018-01-30 01:15:53 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available | priority/failing-test | The rescheduler test `[sig-scheduling] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available` has failed for a long time.
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:55
Expected error:
<*errors.errorString | 0xc420eccb70>: {
s: "error while scaling RC kubernetes-dashboard to 2 replicas: Scaling the resource failed with: unable to get client for deployments.extensions: unable to get full preferred group-version-resource for deployments.extensions: the cache has not been filled yet",
}
error while scaling RC kubernetes-dashboard to 2 replicas: Scaling the resource failed with: unable to get client for deployments.extensions: unable to get full preferred group-version-resource for deployments.extensions: the cache has not been filled yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:73
```
Let's either fix this test or skip it.
@kubernetes/sig-scheduling-misc | 1.0 | Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available - The rescheduler test `[sig-scheduling] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available` has failed for a long time.
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:55
Expected error:
<*errors.errorString | 0xc420eccb70>: {
s: "error while scaling RC kubernetes-dashboard to 2 replicas: Scaling the resource failed with: unable to get client for deployments.extensions: unable to get full preferred group-version-resource for deployments.extensions: the cache has not been filled yet",
}
error while scaling RC kubernetes-dashboard to 2 replicas: Scaling the resource failed with: unable to get client for deployments.extensions: unable to get full preferred group-version-resource for deployments.extensions: the cache has not been filled yet
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/rescheduler.go:73
```
Let's either fix this test or skip it.
@kubernetes/sig-scheduling-misc | test | rescheduler should ensure that critical pod is scheduled in case there is no resources available the rescheduler test rescheduler should ensure that critical pod is scheduled in case there is no resources available has failed for a long time go src io kubernetes output dockerized go src io kubernetes test scheduling rescheduler go expected error s error while scaling rc kubernetes dashboard to replicas scaling the resource failed with unable to get client for deployments extensions unable to get full preferred group version resource for deployments extensions the cache has not been filled yet error while scaling rc kubernetes dashboard to replicas scaling the resource failed with unable to get client for deployments extensions unable to get full preferred group version resource for deployments extensions the cache has not been filled yet not to have occurred go src io kubernetes output dockerized go src io kubernetes test scheduling rescheduler go let s either fix this test or skip it kubernetes sig scheduling misc | 1 |
274,746 | 20,866,193,611 | IssuesEvent | 2022-03-22 07:27:35 | vuestorefront/magento2 | https://api.github.com/repos/vuestorefront/magento2 | closed | Document layout is wrong for custom query | documentation triage-needed | ### Provide a description of requested docs changes

Expected: UI should be rendered into correct syntax block.
### Able to fix / change the documentation?
- [X] Yes
- [ ] No
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | Document layout is wrong for custom query - ### Provide a description of requested docs changes

Expected: UI should be rendered into correct syntax block.
### Able to fix / change the documentation?
- [X] Yes
- [ ] No
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_test | document layout is wrong for custom query provide a description of requested docs changes expected ui should be rendered into correct syntax block able to fix change the documentation yes no code of conduct i agree to follow this project s code of conduct | 0 |
99,324 | 8,697,316,360 | IssuesEvent | 2018-12-04 19:55:19 | PRUNERS/FLiT | https://api.github.com/repos/PRUNERS/FLiT | closed | Linker flag `-no-pie` not supported in older versions of clang | bug make python tests | ## Bug Report
**Describe the problem**
Older versions of clang (such as clang 3.8) do not have the flang `-no-pie`, but instead use `-nopie`. In modern versions they are aliased.
**Suggested Fix**
There is already a switch which disables this flag if we are using an older version of gcc. Simply adding another switch which checks if the compiler is clang would do.
| 1.0 | Linker flag `-no-pie` not supported in older versions of clang - ## Bug Report
**Describe the problem**
Older versions of clang (such as clang 3.8) do not have the flang `-no-pie`, but instead use `-nopie`. In modern versions they are aliased.
**Suggested Fix**
There is already a switch which disables this flag if we are using an older version of gcc. Simply adding another switch which checks if the compiler is clang would do.
| test | linker flag no pie not supported in older versions of clang bug report describe the problem older versions of clang such as clang do not have the flang no pie but instead use nopie in modern versions they are aliased suggested fix there is already a switch which disables this flag if we are using an older version of gcc simply adding another switch which checks if the compiler is clang would do | 1 |
184,955 | 32,077,739,533 | IssuesEvent | 2023-09-25 12:09:48 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Adding Software Privacy Statement and Terms of Use into About section of Settings | ui-team Design-QA E:Desktop Bugfixes 0.15 | Designs have been done for bringing the Status Software Privacy Statement and Terms of Use copy into the About section of Settings.
Some small tweaks have been made to the `About` page menu, clarifying which items are embedded in the app vs linked externally.
<img width="1440" alt="About" src="https://github.com/status-im/status-desktop/assets/110033914/178bf427-0d41-4f9c-a6b5-28c1aa3bdb65">
Status Software Privacy Statement and Terms of Use inside the application. NB as noted in the Figma, the Status Software Privacy Statement and Terms of Use page designs are representative to clarify formatting. They should not be used as a reference for the final copy for these pages. The final copy should be taken from the docs linked in the relevant place inside the Figma.
<img width="1906" alt="Screenshot 2023-09-18 at 14 06 41" src="https://github.com/status-im/status-desktop/assets/110033914/07947b62-8012-484e-81f5-4a83c16784eb">
Figma designs:
https://www.figma.com/file/idUoxN7OIW2Jpp3PMJ1Rl8/%E2%9A%99%EF%B8%8F-Settings-%7C-Desktop?type=design&node-id=18977-144942&mode=design&t=iRaNpFuK2LYieyvN-4 | 1.0 | Adding Software Privacy Statement and Terms of Use into About section of Settings - Designs have been done for bringing the Status Software Privacy Statement and Terms of Use copy into the About section of Settings.
Some small tweaks have been made to the `About` page menu, clarifying which items are embedded in the app vs linked externally.
<img width="1440" alt="About" src="https://github.com/status-im/status-desktop/assets/110033914/178bf427-0d41-4f9c-a6b5-28c1aa3bdb65">
Status Software Privacy Statement and Terms of Use inside the application. NB as noted in the Figma, the Status Software Privacy Statement and Terms of Use page designs are representative to clarify formatting. They should not be used as a reference for the final copy for these pages. The final copy should be taken from the docs linked in the relevant place inside the Figma.
<img width="1906" alt="Screenshot 2023-09-18 at 14 06 41" src="https://github.com/status-im/status-desktop/assets/110033914/07947b62-8012-484e-81f5-4a83c16784eb">
Figma designs:
https://www.figma.com/file/idUoxN7OIW2Jpp3PMJ1Rl8/%E2%9A%99%EF%B8%8F-Settings-%7C-Desktop?type=design&node-id=18977-144942&mode=design&t=iRaNpFuK2LYieyvN-4 | non_test | adding software privacy statement and terms of use into about section of settings designs have been done for bringing the status software privacy statement and terms of use copy into the about section of settings some small tweaks have been made to the about page menu clarifying which items are embedded in the app vs linked externally img width alt about src status software privacy statement and terms of use inside the application nb as noted in the figma the status software privacy statement and terms of use page designs are representative to clarify formatting they should not be used as a reference for the final copy for these pages the final copy should be taken from the docs linked in the relevant place inside the figma img width alt screenshot at src figma designs | 0 |
703,083 | 24,146,270,093 | IssuesEvent | 2022-09-21 19:02:55 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | [Forwardport v2.7] SUC doesn't appear to be installed in long-named v2prov clusters | priority/1 [zube]: QA Working QA/XS area/provisioning-v2 team/area2 | This is a forwardport issue for https://github.com/rancher/rancher/issues/36752, automatically created via rancherbot by @snasovich
Original issue description:
SUC does not appear to be successfully installed in a `v2prov` cluster with a long cluster name.
The `fleet-agent` has logs like:
```
time="2022-03-03T23:11:04Z" level=info msg="Deleting unknown bundle ID mcc-test-custom-three-node-managed-system-upgrade-controller, release cattle-system/mcc-test-custom-three-node-managed-system-upgra-f834d, expecting release cattle-system/mcc-test-custom-three-node-managed-system-upgrade-controller"
```
| 1.0 | [Forwardport v2.7] SUC doesn't appear to be installed in long-named v2prov clusters - This is a forwardport issue for https://github.com/rancher/rancher/issues/36752, automatically created via rancherbot by @snasovich
Original issue description:
SUC does not appear to be successfully installed in a `v2prov` cluster with a long cluster name.
The `fleet-agent` has logs like:
```
time="2022-03-03T23:11:04Z" level=info msg="Deleting unknown bundle ID mcc-test-custom-three-node-managed-system-upgrade-controller, release cattle-system/mcc-test-custom-three-node-managed-system-upgra-f834d, expecting release cattle-system/mcc-test-custom-three-node-managed-system-upgrade-controller"
```
| non_test | suc doesn t appear to be installed in long named clusters this is a forwardport issue for automatically created via rancherbot by snasovich original issue description suc does not appear to be successfully installed in a cluster with a long cluster name the fleet agent has logs like time level info msg deleting unknown bundle id mcc test custom three node managed system upgrade controller release cattle system mcc test custom three node managed system upgra expecting release cattle system mcc test custom three node managed system upgrade controller | 0 |
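The "unknown bundle" deletion in the fleet-agent log quoted in this record is a name-truncation mismatch: one component shortens the long name to fit a length limit and appends a short hash, while another still compares against the full name. A minimal Python sketch of that failure mode (the 53-character limit and the hash-suffix scheme are assumptions for illustration, not fleet's actual code):

```python
import hashlib

LIMIT = 53  # assumed release-name length limit for this illustration

def truncated_name(name: str, limit: int = LIMIT) -> str:
    """Shorten an over-long name by cutting it and appending a 5-char hash."""
    if len(name) <= limit:
        return name
    suffix = hashlib.sha256(name.encode()).hexdigest()[:5]
    return name[: limit - len(suffix) - 1] + "-" + suffix

full = "mcc-test-custom-three-node-managed-system-upgrade-controller"
deployed = truncated_name(full)

# The deployed (truncated) name no longer matches the expected full name,
# so a naive comparison flags the release as an unknown bundle.
print(deployed)          # mcc-test-custom-three-node-managed-system-upgra-<hash>
print(deployed == full)  # False
```

The comparison only stabilizes when both sides derive the name the same way (or truncation is avoided entirely), which is the general shape of the fix for this class of bug.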
190,319 | 15,227,220,525 | IssuesEvent | 2021-02-18 09:55:17 | hedgedoc/hedgedoc | https://api.github.com/repos/hedgedoc/hedgedoc | opened | Document supported Markdown flavour | type: documentation | We should document what flavor of Markdown we support.
We use [markdown-it](https://github.com/markdown-it/markdown-it), which supports CommonMark plus tables and strikethrough from [GitHub Flavored Markdown](https://github.github.com/gfm).
The following plugins are added (excerpt from 1.x `package.json`):
```
"markdown-it-abbr": "^1.0.4",
"markdown-it-container": "^3.0.0",
"markdown-it-deflist": "^2.0.1",
"markdown-it-emoji": "^2.0.0",
"markdown-it-footnote": "^3.0.1",
"markdown-it-imsize": "^2.0.1",
"markdown-it-ins": "^3.0.0",
"markdown-it-mark": "^3.0.0",
"markdown-it-mathjax": "^2.0.0",
"markdown-it-regexp": "^0.4.0",
"markdown-it-sub": "^1.0.0",
"markdown-it-sup": "^1.0.0",
```
This might bring us closer to full GFM-compatibility, but also has many more extensions, so we probably end up with a weird superset of CommonMark, that might be GFM but not really.
I would like to have a table in the docs that lists the various features of CommonMark, GFM and HedgeDocMark™, so the compatibility is made clear.
We use [markdown-it](https://github.com/markdown-it/markdown-it), which supports CommonMark plus tables and strikethrough from [GitHub Flavored Markdown](https://github.github.com/gfm).
The following plugins are added (excerpt from 1.x `package.json`):
```
"markdown-it-abbr": "^1.0.4",
"markdown-it-container": "^3.0.0",
"markdown-it-deflist": "^2.0.1",
"markdown-it-emoji": "^2.0.0",
"markdown-it-footnote": "^3.0.1",
"markdown-it-imsize": "^2.0.1",
"markdown-it-ins": "^3.0.0",
"markdown-it-mark": "^3.0.0",
"markdown-it-mathjax": "^2.0.0",
"markdown-it-regexp": "^0.4.0",
"markdown-it-sub": "^1.0.0",
"markdown-it-sup": "^1.0.0",
```
This might bring us closer to full GFM-compatibility, but also has many more extensions, so we probably end up with a weird superset of CommonMark, that might be GFM but not really.
I would like to have a table in the docs that list the various features of CommonMark, GFM and HedgeDocMark™, so the compatibility is made clear. | non_test | document supported markdown flavour we should document what flavor of markdown we support we use which supports commonmark plus tables and strikethrough from the following plugins are added excerpt from x package json markdown it abbr markdown it container markdown it deflist markdown it emoji markdown it footnote markdown it imsize markdown it ins markdown it mark markdown it mathjax markdown it regexp markdown it sub markdown it sup this might bring us closer to full gfm compatibility but also has many more extensions so we probably end up with a weird superset of commonmark that might be gfm but not really i would like to have a table in the docs that list the various features of commonmark gfm and hedgedocmark™ so the compatibility is made clear | 0 |
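The record above asks for a docs table comparing CommonMark, GFM, and HedgeDoc feature support. Such a matrix is easy to generate mechanically once the data is collected; a small Python sketch (the feature rows below are hypothetical placeholders, not an audited compatibility survey):

```python
# Each row: (feature, supported in CommonMark, in GFM, in HedgeDoc).
# Placeholder values for illustration only.
FEATURES = [
    ("Tables",           False, True,  True),
    ("Strikethrough",    False, True,  True),
    ("Footnotes",        False, False, True),
    ("Definition lists", False, False, True),
]

def render_matrix(features) -> str:
    """Render the feature matrix as a GFM-style pipe table."""
    mark = lambda ok: "✓" if ok else "✗"
    lines = [
        "| Feature | CommonMark | GFM | HedgeDoc |",
        "| --- | --- | --- | --- |",
    ]
    for name, cm, gfm, hd in features:
        lines.append(f"| {name} | {mark(cm)} | {mark(gfm)} | {mark(hd)} |")
    return "\n".join(lines)

print(render_matrix(FEATURES))
```

Keeping the matrix as data and rendering the table from it makes the docs easy to update as plugin coverage changes.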
534,718 | 15,647,507,380 | IssuesEvent | 2021-03-23 03:27:14 | DreamExposure/DisCal-Discord-Bot | https://api.github.com/repos/DreamExposure/DisCal-Discord-Bot | closed | Assign a role on Event RSVP | Module: Bot Priority: Immediate Action Needed enhancement | **Is your feature request related to a problem? Please describe.**
No, but would tie into issue #52 nicely.
**Describe the solution you'd like**
The ability for the bot to assign a role to users who RSVP to an event.
**Describe alternatives you've considered**
The functionality could be gained once #52 is complete by using a combination of DisCal and a dedicated reaction role bot like Zira.
**Additional context**
There are many good use cases for an RSVP also assigning a role. Access to special events, or even recurring events could be easily regulated. Announcement subscription pings could be sent to the role assigned by the bot too.
No, but would tie into issue #52 nicely.
**Describe the solution you'd like**
The ability for the bot to assign a role to users who RSVP to an event.
**Describe alternatives you've considered**
The functionality could be gained once #52 is complete by using a combination of DisCal and a dedicated reaction role bot like Zira.
**Additional context**
There are many good use cases for an RSVP also assigning a role. Access to special events, or even recurring events could be easily regulated. Announcement subscription pings could be sent to the role assigned by the bot too. | non_test | assign a role on event rsvp is your feature request related to a problem please describe no but would tie into issue nicely describe the solution you d like the ability for the bot to assign a role to users who rsvp to an event describe alternatives you ve considered the functionality could be gained once is complete by using a combination of discal and a dedicated reaction role bot like zira additional context there are many good use cases for an rsvp also assigning a role access to special events or even recurring events could be easily regulated announcement subscription pings could be sent to the role assigned by the bot too | 0
176,836 | 13,654,805,869 | IssuesEvent | 2020-09-27 19:09:50 | RGPosadas/Mull | https://api.github.com/repos/RGPosadas/Mull | opened | AT-1.1: Create Events | acceptance tests | **Issue Tracking**
This acceptance test is for #27.
See the test running [here](link to e2e GIF).
**User Acceptance Flow**
1. User logs in
1. User presses "Create Event" button on the navigation bar
1. User fills in an event form (without uploading photo and location)
a. If user submits invalid data, then error messages show up
1. User submits event form
1. Event is created and is shown on their "My Events" tab, as well as "Upcoming" tab | 1.0 | AT-1.1: Create Events - **Issue Tracking**
This acceptance test is for #27.
See the test running [here](link to e2e GIF).
**User Acceptance Flow**
1. User logs in
1. User presses "Create Event" button on the navigation bar
1. User fills in an event form (without uploading photo and location)
a. If user submits invalid data, then error messages show up
1. User submits event form
1. Event is created and is shown on their "My Events" tab, as well as "Upcoming" tab | test | at create events issue tracking this acceptance test is for see the test running link to gif user acceptance flow user logs in user presses create event button on the navigation bar user fills in an event form without uploading photo and location a if user submits invalid data then error messages show up user submits event form event is created and is shown on their my events tab as well as upcoming tab | 1 |
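Step 3a of the acceptance flow above (invalid data produces error messages) is the classic validate-before-submit pattern. A hypothetical Python sketch; the field names and rules are assumptions for illustration, not Mull's actual event-form schema:

```python
def validate_event_form(form: dict) -> dict:
    """Return a dict of field -> error message; an empty dict means valid."""
    errors = {}
    if not form.get("title", "").strip():
        errors["title"] = "Title is required"
    if not form.get("description", "").strip():
        errors["description"] = "Description is required"
    if form.get("capacity") is not None and form["capacity"] <= 0:
        errors["capacity"] = "Capacity must be a positive number"
    return errors

print(validate_event_form({"title": "Hike", "description": "Trail walk"}))  # {}
print(validate_event_form({"title": " ", "capacity": 0}))  # all three fields flagged
```

The e2e test for AT-1.1 drives the UI, but rules like these can also be unit-tested directly against the validation function.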
81,947 | 7,807,944,400 | IssuesEvent | 2018-06-11 18:32:17 | FoxyLinkIO/FoxyLink | https://api.github.com/repos/FoxyLinkIO/FoxyLink | closed | Rework the FL_Messages.DeserializeContext function | enhancement in testing urgent / important | The `Код` \ `Code` of the message should also be accepted as input | 1.0 | Rework the FL_Messages.DeserializeContext function - The `Код` \ `Code` of the message should also be accepted as input | test | доработать функцию fl messages deserializecontext так же на вход должен приниматься код code сообщения | 1
169,816 | 20,841,935,779 | IssuesEvent | 2022-03-21 01:53:44 | UpendoVentures/generator-upendodnn | https://api.github.com/repos/UpendoVentures/generator-upendodnn | opened | CVE-2022-24772 (High) detected in node-forge-0.10.0.tgz | security vulnerability | ## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /generators/mvc-spa/templates/package.json</p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.3.tgz (Root Library)
- selfsigned-1.10.14.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-24772 (High) detected in node-forge-0.10.0.tgz - ## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /generators/mvc-spa/templates/package.json</p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.3.tgz (Root Library)
- selfsigned-1.10.14.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file generators mvc spa templates package json path to vulnerable library generators mvc spa templates node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch master vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code does not check for tailing garbage bytes after decoding a digestinfo asn structure this can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource | 0 |
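The flaw described in this record is a lenient padding check: after stripping the PKCS#1 v1.5 padding, trailing bytes after the DigestInfo were not rejected, so padding could be shortened and garbage appended when a low public exponent is in use. A simplified, stdlib-only Python illustration of lenient vs. strict checking (a conceptual model, not node-forge's actual code):

```python
# Conceptual model of the PKCS#1 v1.5 check described above -- NOT node-forge's
# actual code. EM = 0x00 0x01 <0xFF padding> 0x00 <DigestInfo> [<garbage>].

def payload_after_padding(em: bytes) -> bytes:
    """Strip the 0x00 0x01 0xFF..0xFF 0x00 prefix and return what follows."""
    if em[:2] != b"\x00\x01":
        raise ValueError("bad padding header")
    sep = em.index(b"\x00", 2)  # first 0x00 after the 0xFF run
    return em[sep + 1:]

def lenient_check(em: bytes, digest_info: bytes) -> bool:
    # Vulnerable pattern: accept if DigestInfo is merely a prefix,
    # silently tolerating trailing garbage bytes.
    return payload_after_padding(em).startswith(digest_info)

def strict_check(em: bytes, digest_info: bytes) -> bool:
    # Fixed pattern: the payload must be exactly the DigestInfo.
    return payload_after_padding(em) == digest_info

digest_info = b"\x30\x31-digestinfo-stand-in"
good = b"\x00\x01" + b"\xff" * 8 + b"\x00" + digest_info
forged = good + b"garbage"

print(lenient_check(forged, digest_info))  # True  -> forgery accepted (the bug)
print(strict_check(forged, digest_info))   # False -> forgery rejected
```

With a low public exponent an attacker has enough freedom to craft such a message, which is why the advisory's remedy is upgrading to node-forge 1.3.0 rather than a workaround.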
27,952 | 5,413,750,375 | IssuesEvent | 2017-03-01 17:25:21 | CleverRaven/Cataclysm-DDA | https://api.github.com/repos/CleverRaven/Cataclysm-DDA | closed | Issue with removing bionics | Bionics Documentation Enhancement | I tried removing my faulty bionics, I had a trench knife, a combat knife and 5 first aid kits.
However, upon selecting my bionic to remove it told me it needed the tools first.
I later got a pair of scissors, and then I could do the removal. I suppose this is not intended?
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/4276256-issue-with-removing-bionics?utm_campaign=plugin&utm_content=tracker%2F146201&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F146201&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| 1.0 | Issue with removing bionics - I tried removing my faulty bionics, I had a trench knife, a combat knife and 5 first aid kits.
However, upon selecting my bionic to remove it told me it needed the tools first.
I later got a pair of scissors, and then I could do the removal. I suppose this is not intended?
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/4276256-issue-with-removing-bionics?utm_campaign=plugin&utm_content=tracker%2F146201&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F146201&utm_medium=issues&utm_source=github).
</bountysource-plugin>
| non_test | issue with removing bionics i tried removing my faulty bionics i had a trench knife a combat knife and first aid kits however upon selecting my bionic to remove it told me it needed the tools first i later got a pari of scissors and hen i could do the removal i suppose this is not intended want to back this issue we accept bounties via | 0 |
102,084 | 16,546,469,799 | IssuesEvent | 2021-05-28 01:04:20 | hugh-whitesource/classitransformers | https://api.github.com/repos/hugh-whitesource/classitransformers | opened | CVE-2021-29559 (High) detected in tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2021-29559 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e8/3e/bf817be24fe71c430775da74e839150d386de236dc35c26da15d7c9a57a3/tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/e8/3e/bf817be24fe71c430775da74e839150d386de236dc35c26da15d7c9a57a3/tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: classitransformers</p>
<p>Path to vulnerable library: classitransformers</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559>CVE-2021-29559</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"tensorflow-gpu","packageVersion":"2.4.0","packageFilePaths":["classitransformers"],"isTransitiveDependency":false,"dependencyTree":"tensorflow-gpu:2.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-29559","vulnerabilityDetails":"TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-29559 (High) detected in tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2021-29559 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/e8/3e/bf817be24fe71c430775da74e839150d386de236dc35c26da15d7c9a57a3/tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/e8/3e/bf817be24fe71c430775da74e839150d386de236dc35c26da15d7c9a57a3/tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: classitransformers</p>
<p>Path to vulnerable library: classitransformers</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.4.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559>CVE-2021-29559</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"tensorflow-gpu","packageVersion":"2.4.0","packageFilePaths":["classitransformers"],"isTransitiveDependency":false,"dependencyTree":"tensorflow-gpu:2.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-29559","vulnerabilityDetails":"TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559","cvss3Severity":"high","cvss3Score":"7.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in tensorflow gpu whl cve high severity vulnerability vulnerable library tensorflow gpu whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file classitransformers path to vulnerable library classitransformers dependency hierarchy x tensorflow gpu whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning an attacker can access data outside of bounds of heap allocated array in tf raw ops 
unicodeencode this is because the implementation assumes that the input value input splits pair specify a valid sparse tensor the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree tensorflow gpu isminimumfixversionavailable true minimumfixversion tensorflow tensorflow cpu tensorflow gpu basebranches vulnerabilityidentifier cve vulnerabilitydetails tensorflow is an end to end open source platform for machine learning an attacker can access data outside of bounds of heap allocated array in tf raw ops unicodeencode this is because the implementation assumes that the input value input splits pair specify a valid sparse tensor the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range vulnerabilityurl | 0 |
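The `tf.raw_ops.UnicodeEncode` issue in this record boils down to trusting `input_splits` to describe a valid layout over `input_values`. A hedged Python sketch of the missing invariant check (conceptual only; TensorFlow's actual fix lives in the C++ kernel):

```python
def splits_are_valid(num_values: int, splits: list) -> bool:
    # A valid splits array starts at 0, ends at num_values, and is
    # non-decreasing; anything else lets an index run out of bounds.
    if len(splits) < 2 or splits[0] != 0 or splits[-1] != num_values:
        return False
    return all(a <= b for a, b in zip(splits, splits[1:]))

print(splits_are_valid(5, [0, 2, 5]))     # True
print(splits_are_valid(5, [0, 7]))        # False: 7 reads past the 5 values
print(splits_are_valid(5, [0, 3, 2, 5]))  # False: not non-decreasing
```

Rejecting invalid splits up front turns a heap out-of-bounds read into a clean input-validation error.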
181,634 | 14,884,354,307 | IssuesEvent | 2021-01-20 14:29:31 | Fuenfgeld/ATeamDatenmanagementUndArchivierung | https://api.github.com/repos/Fuenfgeld/ATeamDatenmanagementUndArchivierung | closed | Creation of graphical database scheme | documentation good first issue | Database documentation:
- Create a graphical database scheme
- What are master data and what are transaction data tables | 1.0 | Creation of graphical database scheme - Database documentation:
- Create a graphical database scheme
- What are master data and what are transaction data tables | non_test | creation of graphical database scheme database documentation create a graphical database scheme what are master data and what are transaction data tables | 0 |
63,402 | 8,678,392,752 | IssuesEvent | 2018-11-30 19:46:22 | AboudyKreidieh/truck-code | https://api.github.com/repos/AboudyKreidieh/truck-code | opened | Update README | documentation | **Due date**: December 17, 2018
**Tasks**:
- [ ] Add a section about general code structure, with the picture I created earlier.
- [ ] Brief description of each software component.
- [ ] Reference to documentation | 1.0 | Update README - **Due date**: December 17, 2018
**Tasks**:
- [ ] Add a section about general code structure, with the picture I created earlier.
- [ ] Brief description of each software component.
- [ ] Reference to documentation | non_test | update readme due date december tasks add a section about general code structure with the picture i ve created before brief description of each software component reference to documentation | 0 |
109,990 | 9,422,816,305 | IssuesEvent | 2019-04-11 10:14:06 | tarantool/tarantool | https://api.github.com/repos/tarantool/tarantool | closed | test: vinyl/errinj_ddl.test.lua fails in parallel mode | flaky test vinyl | Tarantool version:
2.1
OS version:
Ubuntu 18.04
Bug description:
[001] vinyl/errinj_ddl.test.lua [ fail ]
[001]
[001] Test failed! Result content mismatch:
[001] --- vinyl/errinj_ddl.result Tue Mar 19 17:52:48 2019
[001] +++ vinyl/errinj_ddl.reject Tue Mar 19 18:07:55 2019
[001] @@ -504,6 +504,7 @@
[001] ...
[001] _ = s1:create_index('sk', {parts = {2, 'unsigned'}})
[001] ---
[001] +- error: Tuple field 2 required by space format is missing
[001] ...
[001] errinj.set("ERRINJ_VY_READ_PAGE_TIMEOUT", 0)
[001] ---
[001] @@ -511,23 +512,24 @@
[001] ...
[001] c1:get() -- false (transaction was aborted)
[001] ---
[001] +- true
[001] +...
[001] +c2:get() -- true
[001] +---
[001] +- true
[001] +...
[001] +s1:get(2) == nil
[001] +---
[001] - false
[001] ...
[001] -c2:get() -- true
[001] ----
[001] -- true
[001] -...
[001] -s1:get(2) == nil
[001] ----
[001] -- true
[001] -...
[001] s2:get(2) ~= nil
[001] ---
[001] - true
[001] ...
[001] s1.index.pk:count() == s1.index.sk:count()
[001] ---
[001] -- true
[001] +- error: '[string "return s1.index.pk:count() == s1.index.sk:cou..."]:1: attempt to
[001] + index field ''sk'' (a nil value)'
[001] ...
[001] s1:drop()
[001] ---
[001]
Steps to reproduce:
./test-run.py -j 50 --force vinyl/
Optional (but very desirable):
* coredump
* backtrace
* netstat
| 1.0 | test: vinyl/errinj_ddl.test.lua fails in parallel mode - Tarantool version:
2.1
OS version:
Ubuntu 18.04
Bug description:
[001] vinyl/errinj_ddl.test.lua [ fail ]
[001]
[001] Test failed! Result content mismatch:
[001] --- vinyl/errinj_ddl.result Tue Mar 19 17:52:48 2019
[001] +++ vinyl/errinj_ddl.reject Tue Mar 19 18:07:55 2019
[001] @@ -504,6 +504,7 @@
[001] ...
[001] _ = s1:create_index('sk', {parts = {2, 'unsigned'}})
[001] ---
[001] +- error: Tuple field 2 required by space format is missing
[001] ...
[001] errinj.set("ERRINJ_VY_READ_PAGE_TIMEOUT", 0)
[001] ---
[001] @@ -511,23 +512,24 @@
[001] ...
[001] c1:get() -- false (transaction was aborted)
[001] ---
[001] +- true
[001] +...
[001] +c2:get() -- true
[001] +---
[001] +- true
[001] +...
[001] +s1:get(2) == nil
[001] +---
[001] - false
[001] ...
[001] -c2:get() -- true
[001] ----
[001] -- true
[001] -...
[001] -s1:get(2) == nil
[001] ----
[001] -- true
[001] -...
[001] s2:get(2) ~= nil
[001] ---
[001] - true
[001] ...
[001] s1.index.pk:count() == s1.index.sk:count()
[001] ---
[001] -- true
[001] +- error: '[string "return s1.index.pk:count() == s1.index.sk:cou..."]:1: attempt to
[001] + index field ''sk'' (a nil value)'
[001] ...
[001] s1:drop()
[001] ---
[001]
Steps to reproduce:
./test-run.py -j 50 --force vinyl/
Optional (but very desirable):
* coredump
* backtrace
* netstat
| test | test vinyl errinj ddl test lua fails in parallel mode tarantool version os version ubuntu bug description vinyl errinj ddl test lua test failed result content mismatch vinyl errinj ddl result tue mar vinyl errinj ddl reject tue mar create index sk parts unsigned error tuple field required by space format is missing errinj set errinj vy read page timeout get false transaction was aborted true get true true get nil false get true true get nil true get nil true index pk count index sk count true error attempt to index field sk a nil value drop steps to reproduce test run py j force vinyl optional but very desirable coredump backtrace netstat | 1 |
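The reproduce step in this record (`./test-run.py -j 50 --force vinyl/`) works by running many test jobs in parallel until a timing-dependent failure surfaces. The same idea can be sketched generically — this is a Python illustration, not part of Tarantool's test harness; `stress` and its run/worker counts are made-up names:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def stress(cmd, runs=50, workers=8):
    """Run `cmd` repeatedly in parallel and count non-zero exits --
    a generic way to shake out race-dependent (flaky) failures."""
    def one(_):
        return subprocess.run(cmd, capture_output=True).returncode

    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = list(pool.map(one, range(runs)))
    failures = sum(1 for code in codes if code != 0)
    return failures, runs

if __name__ == "__main__":
    failed, total = stress(["true"], runs=10, workers=4)
    print(f"{failed}/{total} runs failed")
```

A test that only fails under load shows a non-zero failure count here long before a single sequential run would.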
215,324 | 16,664,584,731 | IssuesEvent | 2021-06-06 23:34:40 | AtlasOfLivingAustralia/la-pipelines | https://api.github.com/repos/AtlasOfLivingAustralia/la-pipelines | opened | Museum Victoria provider for OZCAM | collection-community-testing testing-findings | Number of Records differ:
test site
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22#tab_recordsView
Live site - filters turned off
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid:dr342&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700#tab_recordsView
Test site - filters turned off
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22&fq=family%3A%22Halictidae%22#tab_recordsView
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
Page 2 of 11
Live Site - filters off
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22#tab_recordsView
biocache-test.ala.org.au
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 3 of 11
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22#tab_recordsView
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 4 of 11
Test site Halictidae
Taxa not alphabetically sorted
Page 5 of 11
Live site alphabetically sorted
Page 6 of 11
Number of specimens match for checked taxa
Test site
Live site
Halictidae test site map looks same as live site map
Test site map
Page 7 of 11
Live site Halictidae map
Test site open record similar data but slightly different wording
https://biocache-test.ala.org.au/occurrences/9b325b94-09a4-4ed7-bb24-d5796bd18c3a
Page 8 of 11
Live site
https://biocache.ala.org.au/occurrences/9b325b94-09a4-4ed7-bb24-d5796bd18c3a
Map for species looks same:
Test site
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22&fq=family%3A%22Halictidae%22&fq=taxon_name%3A%22Lasioglossum%20(Chilalictus)%20alacarinatum%22#tab_mapView
Record: Entomology:T15341 | Occurrence record | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Search: Data resource: Museums Victoria provider for
Page 9 of 11
Live site
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22&fq=taxon_name%3A%22Lasioglossum%20(Chilalictus)%20alacarinatum%22#tab_mapView
OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache-test.ala.org.au
Page 10 of 11
Just noticed something on the live site when filters are turned on.
For Museums Victoria records for the bee family Halictidae 11,465 are excluded as duplicates; however, when that is applied there are only 4 images shown.
https://biocache.ala.org.au/occurrences/search?q=lsid%3Aurn%3Alsid%3Abiodiversity.org.au%3Aafd.taxon%3Ad658068e-c8h-44c5-b0fcbb8baa0ff20b&qualityProfile=ALA&fq=institution_name%3A%22Museums%20Victoria%22#tab_recordImages
When the duplicates filter is turned off many more images are shown. It appears the duplicates but with images are excluded.
https://biocache.ala.org.au/occurrences/search?q=lsid%3Aurn%3Alsid%3Abiodiversity.org.au%3Aafd.taxon%3Ad658068e-c8h-44c5-b0fcbb8baa0ff20b&qualityProfile=ALA&fq=institution_name%3A%22Museums%20Victoria%22&disableQualityFilter=duplicates#tab_recordImages
Search: FAMILY: HALICTIDAE | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 11 of 11
Nice job.
Hope this helps. | 2.0 | Museum Victoria provider for OZCAM - Number of Records differ:
test site
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22#tab_recordsView
Live site - filters turned off
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid:dr342&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700#tab_recordsView
Test site - filters turned off
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22&fq=family%3A%22Halictidae%22#tab_recordsView
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
Page 2 of 11
Live Site - filters off
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22#tab_recordsView
biocache-test.ala.org.au
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 3 of 11
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22#tab_recordsView
Search: Data resource: Museums Victoria provider for OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 4 of 11
Test site Halictidae
Taxa not alphabetically sorted
Page 5 of 11
Live site alphabetically sorted
Page 6 of 11
Number of specimens match for checked taxa
Test site
Live site
Halictidae test site map looks same as live site map
Test site map
Page 7 of 11
Live site Halictidae map
Test site open record similar data but slightly different wording
https://biocache-test.ala.org.au/occurrences/9b325b94-09a4-4ed7-bb24-d5796bd18c3a
Page 8 of 11
Live site
https://biocache.ala.org.au/occurrences/9b325b94-09a4-4ed7-bb24-d5796bd18c3a
Map for species looks same:
Test site
https://biocache-test.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&disableAllQualityFilters=true&fq=collection_uid%3A%22co39%22&fq=family%3A%22Halictidae%22&fq=taxon_name%3A%22Lasioglossum%20(Chilalictus)%20alacarinatum%22#tab_mapView
Record: Entomology:T15341 | Occurrence record | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Search: Data resource: Museums Victoria provider for
Page 9 of 11
Live site
https://biocache.ala.org.au/occurrences/search?q=data_resource_uid%3Adr342&qualityProfile=ALA&disableQualityFilter=spatiallysuspect&disableQualityFilter=scientificname&disableQualityFilter=location&disableQualityFilter=duplicates&disableQualityFilter=location-uncertainty&disableQualityFilter=userassertions&disableQualityFilter=outliers&disableQualityFilter=recordtype&disableQualityFilter=occurrence-status&disableQualityFilter=dates-post-1700&fq=family%3A%22Halictidae%22&fq=taxon_name%3A%22Lasioglossum%20(Chilalictus)%20alacarinatum%22#tab_mapView
OZCAM | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache-test.ala.org.au
Page 10 of 11
Just noticed something on the live site when filters are turned on.
For Museums Victoria records for the bee family Halictidae 11,465 are excluded as duplicates; however, when that is applied there are only 4 images shown.
https://biocache.ala.org.au/occurrences/search?q=lsid%3Aurn%3Alsid%3Abiodiversity.org.au%3Aafd.taxon%3Ad658068e-c8h-44c5-b0fcbb8baa0ff20b&qualityProfile=ALA&fq=institution_name%3A%22Museums%20Victoria%22#tab_recordImages
When the duplicates filter is turned off many more images are shown. It appears the duplicates but with images are excluded.
https://biocache.ala.org.au/occurrences/search?q=lsid%3Aurn%3Alsid%3Abiodiversity.org.au%3Aafd.taxon%3Ad658068e-c8h-44c5-b0fcbb8baa0ff20b&qualityProfile=ALA&fq=institution_name%3A%22Museums%20Victoria%22&disableQualityFilter=duplicates#tab_recordImages
Search: FAMILY: HALICTIDAE | Occurrence records | Atlas of Living Australia
Atlas of Living Australia
biocache.ala.org.au
Page 11 of 11
Nice job.
Hope this helps. | test | museum victoria provider for ozcam number of records differ test site hrps biocache test ala org au occurrences search q data resource uid disableallqualityfilters true fq uid tab recordsview live site filters turned off hrps biocache ala org au occurrences search q data resource uid disablequalityfilter disablequalityfilter disablequalityfilter disablequalityfilter duplicates disablequalityfilter on uncertainty disablequalityfilter userasser disablequalityfilter outliers disablequalityfilter recordtype disablequalityfilter occurrence status disablequalityfilter dates post tab recordsview test site filters turned off hrps biocache test ala org au occurrences search q data resource uid disableallqualityfilters true fq uid fq family tab recordsview search data resource museums victoria provider for ozcam occurrence records atlas of living australia atlas of living australia page of live site filters off hrps biocache ala org au occurrences search q data resource uid qualityprofile ala disablequalityfilter disablequalityfilter disablequalityfilter disablequalityfilter duplicates disablequalityfilter on uncertainty disablequalityfilter userasser disablequalityfilter outliers disablequalityfilter recordtype disablequalityfilter occurrence status disablequalityfilter dates post fq family tab recordsview biocache test ala org au search data resource museums victoria provider for ozcam occurrence records atlas of living australia atlas of living australia biocache ala org au page of hrps biocache ala org au occurrences search q data resource uid qualityprofile ala disablequalityfilter disablequalityfilter disablequalityfilter disablequalityfilter duplicates disablequalityfilter on uncertainty disablequalityfilter userasser disablequalityfilter outliers disablequalityfilter recordtype disablequalityfilter occurrence status disablequalityfilter dates post fq family tab recordsview search data resource museums victoria provider for ozcam occurrence 
records atlas of living australia atlas of living australia biocache ala org au page of test site taxa not sorted page of live site sorted page of number of specimens match for checked taxa test site live site test site map looks same as live site map test site map page of live site map test site open record similar data but slightly different wording hrps biocache test ala org au occurrences page of live site hrps biocache ala org au occurrences map for species looks same test site hrps biocache test ala org au occurrences search q data resource uid disableallqualityfilters true fq uid fq family fq taxon name chilalictus alacarinatum tab mapview record entomology occurrence record atlas of living australia atlas of living australia biocache ala org au search data resource museums victoria provider for page of live site hrps biocache ala org au occurrences search q data resource uid qualityprofile ala disablequalityfilter disablequalityfilter disablequalityfilter disablequalityfilter duplicates disablequalityfilter on uncertainty disablequalityfilter userasser disablequalityfilter outliers disablequalityfilter recordtype disablequalityfilter occurrence status disablequalityfilter dates post fq family fq taxon name chilalictus tab mapview ozcam occurrence records atlas of living australia atlas of living australia biocache test ala org au page of just something on the live site when filters are turned on for museums victoria records for the bee family are excluded as duplicates however when that is applied there are only images shown hrps biocache ala org au occurrences search q lsid org au taxon qualityprofile ala fq name tab recordimages when the duplicates filter is turned off many more images are shown it appears the duplicates but with images are excluded hrps biocache ala org au occurrences search q lsid org au taxon qualityprofile ala fq name disa blequalityfilter duplicates tab recordimages search family halictidae occurrence records atlas of living 
australia atlas of living australia biocache ala org au page of nice job hope this helps | 1 |
297,707 | 25,757,904,595 | IssuesEvent | 2022-12-08 17:51:50 | heroku/libcnb.rs | https://api.github.com/repos/heroku/libcnb.rs | opened | libcnb-test: Expose Pack build output as combined stdout + stderr | libcnb-test | So in writing some `libcnb-test` integration tests now, I found it frustrating that there's no way to get a unified stdout+stderr output, since in most cases I don't really care about testing whether I've output to stdout vs stderr (after all, `log_error` will take care of that, so there isn't much room for user error).
However, I _do_ care about how the output renders as a whole, which I can't check properly if half of it is on stdout, and the other half is on stderr. (Even if I stitch them together afterwards, the ordering will then be wrong.)
Also, relatedly, I seem to be regularly running into occasions where I'm debugging a test, and one of my output assertions is failing, but it's not clear why, because the useful part of the build log is on the other stream. But of course the test stops running after the first assertion failure (given it's a panic), and so I never get to see the assertion against the other stream. This means I then have to spend time thinking about the "sensible order" of asserting stdout vs stderr, and varying that order depending on whether the test is testing a success case, or a failure case.
All in all, it seems this is adding friction, for very little test coverage gain.
As such, I'm wondering if we shouldn't just combine the Pack CLI's stdout+stderr into one (tie the two together in the `Command`), and have libcnb-test only expose that as a single output?
Pros:
- Reduced mental burden of having to think about whether one is asserting against stdout or stderr
- Able to test visual UX of full info/warning/error output in cases that are currently untestable (if the output straddles stdout and stderr)
- Less frustrating debugging of test failures due to the assertion issue mentioned above
Cons:
- Not able to test that particular log output is on the expected stream. (Though IMO this isn't a significant drawback given we plan on leaning heavily on shared utils like `log_error()`)
We could also explore providing both forms, though that does seem to increase implementation and libcnb-test public API complexity, for potentially little benefit. | 1.0 | libcnb-test: Expose Pack build output as combined stdout + stderr - So in writing some `libcnb-test` integration tests now, I found it frustrating that there's no way to get a unified stdout+stderr output, since in most cases I don't really care about testing whether I've output to stdout vs stderr (after all, `log_error` will take care of that, so there isn't much room for user error).
However, I _do_ care about how the output renders as a whole, which I can't check properly if half of it is on stdout, and the other half is on stderr. (Even if I stitch them together afterwards, the ordering will then be wrong.)
Also, relatedly, I seem to be regularly running into occasions where I'm debugging a test, and one of my output assertions is failing, but it's not clear why, because the useful part of the build log is on the other stream. But of course the test stops running after the first assertion failure (given it's a panic), and so I never get to see the assertion against the other stream. This means I then have to spend time thinking about the "sensible order" of asserting stdout vs stderr, and varying that order depending on whether the test is testing a success case, or a failure case.
All in all, it seems this is adding friction, for very little test coverage gain.
As such, I'm wondering if we shouldn't just combine the Pack CLI's stdout+stderr into one (tie the two together in the `Command`), and have libcnb-test only expose that as a single output?
Pros:
- Reduced mental burden of having to think about whether one is asserting against stdout or stderr
- Able to test visual UX of full info/warning/error output in cases that are currently untestable (if the output straddles stdout and stderr)
- Less frustrating debugging of test failures due to the assertion issue mentioned above
Cons:
- Not able to test that particular log output is on the expected stream. (Though IMO this isn't a significant drawback given we plan on leaning heavily on shared utils like `log_error()`)
We could also explore providing both forms, though that does seem to increase implementation and libcnb-test public API complexity, for potentially little benefit. | test | libcnb test expose pack build output as combined stdout stderr so in writing some libcnb test integration tests now i found it frustrating that there s no way to get a unified stdout stderr output since in most cases i don t really care about testing whether i ve output to stdout vs stderr after all log error will take care of that so there isn t much room for user error however i do care about how the output renders as a whole which i can t check properly if half of it is on stdout and the other half is on stderr even if i stitch them together afterwards the ordering will then be wrong also relatedly i seems to be regularly running into occasions where i m debugging a test and one of my output assertions is failing but it s not clear why because the useful part of the build log is on the other stream but of course the test stops running after the first assertion failure given it s a panic and so i never get to see the assertion against the other stream this means i then have to spend time thinking about the sensible order of asserting stdout vs stderr and varying that order depending on whether the test is testing a success case or a failure case all in all it seems this is adding friction for very little test coverage gain as such i m wondering if we shouldn t just combine the pack cli s stdout stderr into one tie the two together in the command and have libcnb test only expose that as a single output pros reduced mental burden of having to think about whether one is asserting against stdout or stderr able to test visual ux of full info warning error output in cases that are currently untestable if the output straddles stdout and stderr less frustrating debugging of test failures due to the assertion issue mentioned above cons not able to test that particular log output is on the expected 
stream though imo this isn t a significant drawback given we plan on leaning heavily on shared utils like log error we could also explore providing both forms though that does seem to increase implementation and libcnb test public api complexity for potentially little benefit | 1 |
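The change this record asks for — tying the child process's stderr to the same pipe as its stdout, so assertions run against one stream with the real interleaving preserved — is easy to demonstrate outside Rust. A minimal sketch in Python (`run_combined` is a hypothetical helper for illustration, not libcnb-test API):

```python
import subprocess

def run_combined(cmd):
    """Run `cmd` with stderr redirected into stdout, so the caller sees a
    single stream in the order the process actually wrote its lines."""
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # tie the two streams to one pipe
        text=True,
    )
    return result.returncode, result.stdout

if __name__ == "__main__":
    code, output = run_combined(
        ["sh", "-c", "echo 'building'; echo 'warning: x' 1>&2; echo 'done'"]
    )
    print(output, end="")
```

Because both file descriptors point at the same pipe, a line written to stderr lands between the stdout lines around it — the unified output the record argues for.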
818,790 | 30,704,666,527 | IssuesEvent | 2023-07-27 04:38:36 | ladybirdweb/agora-invoicing-community | https://api.github.com/repos/ladybirdweb/agora-invoicing-community | opened | Email Changes | Bug High Priority | **Email:** subscription_going_to_end_mail
**Description:**
- This email is sent out to users before product expiry reminding them to renew the product
- This email is sent out only to those who have not enabled auto renewal
**Cron Name:** Subscription renewal reminder - Manual payment
**Tool Tip:** This cron is to trigger emails which are sent out to users before product expiry reminding them to renew the product. This email is sent out only to those who have not enabled auto renewal
**Frequency:** 30 days in advance, 15 days, 7 days, day of expiry. Total 4 emails
| 1.0 | Email Changes - **Email:** subscription_going_to_end_mail
**Description:**
- This email is sent out to users before product expiry reminding them to renew the product
- This email is sent out only to those who have not enabled auto renewal
**Cron Name:** Subscription renewal reminder - Manual payment
**Tool Tip:** This cron is to trigger emails which are sent out to users before product expiry reminding them to renew the product. This email is sent out only to those who have not enabled auto renewal
**Frequency:** 30 days in advance, 15 days, 7 days, day of expiry. Total 4 emails
| non_test | email changes email subscription going to end mail description this email is sent out to users before product expiry reminding them to renew the product this email is sent out only to those who have not enabled auto renewal cron name subscription renewal reminder manual payment tool tip this cron is to trigger emails which are sent out to users before product expiry reminding them to renew the product this email is sent out only to those who have not enabled auto renewal frequency days in advance days days day of expiry total emails | 0
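The frequency described above (30 days in advance, 15 days, 7 days, and the day of expiry) is a fixed set of offsets from the product's expiry date, so the cron's send dates reduce to a small date calculation. A sketch under that assumption — `reminder_dates` and `REMINDER_OFFSETS` are illustrative names, not part of the Agora codebase:

```python
from datetime import date, timedelta

# Days before expiry on which a reminder goes out, per the frequency above.
REMINDER_OFFSETS = (30, 15, 7, 0)

def reminder_dates(expiry: date) -> list:
    """Return the four dates on which the renewal-reminder email
    would be sent for a subscription expiring on `expiry`."""
    return [expiry - timedelta(days=offset) for offset in REMINDER_OFFSETS]

if __name__ == "__main__":
    for day in reminder_dates(date(2023, 8, 31)):
        print(day.isoformat())
```

The real cron would additionally skip users who have auto renewal enabled, as the description states.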
142,417 | 11,472,026,554 | IssuesEvent | 2020-02-09 14:57:10 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | closed | RESBLDG - Revision Fee- TEST | Test Accepted | Revision fee did not generate automatically when a revision was submitted - zoning had resulted the review as a resubmit and it should have triggered the revision fee. Application went straight to revision assignment.
TEST 259399

| 1.0 | RESBLDG - Revision Fee- TEST - Revision fee did not generate automatically when a revision was submitted - zoning had resulted the review as a resubmit and it should have triggered the revision fee. Application went straight to revision assignment.
TEST 259399

| test | resbldg revision fee test revision fee did not generate automatically when a revision was submitted zoning had resulted the review as a resubmit and it should have triggered the revision fee application went straight to revision assignment test | 1 |
336,071 | 24,486,752,489 | IssuesEvent | 2022-10-09 14:41:42 | JonasPfeifer05/rui | https://api.github.com/repos/JonasPfeifer05/rui | closed | Resize function for Component | documentation | This is just for me to remember I should save and not always create a new Layoutcomponent and add a resize function | 1.0 | Resize function for Component - This is just for me to remember I should save and not always create a new Layoutcomponent and add a resize function | non_test | resize function for component this is just for me to remember i should save and not always create a new layoutcomponent and add a resize function | 0 |
522,008 | 15,146,817,349 | IssuesEvent | 2021-02-11 08:03:39 | CMSCompOps/WmAgentScripts | https://api.github.com/repos/CMSCompOps/WmAgentScripts | opened | Filtering and automation panel: Implement functions to get AAA settings | New Feature Priority: Medium | **Impact of the new feature**
Management of resubmissions
**Is your feature request related to a problem? Please describe.**
Motivation: #731
**Describe the solution you'd like**
These functions [1] should get the AAA settings of the workflow.
[1] https://github.com/CMSCompOps/WmAgentScripts/blob/21cb7dc8efbd992d5282b24e8598a8f842b3f70c/assistance/Workflow.py#L118-L136
**Describe alternatives you've considered**
-
**Additional context**
@hbakhshi FYI
| 1.0 | Filtering and automation panel: Implement functions to get AAA settings - **Impact of the new feature**
Management of resubmissions
**Is your feature request related to a problem? Please describe.**
Motivation: #731
**Describe the solution you'd like**
These functions [1] should get the AAA settings of the workflow.
[1] https://github.com/CMSCompOps/WmAgentScripts/blob/21cb7dc8efbd992d5282b24e8598a8f842b3f70c/assistance/Workflow.py#L118-L136
**Describe alternatives you've considered**
-
**Additional context**
@hbakhshi FYI
| non_test | filtering and automation panel implement functions to get aaa settings impact of the new feature management of resubmissions is your feature request related to a problem please describe motivation describe the solution you d like these functions should get the aaa settings of the workflow describe alternatives you ve considered additional context hbakhshi fyi | 0 |
347,639 | 31,237,756,411 | IssuesEvent | 2023-08-20 13:35:51 | NadalB/Precision-Playhouse-Assault-cube-server | https://api.github.com/repos/NadalB/Precision-Playhouse-Assault-cube-server | closed | Maptop | bug #2 test phase | Hello,
New bug, an image speaks more than a text
https://gyazo.com/c506eec19bcced92391ed9d65d8e51af
PS: remember when we talked about that time thing in mapsettings and I said 1mn should be possible... I thought it was an indicator time to how long it would take to complete the map, not the game time for the map ahah now I understand why can't under 15 ! Also I saw you fixed averages of !ratemap. But still can't see map stats : what is it supposed to show ? Also don't forget that sub access thing !!
Chobbz | 1.0 | Maptop - Hello,
New bug, an image speaks more than a text
https://gyazo.com/c506eec19bcced92391ed9d65d8e51af
PS: remember when we talked about that time thing in mapsettings and I said 1mn should be possible... I thought it was an indicator time to how long it would take to complete the map, not the game time for the map ahah now I understand why can't under 15 ! Also I saw you fixed averages of !ratemap. But still can't see map stats : what is it supposed to show ? Also don't forget that sub access thing !!
Chobbz | test | maptop hello new bug an image speaks more than a text ps remember when we talked about that time thing in mapsettings and i said should be possible i thought it was an indicator time to how long it would take to complete the map not the game time for the map ahah now i understand why can t under aso i saw you fixed averages of ratemap but still can t see map stats what is it supposed to show also don t forget that sub access thing chobbz | 1 |
11,256 | 13,226,501,211 | IssuesEvent | 2020-08-18 00:05:37 | lmn8/nord-geshi | https://api.github.com/repos/lmn8/nord-geshi | closed | Release 0.5.2 | context-api scope-compatibility scope-stability type-task | Improve CSS ID selector highlighting (mapped consistently to `$nord7` = `#8fbcbb` instead of `$nord15` = `#b48ead`). | True | Release 0.5.2 - Improve CSS ID selector highlighting (mapped consistently to `$nord7` = `#8fbcbb` instead of `$nord15` = `#b48ead`). | non_test | release improve css id selector highlighting mapped consistently to instead of | 0 |
246,574 | 20,884,736,740 | IssuesEvent | 2022-03-23 02:54:50 | vaop/vaop | https://api.github.com/repos/vaop/vaop | opened | Flight Unit Testing | testing | Unit testing needs to be added to the Flight Domain, including all of the Actions, DataTransferObjects, and Observers. | 1.0 | Flight Unit Testing - Unit testing needs to be added to the Flight Domain, including all of the Actions, DataTransferObjects, and Observers. | test | flight unit testing unit testing needs to be added to the flight domain including all of the actions datatransferobjects and observers | 1
181,923 | 21,664,468,081 | IssuesEvent | 2022-05-07 01:27:02 | eldorplus/portfolio | https://api.github.com/repos/eldorplus/portfolio | closed | CVE-2018-11695 (High) detected in opennms-opennms-source-23.0.3-1 - autoclosed | security vulnerability | ## CVE-2018-11695 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-23.0.3-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/eldorplus/portfolio/commits/d39c6030d1112cc864fbcffa04099b507e753f36">d39c6030d1112cc864fbcffa04099b507e753f36</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (13)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /portfolio/node_modules/node-sass/src/sass_context_wrapper.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/expand.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/parser.hpp
- /portfolio/node_modules/node-sass/src/libsass/src/util.hpp
- /portfolio/node_modules/node-sass/src/libsass/src/cssize.cpp
- /portfolio/node_modules/node-gyp/gyp/pylib/gyp/MSVSUtil.py
- /portfolio/node_modules/node-sass/src/libsass/src/functions.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /portfolio/node_modules/node-sass/src/callback_bridge.h
- /portfolio/node_modules/node-sass/src/libsass/src/sass.cpp
- /portfolio/node_modules/node-sass/src/sass_context_wrapper.h
- /portfolio/node_modules/node-sass/src/libsass/src/eval.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/debugger.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.2. A NULL pointer dereference was found in the function Sass::Expand::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695>CVE-2018-11695</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-11695 (High) detected in opennms-opennms-source-23.0.3-1 - autoclosed - ## CVE-2018-11695 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-23.0.3-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/eldorplus/portfolio/commits/d39c6030d1112cc864fbcffa04099b507e753f36">d39c6030d1112cc864fbcffa04099b507e753f36</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (13)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /portfolio/node_modules/node-sass/src/sass_context_wrapper.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/expand.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/parser.hpp
- /portfolio/node_modules/node-sass/src/libsass/src/util.hpp
- /portfolio/node_modules/node-sass/src/libsass/src/cssize.cpp
- /portfolio/node_modules/node-gyp/gyp/pylib/gyp/MSVSUtil.py
- /portfolio/node_modules/node-sass/src/libsass/src/functions.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /portfolio/node_modules/node-sass/src/callback_bridge.h
- /portfolio/node_modules/node-sass/src/libsass/src/sass.cpp
- /portfolio/node_modules/node-sass/src/sass_context_wrapper.h
- /portfolio/node_modules/node-sass/src/libsass/src/eval.cpp
- /portfolio/node_modules/node-sass/src/libsass/src/debugger.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.2. A NULL pointer dereference was found in the function Sass::Expand::operator which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11695>CVE-2018-11695</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in opennms opennms source autoclosed cve high severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries portfolio node modules node sass src sass context wrapper cpp portfolio node modules node sass src libsass src expand cpp portfolio node modules node sass src libsass src parser hpp portfolio node modules node sass src libsass src util hpp portfolio node modules node sass src libsass src cssize cpp portfolio node modules node gyp gyp pylib gyp msvsutil py portfolio node modules node sass src libsass src functions cpp portfolio node modules node sass src libsass src prelexer cpp portfolio node modules node sass src callback bridge h portfolio node modules node sass src libsass src sass cpp portfolio node modules node sass src sass context wrapper h portfolio node modules node sass src libsass src eval cpp portfolio node modules node sass src libsass src debugger hpp vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass expand operator which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
230,970 | 17,658,702,764 | IssuesEvent | 2021-08-21 03:48:18 | AprilSylph/XKit-Rewritten | https://api.github.com/repos/AprilSylph/XKit-Rewritten | closed | Wiki: Update link to content moved from contributing.md | documentation | The [wiki's installation section](https://github.com/AprilSylph/XKit-Rewritten/wiki/Installation#chrome) describes instructions for installing a webextension in developer mode and links to https://github.com/AprilSylph/XKit-Rewritten/blob/master/CONTRIBUTING.md, but this content has been moved to https://github.com/AprilSylph/XKit-Rewritten/blob/master/docs/Chapter%202%20-%20Getting%20started.md.
(Can't believe you can't PR a Github wiki! Seems like an odd omission.) | 1.0 | Wiki: Update link to content moved from contributing.md - The [wiki's installation section](https://github.com/AprilSylph/XKit-Rewritten/wiki/Installation#chrome) describes instructions for installing a webextension in developer mode and links to https://github.com/AprilSylph/XKit-Rewritten/blob/master/CONTRIBUTING.md, but this content has been moved to https://github.com/AprilSylph/XKit-Rewritten/blob/master/docs/Chapter%202%20-%20Getting%20started.md.
(Can't believe you can't PR a Github wiki! Seems like an odd omission.) | non_test | wiki update link to content moved from contributing md the describes instructions for installing a webextension in developer mode and links to but this content has been moved to can t believe you can t pr a github wiki seems like an odd omission | 0 |
82,794 | 7,852,916,584 | IssuesEvent | 2018-06-20 15:49:14 | moby/moby | https://api.github.com/repos/moby/moby | closed | Flakey test : TestDaemonNoSpaceLeftOnDeviceError on powerpc | area/testing | ```
12:03:26 FAIL: docker_cli_daemon_test.go:1800: DockerDaemonSuite.TestDaemonNoSpaceLeftOnDeviceError
12:03:26
12:03:26 [d9976ae56ed66] waiting for daemon to start
12:03:26 [d9976ae56ed66] exiting daemon
12:03:26 docker_cli_daemon_test.go:1817:
12:03:26 s.d.Start(c, "--data-root", filepath.Join(testDir, "test-mount"))
12:03:26 /go/src/github.com/docker/docker/internal/test/daemon/daemon.go:203:
12:03:26 t.Fatalf("Error starting daemon with arguments: %v", args)
12:03:26 ... Error: Error starting daemon with arguments: [--data-root /tmp/no-space-left-on-device-test813660284/test-mount]
```
https://jenkins.dockerproject.org/job/Docker-PRs-powerpc/10203/console from https://github.com/moby/moby/pull/37243. | 1.0 | Flakey test : TestDaemonNoSpaceLeftOnDeviceError on powerpc - ```
12:03:26 FAIL: docker_cli_daemon_test.go:1800: DockerDaemonSuite.TestDaemonNoSpaceLeftOnDeviceError
12:03:26
12:03:26 [d9976ae56ed66] waiting for daemon to start
12:03:26 [d9976ae56ed66] exiting daemon
12:03:26 docker_cli_daemon_test.go:1817:
12:03:26 s.d.Start(c, "--data-root", filepath.Join(testDir, "test-mount"))
12:03:26 /go/src/github.com/docker/docker/internal/test/daemon/daemon.go:203:
12:03:26 t.Fatalf("Error starting daemon with arguments: %v", args)
12:03:26 ... Error: Error starting daemon with arguments: [--data-root /tmp/no-space-left-on-device-test813660284/test-mount]
```
https://jenkins.dockerproject.org/job/Docker-PRs-powerpc/10203/console from https://github.com/moby/moby/pull/37243. | test | flakey test testdaemonnospaceleftondeviceerror on powerpc fail docker cli daemon test go dockerdaemonsuite testdaemonnospaceleftondeviceerror waiting for daemon to start exiting daemon docker cli daemon test go s d start c data root filepath join testdir test mount go src github com docker docker internal test daemon daemon go t fatalf error starting daemon with arguments v args error error starting daemon with arguments from | 1 |
135,374 | 10,980,363,609 | IssuesEvent | 2019-11-30 13:47:48 | influxdata/influxdb | https://api.github.com/repos/influxdata/influxdb | closed | Flaky Test: TestCoordinatingTaskService_ClaimTaskUpdatesLatestCompleted | flaky test wontfix | Test is consistently failing on master
```
--- FAIL: TestCoordinatingTaskService_ClaimTaskUpdatesLatestCompleted (0.00s)
middleware_test.go:260: failed up update latest completed in claimed task
FAIL
``` | 1.0 | Flaky Test: TestCoordinatingTaskService_ClaimTaskUpdatesLatestCompleted - Test is consistently failing on master
```
--- FAIL: TestCoordinatingTaskService_ClaimTaskUpdatesLatestCompleted (0.00s)
middleware_test.go:260: failed up update latest completed in claimed task
FAIL
``` | test | flaky test testcoordinatingtaskservice claimtaskupdateslatestcompleted test is consistently failing on master fail testcoordinatingtaskservice claimtaskupdateslatestcompleted middleware test go failed up update latest completed in claimed task fail | 1 |
173,932 | 13,450,019,133 | IssuesEvent | 2020-09-08 17:50:12 | guardianproject/orbot | https://api.github.com/repos/guardianproject/orbot | closed | VPN mode is barely usable on Samsung Galaxy S9+ | PLEASE TEST bug | I have enabled AUTO for all the HTTP, DNS and SOCKS ports, the Tor circuit is established 100%, but VPN mode works only 1 out of 10 tries. Most times the app under VPN mode cannot connect to the internet at all, sometimes it can only reach clearnet.
Constantly restarting, enabling and disabling VPN somehow makes it work, but I am not sure why | 1.0 | VPN mode is barely usable on Samsung Galaxy S9+ - I have enabled AUTO for all the HTTP, DNS and SOCKS ports, the Tor circuit is established 100%, but VPN mode works only 1 out of 10 tries. Most times the app under VPN mode cannot connect to the internet at all, sometimes it can only reach clearnet.
Constantly restarting, enabling and disabling VPN somehow makes it work, but I am not sure why | test | vpn mode is barely usable on samsung galaxy i have enabled auto for all the http dns and socks ports the tor circuit is established but vpn mode works only out of tries most times the app under vpn mode cannot connect to the internet at all sometimes it can only reach clearnet constantly restarting enabling and disabling vpn somehow makes it work but i am not sure why | 1
326,218 | 27,979,488,144 | IssuesEvent | 2023-03-26 01:02:52 | F4KER-X/TalentVault-SOEN-341-Project-2023 | https://api.github.com/repos/F4KER-X/TalentVault-SOEN-341-Project-2023 | opened | UAT 1.3: Sign Up V3 | user acceptance test User Story 1 |
**User Acceptance Flow**
1. User leaves some fields empty
2. User clicks on Sign up button
3. Error is displayed to the user to fill out the remaining fields
4. No HTTP request is made and no redirection happens | 1.0 | UAT 1.3: Sign Up V3 -
**User Acceptance Flow**
1. User leaves some fields empty
2. User clicks on Sign up button
3. Error is displayed to the user to fill out the remaining fields
4. No HTTP request is made and no redirection happens | test | uat sign up user acceptance flow user leaves some fields empty user clicks on sign up button error is displayed to the user to fill out the remaining fields no http request is made and no redirection happens | 1 |
365,466 | 25,537,117,551 | IssuesEvent | 2022-11-29 12:50:31 | shuttle-hq/shuttle | https://api.github.com/repos/shuttle-hq/shuttle | closed | Document self-deployments | documentation self-deployment | The entire platform, client and server code, is open-source and its code is in this repo. However, self-deployment is not documented anywhere so it makes it hard for someone to implement self-deployments by themselves without requiring help from Discord.
### What we can do
Document, as a "getting started" step in the `shuttle-service` crate and in the `README.md` here, that self-deployment is an option and point to a small guide (probably another `.md` file in the root of the repo or a wiki page) which explains how to self-deploy. | 1.0 | Document self-deployments - The entire platform, client and server code, is open-source and its code is in this repo. However, self-deployment is not documented anywhere so it makes it hard for someone to implement self-deployments by themselves without requiring help from Discord.
### What we can do
Document, as a "getting started" step in the `shuttle-service` crate and in the `README.md` here, that self-deployment is an option and point to a small guide (probably another `.md` file in the root of the repo or a wiki page) which explains how to self-deploy. | non_test | document self deployments the entire platform client and server code is open source and its code is in this repo however self deployment is not documented anywhere so it makes it hard for someone to implement self deployments by themselves without requiring help from discord what we can do document as a getting started step in the shuttle service crate and in the readme md here that self deployment is an option and point to a small guide probably another md file in the root of the repo or a wiki page which explains how to self deploy | 0 |
524,404 | 15,213,231,181 | IssuesEvent | 2021-02-17 11:30:03 | HSLdevcom/bultti | https://api.github.com/repos/HSLdevcom/bultti | closed | Bump emission class on equipment with NoxBooster installed | Priority 2 enhancement | A device can be installed on vehicles to make their emission class ranking better. Bump the emission class rank on vehicles where Bultti can determine that a device has been installed.
The installed status can be found in the JORE ajoneuvo table in the field: "pakokaauspuhd".
Screenshot of the option from vehicle registry:

| 1.0 | Bump emission class on equipment with NoxBooster installed - A device can be installed on vehicles to make their emission class ranking better. Bump the emission class rank on vehicles where Bultti can determine that a device has been installed.
The installed status can be found in the JORE ajoneuvo table in the field: "pakokaauspuhd".
Screenshot of the option from vehicle registry:

| non_test | bump emission class on equipment with noxbooster installed a device can be installed on vehicles to make their emission class ranking better bump the emission class rank on vehicles where bultti can determine that a device has been installed the installed status can be found in jore ajoneuvo table in the field pakokaauspuhd screenshot of the option from vehicle registry | 0 |
503,900 | 14,601,272,181 | IssuesEvent | 2020-12-21 08:24:15 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.disneyplus.com - site is not usable | browser-focus-geckoview engine-gecko priority-important | <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63954 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.disneyplus.com/login
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
Does not load and buttons are unclickable
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.disneyplus.com - site is not usable - <!-- @browser: Firefox Mobile 84.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63954 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.disneyplus.com/login
**Browser / Version**: Firefox Mobile 84.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
Does not load and buttons are unclickable
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description buttons or links not working steps to reproduce does not load and buttons are unclickable browser configuration none from with ❤️ | 0 |
247,160 | 18,857,380,754 | IssuesEvent | 2021-11-12 08:31:55 | boonhaii/pe | https://api.github.com/repos/boonhaii/pe | opened | Missing Explanation on Subject Field | type.DocumentationBug severity.Low | In the User Guide under the section on "Find", there is a field for "subject", but this has not been explained to the user.

<!--session: 1636703502312-e3cace0f-1f57-4831-bf69-d128c435aae1-->
<!--Version: Web v3.4.1--> | 1.0 | Missing Explanation on Subject Field - In the User Guide under the section on "Find", there is a field for "subject", but this has not been explained to the user.

<!--session: 1636703502312-e3cace0f-1f57-4831-bf69-d128c435aae1-->
<!--Version: Web v3.4.1--> | non_test | missing explanation on subject field in the user guide under the section on find there is a field for subject but this has not been explained to the user | 0 |
82,165 | 7,822,064,394 | IssuesEvent | 2018-06-14 00:05:42 | istio/istio | https://api.github.com/repos/istio/istio | closed | Vendor from the gometalinter.v2 build image for stable linter tests | area/test and release kind/test failure | Our linter tests fail due to gometalinter.v2 download timeout:
+ go get -u gopkg.in/alecthomas/gometalinter.v2
package gopkg.in/alecthomas/gometalinter.v2: unrecognized import path "gopkg.in/alecthomas/gometalinter.v2" (https fetch: Get https://gopkg.in/alecthomas/gometalinter.v2?go-get=1: dial tcp 45.33.37.13:443: i/o timeout)
Makefile:269: recipe for target 'lint' failed
make: *** [lint] Error 1
https://circleci.com/gh/istio/istio/36525?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
We should vendor the gometalinter.v2 build images (or via other similar ways) to make sure our linter tests are stable. | 2.0 | Vendor from the gometalinter.v2 build image for stable linter tests - Our linter tests fail due to gometalinter.v2 download timeout:
+ go get -u gopkg.in/alecthomas/gometalinter.v2
package gopkg.in/alecthomas/gometalinter.v2: unrecognized import path "gopkg.in/alecthomas/gometalinter.v2" (https fetch: Get https://gopkg.in/alecthomas/gometalinter.v2?go-get=1: dial tcp 45.33.37.13:443: i/o timeout)
Makefile:269: recipe for target 'lint' failed
make: *** [lint] Error 1
https://circleci.com/gh/istio/istio/36525?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
We should vendor the gometalinter.v2 build images (or via other similar ways) to make sure our linter tests are stable. | test | vendor from the gometalinter build image for stable linter tests our linter tests fail due to gometalinter download timeout go get u gopkg in alecthomas gometalinter package gopkg in alecthomas gometalinter unrecognized import path gopkg in alecthomas gometalinter https fetch get dial tcp i o timeout makefile recipe for target lint failed make error we should vendor the gometalinter build images or via other similar ways to make sure our linter tests are stable | 1 |
229,239 | 18,286,668,365 | IssuesEvent | 2021-10-05 11:04:49 | DILCISBoard/eark-ip-test-corpus | https://api.github.com/repos/DILCISBoard/eark-ip-test-corpus | closed | CSIP111 Test Case Description | test case | **Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP111
- **Link:** http://earkcsip.dilcis.eu/#CSIP111
**Error Level:** ERROR
**Description:**
CSIP111 | Type of link structMap/div/div/mptr/@xlink:type | Attribute used with the value “simple”. Value list is maintained by the xlink standard | 1..1 MUST
-- | -- | -- | --
| 1.0 | CSIP111 Test Case Description - **Specification:**
- **Name:** E-ARK CSIP
- **Version:** 2.0-DRAFT
- **URL:** http://earkcsip.dilcis.eu/
**Requirement:**
- **Id:** CSIP111
- **Link:** http://earkcsip.dilcis.eu/#CSIP111
**Error Level:** ERROR
**Description:**
CSIP111 | Type of link structMap/div/div/mptr/@xlink:type | Attribute used with the value “simple”. Value list is maintained by the xlink standard | 1..1 MUST
-- | -- | -- | --
| test | test case description specification name e ark csip version draft url requirement id link error level error description type of link structmap div div mptr xlink type attribute used with the value “simple” value list is maintained by the xlink standard must | 1 |
209,287 | 16,013,965,272 | IssuesEvent | 2021-04-20 14:01:09 | Hamlib/Hamlib | https://api.github.com/repos/Hamlib/Hamlib | closed | IC-756 split mode stays on VFOB after | JTDX WSJTX bug critical needs test | [2021-04-14 23:42:31.869233][00:00:19.557463][RIGCTRL:trace] #: 2 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 0Hz} Mode: 3; SPLIT: off; PTT: off)
[2021-04-14 23:42:31.869233][00:00:19.557492][RIGCTRL:trace] #: 3 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074500Hz} Mode: 3; SPLIT: on; PTT: off)
[2021-04-14 23:42:31.869233][00:00:19.557518][RIGCTRL:trace] txf: 7074500 reversed: false
[2021-04-14 23:42:31.869233][00:00:19.557533][RIGCTRL:trace] RX VFO=Main TX VFO=Sub
[2021-04-14 23:42:31.869233][00:00:19.557552][RIGCTRL:trace] rig_set_split_vfo split=1
[2021-04-14 23:42:31.869233][00:00:19.557590][RIGCTRL:debug] rig.c(4227):rig_set_split_vfo entered
[2021-04-14 23:42:31.869233][00:00:19.557614][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.869233][00:00:19.557634][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.869233][00:00:19.557665][RIGCTRL:debug] icom_set_split_vfo called vfo='currVFO', split=1, tx_vfo=Sub, curr_vfo=Main
[2021-04-14 23:42:31.869233][00:00:19.557688][RIGCTRL:trace] icom_set_split_vfo: vfo clause 4
[2021-04-14 23:42:31.869233][00:00:19.557709][RIGCTRL:trace] icom_set_split_vfo: set_vfo because tx_vfo=Sub
[2021-04-14 23:42:31.869233][00:00:19.557731][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.869233][00:00:19.557757][RIGCTRL:debug] icom_transaction: cmd=0x0f, subcmd=0x01, payload_len=0
[2021-04-14 23:42:31.869233][00:00:19.557780][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.869233][00:00:19.557803][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.869985][00:00:19.557827][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:31.869985][00:00:19.557848][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.869985][00:00:19.557869][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.869985][00:00:19.557886][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.869985][00:00:19.557937][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.869985][00:00:19.557955][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.869985][00:00:19.558066][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:31.869985][00:00:19.558089][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:31.869985][00:00:19.558108][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.869985][00:00:19.558125][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.870735][00:00:19.558931][SYSLOG:trace] #: 2 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074000Hz} Mode: 3; SPLIT: unknown; PTT: off)
[2021-04-14 23:42:31.890999][00:00:19.579220][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:31.890999][00:00:19.579259][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:31.890999][00:00:19.579284][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:31.890999][00:00:19.579305][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.890999][00:00:19.579325][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.890999][00:00:19.579571][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.891748][00:00:19.579596][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:31.891748][00:00:19.579619][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.891748][00:00:19.579645][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:31.891748][00:00:19.579668][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.891748][00:00:19.579691][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:31.891748][00:00:19.579731][RIGCTRL:debug] icom_set_split_vfo: vfo=Main curr_vfo=Main rx_vfo=Main tx_vfo=Sub split=1
[2021-04-14 23:42:31.891748][00:00:19.579754][RIGCTRL:debug] icom.c(5345):icom_set_split_vfo return(0)
[2021-04-14 23:42:31.891748][00:00:19.579779][RIGCTRL:debug] rig.c(4256):rig_set_split_vfo return(0)
[2021-04-14 23:42:31.891748][00:00:19.579797][RIGCTRL:trace] rig_set_split_freq_mode freq=7074500 mode = USB
[2021-04-14 23:42:31.891748][00:00:19.579827][RIGCTRL:debug] rig.c(4071):rig_set_split_freq_mode entered
[2021-04-14 23:42:31.891748][00:00:19.579860][RIGCTRL:debug] rig_set_split_freq_mode: vfo=VFOB, tx_freq=7074500, tx_mode=USB, tx_width=-1
[2021-04-14 23:42:31.891748][00:00:19.579894][RIGCTRL:debug] icom_set_split_freq_mode called vfo=VFOB
[2021-04-14 23:42:31.891748][00:00:19.579908][RIGCTRL:debug] icom_set_split_freq_mode: curr_vfo=Main
[2021-04-14 23:42:31.891748][00:00:19.579921][RIGCTRL:debug] icom_set_split_freq_mode: before get_split_vfos rx_vfo=Main tx_vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.579935][RIGCTRL:debug] icom_get_split_vfos called
[2021-04-14 23:42:31.891748][00:00:19.579948][RIGCTRL:trace] icom_get_split_vfos: VFO_HAS_MAIN_SUB_ONLY, split=1, rx=Main, tx=Sub
[2021-04-14 23:42:31.891748][00:00:19.579960][RIGCTRL:debug] icom.c(4201):icom_get_split_vfos return(0)
[2021-04-14 23:42:31.891748][00:00:19.579971][RIGCTRL:debug] icom_set_split_freq_mode: after get_split_vfos rx_vfo=Main tx_vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.579982][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:31.891748][00:00:19.579993][RIGCTRL:debug] rig_set_vfo called vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580004][RIGCTRL:trace] vfo_fixup: vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580015][RIGCTRL:debug] rig.c(4326):rig_get_split_vfo entered
[2021-04-14 23:42:31.891748][00:00:19.580026][RIGCTRL:debug] rig.c(4342):rig_get_split_vfo return(-11)
[2021-04-14 23:42:31.891748][00:00:19.580039][RIGCTRL:trace] vfo_fixup: RIG_VFO_TX changed to Sub, split=1, satmode=0
[2021-04-14 23:42:31.891748][00:00:19.580049][RIGCTRL:trace] vfo_fixup: final vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580060][RIGCTRL:debug] icom_set_vfo called vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580071][RIGCTRL:trace] icom_set_vfo: VFO changing from Main to Sub
[2021-04-14 23:42:31.891748][00:00:19.580085][RIGCTRL:trace] icom_set_vfo: line#2295
[2021-04-14 23:42:31.891748][00:00:19.580100][RIGCTRL:trace] icom_set_vfo: Sub asked for, ended up with vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580111][RIGCTRL:trace] icom_set_vfo: line#2448
[2021-04-14 23:42:31.891748][00:00:19.580122][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.891748][00:00:19.580135][RIGCTRL:debug] icom_transaction: cmd=0x07, subcmd=0xd1, payload_len=0
[2021-04-14 23:42:31.891748][00:00:19.580146][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.891748][00:00:19.580158][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.891748][00:00:19.580169][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:31.891748][00:00:19.580179][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.891748][00:00:19.580190][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.891748][00:00:19.580199][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.891748][00:00:19.580234][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.891748][00:00:19.580249][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.892499][00:00:19.580413][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:31.892499][00:00:19.580439][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:31.892499][00:00:19.580458][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.892499][00:00:19.580475][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.913515][00:00:19.601668][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:31.913515][00:00:19.601703][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:31.913515][00:00:19.601719][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:31.913515][00:00:19.601732][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.913515][00:00:19.601743][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.924536][00:00:19.612617][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.924536][00:00:19.612648][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:31.924536][00:00:19.612663][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.924536][00:00:19.612678][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:31.924536][00:00:19.612691][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.924536][00:00:19.612702][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
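Annotation: in the transaction above, the first `read_string` returns the exact 7 bytes just transmitted (`fe fe 50 e0 07 d1 fd`) and only the second read returns the rig's ACK (`fe fe e0 50 fb fd`). CI-V is a shared single-wire bus, so the controller hears its own transmission and must discard that echo before parsing the reply. A minimal sketch of that pattern (the helper name `read_reply` is hypothetical, not a Hamlib function):

```python
def read_reply(read_frame, sent_frame):
    """Read CI-V frames, discarding the bus echo of our own command.

    read_frame: callable returning one complete frame (bytes ending in 0xFD).
    sent_frame: the frame we just transmitted.
    Returns the rig's actual reply frame.
    """
    frame = read_frame()
    if frame == sent_frame:   # first frame is usually our own echo
        frame = read_frame()
    return frame
```

This matches the paired `read_icom_frame` calls seen throughout the log: one for the echo, one for the reply.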
[2021-04-14 23:42:31.924536][00:00:19.612713][RIGCTRL:trace] icom_set_vfo: line#2451
[2021-04-14 23:42:31.924536][00:00:19.612724][RIGCTRL:trace] icom_set_vfo: line#2474 curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612735][RIGCTRL:debug] icom.c(2475):icom_set_vfo return(0)
[2021-04-14 23:42:31.924536][00:00:19.612746][RIGCTRL:trace] rig_set_vfo: rig->state.current_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612758][RIGCTRL:debug] icom_get_freq called for Sub, curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612768][RIGCTRL:debug] icom_get_freq: using vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612779][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612789][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.924536][00:00:19.612800][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.924536][00:00:19.612811][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.924536][00:00:19.612828][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:31.924536][00:00:19.612839][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.924536][00:00:19.612850][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.924536][00:00:19.612861][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:31.924536][00:00:19.612871][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.924536][00:00:19.612882][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.924536][00:00:19.612892][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.924536][00:00:19.612922][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.924536][00:00:19.612933][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.924536][00:00:19.613023][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:31.924536][00:00:19.613043][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.924536][00:00:19.613058][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.924536][00:00:19.613069][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.946300][00:00:19.634381][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.946300][00:00:19.634414][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.946300][00:00:19.634429][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.946300][00:00:19.634443][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.946300][00:00:19.634454][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.956809][00:00:19.645119][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:31.956809][00:00:19.645160][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:31.956809][00:00:19.645184][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:31.956809][00:00:19.645208][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:31.956809][00:00:19.645228][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.956809][00:00:19.645247][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:31.956809][00:00:19.645267][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.956809][00:00:19.645285][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.956809][00:00:19.645304][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.645346][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:31.957562][00:00:19.645376][RIGCTRL:debug] icom_get_freq exit vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645397][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:31.957562][00:00:19.645428][RIGCTRL:trace] rig_set_vfo: retcode from rig_get_freq = Command completed successfully
[2021-04-14 23:42:31.957562][00:00:19.645455][RIGCTRL:trace] rig_set_vfo: return 0, vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645475][RIGCTRL:debug] rig.c(2568):rig_set_vfo return(0)
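Annotation: the `from_bcd` call above decodes the 5 payload bytes of the reply `fe fe e0 50 03 00 45 07 07 00 fd` into the Sub-VFO frequency. Icom packs the frequency as little-endian BCD, two digits per byte with the low nibble first. A sketch of the decode (assumed equivalent to Hamlib's `from_bcd`, not its actual code):

```python
def from_bcd_le(data):
    """Decode Icom little-endian packed BCD into an integer frequency (Hz).

    Each byte carries two decimal digits (low nibble = lower digit);
    the least-significant byte comes first on the wire.
    """
    freq = 0
    for byte in reversed(data):
        freq = freq * 100 + ((byte >> 4) * 10) + (byte & 0x0F)
    return freq

# Payload from the log: 00 45 07 07 00 -> 7074500 Hz (7.0745 MHz)
```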
[2021-04-14 23:42:31.957562][00:00:19.645504][RIGCTRL:debug] rig_set_freq called vfo=currVFO, freq=7074500
[2021-04-14 23:42:31.957562][00:00:19.645524][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645542][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.957562][00:00:19.645560][RIGCTRL:trace] rig_set_freq: TARGETABLE_FREQ vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645585][RIGCTRL:debug] icom_set_freq called currVFO=7074500.000000
[2021-04-14 23:42:31.957562][00:00:19.645605][RIGCTRL:trace] icom_set_freq: currVFO asked for so vfo set to Sub
[2021-04-14 23:42:31.957562][00:00:19.645622][RIGCTRL:trace] icom_set_freq: set_vfo_curr=Sub
[2021-04-14 23:42:31.957562][00:00:19.645639][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645656][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.957562][00:00:19.645675][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.645694][RIGCTRL:debug] rig_get_freq called vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645730][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645748][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.957562][00:00:19.645783][RIGCTRL:debug] rig.c(1512):rig_get_cache entered
[2021-04-14 23:42:31.957562][00:00:19.645803][RIGCTRL:trace] rig_get_cache: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645847][RIGCTRL:trace] rig_get_cache: vfo=Sub, freq=0
[2021-04-14 23:42:31.957562][00:00:19.645878][RIGCTRL:debug] rig.c(1621):rig_get_cache return(0)
[2021-04-14 23:42:31.957562][00:00:19.645897][RIGCTRL:trace] rig_get_freq: cache check1 age=26640ms
[2021-04-14 23:42:31.957562][00:00:19.645935][RIGCTRL:trace] rig_get_freq: cache miss age=26640ms, cached_vfo=currVFO, asked_vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645956][RIGCTRL:debug] icom_get_freq called for currVFO, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645975][RIGCTRL:debug] icom_get_freq: using vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645993][RIGCTRL:trace] set_vfo_curr: vfo=currVFO, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.646010][RIGCTRL:trace] set_vfo_curr: Asking for currVFO, currVFO=Sub
[2021-04-14 23:42:31.957562][00:00:19.646029][RIGCTRL:debug] icom.c(7776):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.646047][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.957562][00:00:19.646068][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:31.958309][00:00:19.646090][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.958309][00:00:19.646110][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.958309][00:00:19.646130][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:31.958309][00:00:19.646144][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.958309][00:00:19.646155][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.958309][00:00:19.646165][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.958309][00:00:19.646199][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.958309][00:00:19.646210][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.958309][00:00:19.646304][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:31.958309][00:00:19.646320][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.958309][00:00:19.646337][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.958309][00:00:19.646356][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.011304][00:00:19.699353][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.011304][00:00:19.699390][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.011304][00:00:19.699414][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.011304][00:00:19.699434][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.011304][00:00:19.699453][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.021813][00:00:19.710007][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.021813][00:00:19.710044][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:32.021813][00:00:19.710059][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.021813][00:00:19.710076][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:32.021813][00:00:19.710088][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.021813][00:00:19.710099][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.021813][00:00:19.710113][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710124][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.021813][00:00:19.710136][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.021813][00:00:19.710146][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:32.021813][00:00:19.710162][RIGCTRL:debug] icom_get_freq exit vfo=currVFO, curr_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710173][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:32.021813][00:00:19.710199][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.021813][00:00:19.710210][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710236][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.021813][00:00:19.710259][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.021813][00:00:19.710293][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.021813][00:00:19.710304][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.022563][00:00:19.710338][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.022563][00:00:19.710362][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.022563][00:00:19.710382][RIGCTRL:debug] rig.c(2065):rig_get_freq return(0)
[2021-04-14 23:42:32.022563][00:00:19.710393][RIGCTRL:debug] to_bcd called
[2021-04-14 23:42:32.022563][00:00:19.710407][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.022563][00:00:19.710421][RIGCTRL:debug] icom_transaction: cmd=0x05, subcmd=0xffffffff, payload_len=5
[2021-04-14 23:42:32.022563][00:00:19.710432][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.022563][00:00:19.710443][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.022563][00:00:19.710454][RIGCTRL:debug] frame.c(90):make_cmd_frame return(11)
[2021-04-14 23:42:32.022563][00:00:19.710465][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.022563][00:00:19.710476][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.022563][00:00:19.710487][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.022563][00:00:19.710522][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.022563][00:00:19.710533][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.022563][00:00:19.710655][RIGCTRL:trace] write_block(): TX 11 bytes
[2021-04-14 23:42:32.022563][00:00:19.710677][RIGCTRL:trace] 0000 fe fe 50 e0 05 00 45 07 07 00 fd ..P...E....
[2021-04-14 23:42:32.022563][00:00:19.710690][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.022563][00:00:19.710702][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.043580][00:00:19.731877][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.043580][00:00:19.731935][RIGCTRL:trace] 0000 fe fe 50 e0 05 00 45 07 07 00 fd ..P...E....
[2021-04-14 23:42:32.043580][00:00:19.731955][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.043580][00:00:19.731972][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.043580][00:00:19.731985][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.054833][00:00:19.742821][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.054833][00:00:19.742856][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.054833][00:00:19.742872][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.054833][00:00:19.742888][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.054833][00:00:19.742900][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.054833][00:00:19.742912][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.105713][00:00:19.793517][RIGCTRL:debug] icom.c(1133):icom_set_freq return(0)
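Annotation: the `icom_set_freq` transaction above transmits `fe fe 50 e0 05 00 45 07 07 00 fd` — preamble `fe fe`, rig address `0x50`, controller address `0xe0`, command `0x05` (set frequency), 5 BCD bytes, terminator `fd`. A sketch of how such a frame is assembled (helper names are hypothetical, mirroring Hamlib's `to_bcd`/`make_cmd_frame`):

```python
def to_bcd_le(freq_hz, num_bytes=5):
    """Encode an integer frequency (Hz) as Icom little-endian packed BCD."""
    out = bytearray()
    for _ in range(num_bytes):
        out.append((freq_hz % 10) | (((freq_hz // 10) % 10) << 4))
        freq_hz //= 100
    return bytes(out)

def make_set_freq_frame(rig_addr, ctrl_addr, freq_hz):
    """Build a CI-V 'set frequency' frame (cmd 0x05)."""
    return (bytes([0xFE, 0xFE, rig_addr, ctrl_addr, 0x05])
            + to_bcd_le(freq_hz)
            + bytes([0xFD]))
```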
[2021-04-14 23:42:32.105713][00:00:19.793561][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.105713][00:00:19.793584][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.793627][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=0
[2021-04-14 23:42:32.105713][00:00:19.793663][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793686][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.105713][00:00:19.793706][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.793746][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.105713][00:00:19.793785][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793808][RIGCTRL:debug] rig.c(1859):rig_set_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793847][RIGCTRL:debug] icom_set_mode called vfo=currVFO, mode=USB, width=-1
[2021-04-14 23:42:32.105713][00:00:19.793877][RIGCTRL:debug] frame.c(432):rig2icom_mode entered
[2021-04-14 23:42:32.105713][00:00:19.793898][RIGCTRL:trace] rig2icom_mode: mode=4, width=-1
[2021-04-14 23:42:32.105713][00:00:19.793916][RIGCTRL:trace] rig2icom_mode: width==RIG_PASSBAND_NOCHANGE
[2021-04-14 23:42:32.105713][00:00:19.793935][RIGCTRL:debug] rig.c(2207):rig_get_mode entered
[2021-04-14 23:42:32.105713][00:00:19.793972][RIGCTRL:debug] rig.c(1512):rig_get_cache entered
[2021-04-14 23:42:32.105713][00:00:19.793994][RIGCTRL:trace] rig_get_cache: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.794011][RIGCTRL:trace] rig_get_cache: vfo=Sub, freq=7074500
[2021-04-14 23:42:32.105713][00:00:19.794022][RIGCTRL:debug] rig.c(1621):rig_get_cache return(0)
[2021-04-14 23:42:32.105713][00:00:19.794033][RIGCTRL:trace] rig_get_mode: currVFO cache check age=0ms
[2021-04-14 23:42:32.105713][00:00:19.794055][RIGCTRL:trace] rig_get_mode: cache miss age mode=26788ms, width=26788ms
[2021-04-14 23:42:32.105713][00:00:19.794066][RIGCTRL:debug] icom_get_mode called vfo=currVFO
[2021-04-14 23:42:32.105713][00:00:19.794077][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.105713][00:00:19.794091][RIGCTRL:debug] icom_transaction: cmd=0x04, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:32.105713][00:00:19.794102][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.105713][00:00:19.794113][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.105713][00:00:19.794124][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:32.105713][00:00:19.794135][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.105713][00:00:19.794146][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.105713][00:00:19.794156][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.106466][00:00:19.794203][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.106466][00:00:19.794219][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.106466][00:00:19.794341][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:32.106466][00:00:19.794360][RIGCTRL:trace] 0000 fe fe 50 e0 04 fd ..P...
[2021-04-14 23:42:32.106466][00:00:19.794375][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.106466][00:00:19.794386][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.138235][00:00:19.826265][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.138235][00:00:19.826306][RIGCTRL:trace] 0000 fe fe 50 e0 04 fd ..P...
[2021-04-14 23:42:32.138235][00:00:19.826331][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.138235][00:00:19.826352][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.138235][00:00:19.826373][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.148744][00:00:19.836844][RIGCTRL:trace] read_string(): RX 8 characters
[2021-04-14 23:42:32.148744][00:00:19.836880][RIGCTRL:trace] 0000 fe fe e0 50 04 01 01 fd ...P....
[2021-04-14 23:42:32.148744][00:00:19.836895][RIGCTRL:debug] frame.c(409):read_icom_frame return(8)
[2021-04-14 23:42:32.148744][00:00:19.836913][RIGCTRL:trace] icom_one_transaction: frm_len=8, frm_len-1=fd, frm_len-2=01
[2021-04-14 23:42:32.148744][00:00:19.836926][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.148744][00:00:19.836937][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.148744][00:00:19.836954][RIGCTRL:trace] icom_get_mode: modebuf[0]=0x04, modebuf[1]=0x01, mode_len=2
[2021-04-14 23:42:32.148744][00:00:19.836965][RIGCTRL:debug] frame.c(565):icom2rig_mode entered
[2021-04-14 23:42:32.148744][00:00:19.836976][RIGCTRL:trace] icom2rig_mode: mode=0x01, pd=1
[2021-04-14 23:42:32.148744][00:00:19.836993][RIGCTRL:debug] rig.c(2429):rig_passband_wide entered
[2021-04-14 23:42:32.148744][00:00:19.837008][RIGCTRL:debug] rig.c(2453):rig_passband_wide return(0)
[2021-04-14 23:42:32.148744][00:00:19.837026][RIGCTRL:debug] rig.c(2331):rig_passband_normal entered
[2021-04-14 23:42:32.148744][00:00:19.837038][RIGCTRL:debug] rig_passband_normal: return filter#0, width=2400
[2021-04-14 23:42:32.148744][00:00:19.837049][RIGCTRL:debug] rig.c(2346):rig_passband_normal return(2400)
[2021-04-14 23:42:32.148744][00:00:19.837060][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.148744][00:00:19.837071][RIGCTRL:debug] rig_set_vfo called vfo=VFOB
[2021-04-14 23:42:32.148744][00:00:19.837083][RIGCTRL:error] rig_set_vfo: rig does not have VFOB
[2021-04-14 23:42:32.148744][00:00:19.837109][RIGCTRL:debug] rig.c(2497):rig_set_vfo return(-1)
[2021-04-14 23:42:32.148744][00:00:19.837126][RIGCTRL:debug] icom_get_dsp_flt called, mode=USB
[2021-04-14 23:42:32.148744][00:00:19.837139][RIGCTRL:trace] icom_get_mode(2142): vfosave=currVFO, currvfo=Sub
[2021-04-14 23:42:32.148744][00:00:19.837150][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.148744][00:00:19.837160][RIGCTRL:debug] rig_set_vfo called vfo=VFOA
[2021-04-14 23:42:32.148744][00:00:19.837170][RIGCTRL:error] rig_set_vfo: rig does not have VFOA
[2021-04-14 23:42:32.149493][00:00:19.837196][RIGCTRL:debug] rig.c(2497):rig_set_vfo return(-1)
[2021-04-14 23:42:32.149493][00:00:19.837211][RIGCTRL:trace] icom_get_mode: vfo=currVFO returning mode=USB, width=0
[2021-04-14 23:42:32.149493][00:00:19.837228][RIGCTRL:debug] icom.c(2157):icom_get_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837240][RIGCTRL:trace] rig_get_mode: retcode after get_mode=0
[2021-04-14 23:42:32.149493][00:00:19.837264][RIGCTRL:trace] rig_get_mode(2295): debug
[2021-04-14 23:42:32.149493][00:00:19.837285][RIGCTRL:trace] rig_get_mode(2303): debug
[2021-04-14 23:42:32.149493][00:00:19.837300][RIGCTRL:debug] rig.c(2331):rig_passband_normal entered
[2021-04-14 23:42:32.149493][00:00:19.837316][RIGCTRL:debug] rig_passband_normal: return filter#0, width=2400
[2021-04-14 23:42:32.149493][00:00:19.837327][RIGCTRL:debug] rig.c(2346):rig_passband_normal return(2400)
[2021-04-14 23:42:32.149493][00:00:19.837342][RIGCTRL:debug] rig.c(1345):set_cache_mode entered
[2021-04-14 23:42:32.149493][00:00:19.837377][RIGCTRL:debug] rig.c(1409):set_cache_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837400][RIGCTRL:debug] rig.c(2310):rig_get_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837412][RIGCTRL:debug] frame.c(556):rig2icom_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837424][RIGCTRL:debug] icom_set_mode: icmode=1, icmode_ext=-1
[2021-04-14 23:42:32.149493][00:00:19.837435][RIGCTRL:debug] icom_set_mode: #2 icmode=1, icmode_ext=-1
[2021-04-14 23:42:32.149493][00:00:19.837446][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.149493][00:00:19.837459][RIGCTRL:debug] icom_transaction: cmd=0x06, subcmd=0x01, payload_len=0
[2021-04-14 23:42:32.149493][00:00:19.837474][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.149493][00:00:19.837485][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.149493][00:00:19.837496][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.149493][00:00:19.837506][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.149493][00:00:19.837518][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.149493][00:00:19.837526][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.149493][00:00:19.837558][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.149493][00:00:19.837568][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.149493][00:00:19.837673][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.149493][00:00:19.837696][RIGCTRL:trace] 0000 fe fe 50 e0 06 01 fd ..P....
[2021-04-14 23:42:32.149493][00:00:19.837723][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.149493][00:00:19.837746][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.171257][00:00:19.859234][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.171257][00:00:19.859265][RIGCTRL:trace] 0000 fe fe 50 e0 06 01 fd ..P....
[2021-04-14 23:42:32.171257][00:00:19.859280][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.171257][00:00:19.859292][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.171257][00:00:19.859303][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.181766][00:00:19.869748][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.181766][00:00:19.869778][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.181766][00:00:19.869792][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.181766][00:00:19.869808][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.181766][00:00:19.869821][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.181766][00:00:19.869832][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.181766][00:00:19.869846][RIGCTRL:debug] icom.c(1870):icom_set_mode return(0)
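Annotation: the `icom_set_mode` transaction above sends `fe fe 50 e0 06 01 fd` — command `0x06` (set mode) with data byte `0x01`, which is USB in Icom's CI-V mode table. A sketch of the mapping and frame (the table below covers the common modes; check the rig's CI-V manual for model-specific entries):

```python
# Common Icom CI-V mode codes (assumed standard; verify per model).
ICOM_MODES = {
    0x00: "LSB", 0x01: "USB", 0x02: "AM",
    0x03: "CW",  0x04: "RTTY", 0x05: "FM",
}

def make_set_mode_frame(rig_addr, ctrl_addr, mode_byte):
    """Build a CI-V 'set mode' frame (cmd 0x06) without a filter byte."""
    return bytes([0xFE, 0xFE, rig_addr, ctrl_addr, 0x06, mode_byte, 0xFD])
```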
[2021-04-14 23:42:32.181766][00:00:19.869859][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.181766][00:00:19.869870][RIGCTRL:debug] rig_set_vfo called vfo=Main
[2021-04-14 23:42:32.181766][00:00:19.869881][RIGCTRL:trace] vfo_fixup: vfo=Main
[2021-04-14 23:42:32.181766][00:00:19.869892][RIGCTRL:debug] rig.c(4326):rig_get_split_vfo entered
[2021-04-14 23:42:32.181766][00:00:19.869903][RIGCTRL:debug] rig.c(4342):rig_get_split_vfo return(-11)
[2021-04-14 23:42:32.181766][00:00:19.869916][RIGCTRL:trace] vfo_fixup: RIG_VFO_TX changed to Sub, split=1, satmode=0
[2021-04-14 23:42:32.181766][00:00:19.869927][RIGCTRL:trace] vfo_fixup: final vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869938][RIGCTRL:debug] icom_set_vfo called vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869949][RIGCTRL:trace] icom_set_vfo: line#2295
[2021-04-14 23:42:32.181766][00:00:19.869960][RIGCTRL:trace] icom_set_vfo: Sub asked for, ended up with vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869971][RIGCTRL:trace] icom_set_vfo: line#2448
[2021-04-14 23:42:32.181766][00:00:19.869982][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.181766][00:00:19.869995][RIGCTRL:debug] icom_transaction: cmd=0x07, subcmd=0xd1, payload_len=0
[2021-04-14 23:42:32.181766][00:00:19.870006][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.181766][00:00:19.870017][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.181766][00:00:19.870027][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.181766][00:00:19.870038][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.181766][00:00:19.870049][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.181766][00:00:19.870059][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.181766][00:00:19.870090][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.181766][00:00:19.870100][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.182517][00:00:19.870227][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.182517][00:00:19.870247][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:32.182517][00:00:19.870261][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.182517][00:00:19.870273][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.203531][00:00:19.891526][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.203531][00:00:19.891556][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:32.203531][00:00:19.891571][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.203531][00:00:19.891583][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.203531][00:00:19.891594][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.214036][00:00:19.901986][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.214036][00:00:19.902017][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.214036][00:00:19.902031][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.214036][00:00:19.902048][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.214036][00:00:19.902060][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.214036][00:00:19.902072][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.214036][00:00:19.902088][RIGCTRL:trace] icom_set_vfo: line#2451
[2021-04-14 23:42:32.214036][00:00:19.902099][RIGCTRL:trace] icom_set_vfo: line#2474 curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902113][RIGCTRL:debug] icom.c(2475):icom_set_vfo return(0)
[2021-04-14 23:42:32.214036][00:00:19.902125][RIGCTRL:trace] rig_set_vfo: rig->state.current_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902136][RIGCTRL:debug] icom_get_freq called for Sub, curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902146][RIGCTRL:debug] icom_get_freq: using vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902158][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902168][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.214036][00:00:19.902179][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.214036][00:00:19.902190][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.214036][00:00:19.902204][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:32.214036][00:00:19.902215][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.214036][00:00:19.902227][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.214036][00:00:19.902238][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:32.214036][00:00:19.902249][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.214036][00:00:19.902261][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.214036][00:00:19.902270][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.214036][00:00:19.902302][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.214036][00:00:19.902312][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.214036][00:00:19.902420][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:32.214786][00:00:19.902444][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.214786][00:00:19.902464][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.214786][00:00:19.902475][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.267322][00:00:19.955285][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.267322][00:00:19.955329][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.267322][00:00:19.955354][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.267322][00:00:19.955376][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.267322][00:00:19.955395][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.277831][00:00:19.966044][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.277831][00:00:19.966087][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:32.277831][00:00:19.966112][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.277831][00:00:19.966136][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:32.277831][00:00:19.966157][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.277831][00:00:19.966177][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.277831][00:00:19.966197][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966223][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.278580][00:00:19.966243][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.278580][00:00:19.966261][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:32.278580][00:00:19.966279][RIGCTRL:debug] icom_get_freq exit vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966300][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:32.278580][00:00:19.966332][RIGCTRL:trace] rig_set_vfo: retcode from rig_get_freq = Command completed successfully
[2021-04-14 23:42:32.278580][00:00:19.966355][RIGCTRL:trace] rig_set_vfo: return 0, vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966373][RIGCTRL:debug] rig.c(2568):rig_set_vfo return(0)
[2021-04-14 23:42:32.278580][00:00:19.966406][RIGCTRL:debug] icom.c(4996):icom_set_split_freq_mode return(0)
[2021-04-14 23:42:32.278580][00:00:19.966425][RIGCTRL:debug] rig.c(4122):rig_set_split_freq_mode return(0)
[2021-04-14 23:42:32.278580][00:00:19.966434][RIGCTRL:trace] rig_set_split_vfo split=1
[2021-04-14 23:42:32.278580][00:00:19.966461][RIGCTRL:debug] rig.c(4227):rig_set_split_vfo entered
[2021-04-14 23:42:32.278580][00:00:19.966478][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:32.278580][00:00:19.966494][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:32.278580][00:00:19.966515][RIGCTRL:debug] icom_set_split_vfo called vfo='currVFO', split=1, tx_vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966532][RIGCTRL:trace] icom_set_split_vfo: vfo clause 4
[2021-04-14 23:42:32.278580][00:00:19.966549][RIGCTRL:trace] icom_set_split_vfo: set_vfo because tx_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966571][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.278580][00:00:19.966591][RIGCTRL:debug] icom_transaction: cmd=0x0f, subcmd=0x01, payload_len=0
[2021-04-14 23:42:32.278580][00:00:19.966608][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.278580][00:00:19.966626][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.278580][00:00:19.966645][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.278580][00:00:19.966662][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.278580][00:00:19.966680][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.278580][00:00:19.966694][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.278580][00:00:19.966744][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.278580][00:00:19.966762][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.278580][00:00:19.966882][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.278580][00:00:19.966911][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:32.278580][00:00:19.966934][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.279330][00:00:19.967006][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.289838][00:00:19.977815][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.289838][00:00:19.977857][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:32.289838][00:00:19.977881][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.289838][00:00:19.977901][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.289838][00:00:19.977919][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.311088][00:00:19.999002][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.311088][00:00:19.999033][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.311088][00:00:19.999048][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.311088][00:00:19.999064][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.311088][00:00:19.999076][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.311088][00:00:19.999087][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.311088][00:00:19.999104][RIGCTRL:debug] icom_set_split_vfo: vfo=Sub curr_vfo=Sub rx_vfo=Sub tx_vfo=Sub split=1
[2021-04-14 23:42:32.311088][00:00:19.999115][RIGCTRL:debug] icom.c(5345):icom_set_split_vfo return(0)
[2021-04-14 23:42:32.311088][00:00:19.999127][RIGCTRL:debug] rig.c(4256):rig_set_split_vfo return(0)
[2021-04-14 23:42:32.311088][00:00:19.999231][SYSLOG:trace] #: 3 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074500Hz} Mode: 3; SPLIT: on; PTT: off)
[2021-04-14 23:42:31.869233][00:00:19.557463][RIGCTRL:trace] #: 2 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 0Hz} Mode: 3; SPLIT: off; PTT: off)
[2021-04-14 23:42:31.869233][00:00:19.557492][RIGCTRL:trace] #: 3 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074500Hz} Mode: 3; SPLIT: on; PTT: off)
[2021-04-14 23:42:31.869233][00:00:19.557518][RIGCTRL:trace] txf: 7074500 reversed: false
[2021-04-14 23:42:31.869233][00:00:19.557533][RIGCTRL:trace] RX VFO=Main TX VFO=Sub
[2021-04-14 23:42:31.869233][00:00:19.557552][RIGCTRL:trace] rig_set_split_vfo split=1
[2021-04-14 23:42:31.869233][00:00:19.557590][RIGCTRL:debug] rig.c(4227):rig_set_split_vfo entered
[2021-04-14 23:42:31.869233][00:00:19.557614][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.869233][00:00:19.557634][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.869233][00:00:19.557665][RIGCTRL:debug] icom_set_split_vfo called vfo='currVFO', split=1, tx_vfo=Sub, curr_vfo=Main
[2021-04-14 23:42:31.869233][00:00:19.557688][RIGCTRL:trace] icom_set_split_vfo: vfo clause 4
[2021-04-14 23:42:31.869233][00:00:19.557709][RIGCTRL:trace] icom_set_split_vfo: set_vfo because tx_vfo=Sub
[2021-04-14 23:42:31.869233][00:00:19.557731][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.869233][00:00:19.557757][RIGCTRL:debug] icom_transaction: cmd=0x0f, subcmd=0x01, payload_len=0
[2021-04-14 23:42:31.869233][00:00:19.557780][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.869233][00:00:19.557803][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.869985][00:00:19.557827][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:31.869985][00:00:19.557848][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.869985][00:00:19.557869][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.869985][00:00:19.557886][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.869985][00:00:19.557937][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.869985][00:00:19.557955][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.869985][00:00:19.558066][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:31.869985][00:00:19.558089][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:31.869985][00:00:19.558108][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.869985][00:00:19.558125][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.870735][00:00:19.558931][SYSLOG:trace] #: 2 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074000Hz} Mode: 3; SPLIT: unknown; PTT: off)
[2021-04-14 23:42:31.890999][00:00:19.579220][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:31.890999][00:00:19.579259][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:31.890999][00:00:19.579284][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:31.890999][00:00:19.579305][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.890999][00:00:19.579325][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.890999][00:00:19.579571][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.891748][00:00:19.579596][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:31.891748][00:00:19.579619][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.891748][00:00:19.579645][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:31.891748][00:00:19.579668][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.891748][00:00:19.579691][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:31.891748][00:00:19.579731][RIGCTRL:debug] icom_set_split_vfo: vfo=Main curr_vfo=Main rx_vfo=Main tx_vfo=Sub split=1
[2021-04-14 23:42:31.891748][00:00:19.579754][RIGCTRL:debug] icom.c(5345):icom_set_split_vfo return(0)
[2021-04-14 23:42:31.891748][00:00:19.579779][RIGCTRL:debug] rig.c(4256):rig_set_split_vfo return(0)
[2021-04-14 23:42:31.891748][00:00:19.579797][RIGCTRL:trace] rig_set_split_freq_mode freq=7074500 mode = USB
[2021-04-14 23:42:31.891748][00:00:19.579827][RIGCTRL:debug] rig.c(4071):rig_set_split_freq_mode entered
[2021-04-14 23:42:31.891748][00:00:19.579860][RIGCTRL:debug] rig_set_split_freq_mode: vfo=VFOB, tx_freq=7074500, tx_mode=USB, tx_width=-1
[2021-04-14 23:42:31.891748][00:00:19.579894][RIGCTRL:debug] icom_set_split_freq_mode called vfo=VFOB
[2021-04-14 23:42:31.891748][00:00:19.579908][RIGCTRL:debug] icom_set_split_freq_mode: curr_vfo=Main
[2021-04-14 23:42:31.891748][00:00:19.579921][RIGCTRL:debug] icom_set_split_freq_mode: before get_split_vfos rx_vfo=Main tx_vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.579935][RIGCTRL:debug] icom_get_split_vfos called
[2021-04-14 23:42:31.891748][00:00:19.579948][RIGCTRL:trace] icom_get_split_vfos: VFO_HAS_MAIN_SUB_ONLY, split=1, rx=Main, tx=Sub
[2021-04-14 23:42:31.891748][00:00:19.579960][RIGCTRL:debug] icom.c(4201):icom_get_split_vfos return(0)
[2021-04-14 23:42:31.891748][00:00:19.579971][RIGCTRL:debug] icom_set_split_freq_mode: after get_split_vfos rx_vfo=Main tx_vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.579982][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:31.891748][00:00:19.579993][RIGCTRL:debug] rig_set_vfo called vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580004][RIGCTRL:trace] vfo_fixup: vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580015][RIGCTRL:debug] rig.c(4326):rig_get_split_vfo entered
[2021-04-14 23:42:31.891748][00:00:19.580026][RIGCTRL:debug] rig.c(4342):rig_get_split_vfo return(-11)
[2021-04-14 23:42:31.891748][00:00:19.580039][RIGCTRL:trace] vfo_fixup: RIG_VFO_TX changed to Sub, split=1, satmode=0
[2021-04-14 23:42:31.891748][00:00:19.580049][RIGCTRL:trace] vfo_fixup: final vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580060][RIGCTRL:debug] icom_set_vfo called vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580071][RIGCTRL:trace] icom_set_vfo: VFO changing from Main to Sub
[2021-04-14 23:42:31.891748][00:00:19.580085][RIGCTRL:trace] icom_set_vfo: line#2295
[2021-04-14 23:42:31.891748][00:00:19.580100][RIGCTRL:trace] icom_set_vfo: Sub asked for, ended up with vfo=Sub
[2021-04-14 23:42:31.891748][00:00:19.580111][RIGCTRL:trace] icom_set_vfo: line#2448
[2021-04-14 23:42:31.891748][00:00:19.580122][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.891748][00:00:19.580135][RIGCTRL:debug] icom_transaction: cmd=0x07, subcmd=0xd1, payload_len=0
[2021-04-14 23:42:31.891748][00:00:19.580146][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.891748][00:00:19.580158][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.891748][00:00:19.580169][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:31.891748][00:00:19.580179][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.891748][00:00:19.580190][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.891748][00:00:19.580199][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.891748][00:00:19.580234][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.891748][00:00:19.580249][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.892499][00:00:19.580413][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:31.892499][00:00:19.580439][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:31.892499][00:00:19.580458][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.892499][00:00:19.580475][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.913515][00:00:19.601668][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:31.913515][00:00:19.601703][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:31.913515][00:00:19.601719][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:31.913515][00:00:19.601732][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.913515][00:00:19.601743][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.924536][00:00:19.612617][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.924536][00:00:19.612648][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:31.924536][00:00:19.612663][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.924536][00:00:19.612678][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:31.924536][00:00:19.612691][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.924536][00:00:19.612702][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:31.924536][00:00:19.612713][RIGCTRL:trace] icom_set_vfo: line#2451
[2021-04-14 23:42:31.924536][00:00:19.612724][RIGCTRL:trace] icom_set_vfo: line#2474 curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612735][RIGCTRL:debug] icom.c(2475):icom_set_vfo return(0)
[2021-04-14 23:42:31.924536][00:00:19.612746][RIGCTRL:trace] rig_set_vfo: rig->state.current_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612758][RIGCTRL:debug] icom_get_freq called for Sub, curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612768][RIGCTRL:debug] icom_get_freq: using vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612779][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.924536][00:00:19.612789][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.924536][00:00:19.612800][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.924536][00:00:19.612811][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.924536][00:00:19.612828][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:31.924536][00:00:19.612839][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.924536][00:00:19.612850][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.924536][00:00:19.612861][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:31.924536][00:00:19.612871][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.924536][00:00:19.612882][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.924536][00:00:19.612892][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.924536][00:00:19.612922][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.924536][00:00:19.612933][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.924536][00:00:19.613023][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:31.924536][00:00:19.613043][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.924536][00:00:19.613058][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.924536][00:00:19.613069][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.946300][00:00:19.634381][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:31.946300][00:00:19.634414][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.946300][00:00:19.634429][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:31.946300][00:00:19.634443][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.946300][00:00:19.634454][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:31.956809][00:00:19.645119][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:31.956809][00:00:19.645160][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:31.956809][00:00:19.645184][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:31.956809][00:00:19.645208][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:31.956809][00:00:19.645228][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:31.956809][00:00:19.645247][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:31.956809][00:00:19.645267][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.956809][00:00:19.645285][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.956809][00:00:19.645304][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.645346][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:31.957562][00:00:19.645376][RIGCTRL:debug] icom_get_freq exit vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645397][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:31.957562][00:00:19.645428][RIGCTRL:trace] rig_set_vfo: retcode from rig_get_freq = Command completed successfully
[2021-04-14 23:42:31.957562][00:00:19.645455][RIGCTRL:trace] rig_set_vfo: return 0, vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645475][RIGCTRL:debug] rig.c(2568):rig_set_vfo return(0)
[2021-04-14 23:42:31.957562][00:00:19.645504][RIGCTRL:debug] rig_set_freq called vfo=currVFO, freq=7074500
[2021-04-14 23:42:31.957562][00:00:19.645524][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645542][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.957562][00:00:19.645560][RIGCTRL:trace] rig_set_freq: TARGETABLE_FREQ vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645585][RIGCTRL:debug] icom_set_freq called currVFO=7074500.000000
[2021-04-14 23:42:31.957562][00:00:19.645605][RIGCTRL:trace] icom_set_freq: currVFO asked for so vfo set to Sub
[2021-04-14 23:42:31.957562][00:00:19.645622][RIGCTRL:trace] icom_set_freq: set_vfo_curr=Sub
[2021-04-14 23:42:31.957562][00:00:19.645639][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645656][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:31.957562][00:00:19.645675][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.645694][RIGCTRL:debug] rig_get_freq called vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645730][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645748][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:31.957562][00:00:19.645783][RIGCTRL:debug] rig.c(1512):rig_get_cache entered
[2021-04-14 23:42:31.957562][00:00:19.645803][RIGCTRL:trace] rig_get_cache: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645847][RIGCTRL:trace] rig_get_cache: vfo=Sub, freq=0
[2021-04-14 23:42:31.957562][00:00:19.645878][RIGCTRL:debug] rig.c(1621):rig_get_cache return(0)
[2021-04-14 23:42:31.957562][00:00:19.645897][RIGCTRL:trace] rig_get_freq: cache check1 age=26640ms
[2021-04-14 23:42:31.957562][00:00:19.645935][RIGCTRL:trace] rig_get_freq: cache miss age=26640ms, cached_vfo=currVFO, asked_vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645956][RIGCTRL:debug] icom_get_freq called for currVFO, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.645975][RIGCTRL:debug] icom_get_freq: using vfo=currVFO
[2021-04-14 23:42:31.957562][00:00:19.645993][RIGCTRL:trace] set_vfo_curr: vfo=currVFO, curr_vfo=Sub
[2021-04-14 23:42:31.957562][00:00:19.646010][RIGCTRL:trace] set_vfo_curr: Asking for currVFO, currVFO=Sub
[2021-04-14 23:42:31.957562][00:00:19.646029][RIGCTRL:debug] icom.c(7776):set_vfo_curr return(0)
[2021-04-14 23:42:31.957562][00:00:19.646047][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:31.957562][00:00:19.646068][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:31.958309][00:00:19.646090][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:31.958309][00:00:19.646110][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:31.958309][00:00:19.646130][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:31.958309][00:00:19.646144][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:31.958309][00:00:19.646155][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:31.958309][00:00:19.646165][RIGCTRL:debug] tcflush
[2021-04-14 23:42:31.958309][00:00:19.646199][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:31.958309][00:00:19.646210][RIGCTRL:debug] write_block called
[2021-04-14 23:42:31.958309][00:00:19.646304][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:31.958309][00:00:19.646320][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:31.958309][00:00:19.646337][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:31.958309][00:00:19.646356][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.011304][00:00:19.699353][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.011304][00:00:19.699390][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.011304][00:00:19.699414][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.011304][00:00:19.699434][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.011304][00:00:19.699453][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.021813][00:00:19.710007][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.021813][00:00:19.710044][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:32.021813][00:00:19.710059][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.021813][00:00:19.710076][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:32.021813][00:00:19.710088][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.021813][00:00:19.710099][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.021813][00:00:19.710113][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710124][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.021813][00:00:19.710136][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.021813][00:00:19.710146][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:32.021813][00:00:19.710162][RIGCTRL:debug] icom_get_freq exit vfo=currVFO, curr_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710173][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:32.021813][00:00:19.710199][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.021813][00:00:19.710210][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.021813][00:00:19.710236][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.021813][00:00:19.710259][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.021813][00:00:19.710293][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.021813][00:00:19.710304][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.022563][00:00:19.710338][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.022563][00:00:19.710362][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.022563][00:00:19.710382][RIGCTRL:debug] rig.c(2065):rig_get_freq return(0)
[2021-04-14 23:42:32.022563][00:00:19.710393][RIGCTRL:debug] to_bcd called
[2021-04-14 23:42:32.022563][00:00:19.710407][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.022563][00:00:19.710421][RIGCTRL:debug] icom_transaction: cmd=0x05, subcmd=0xffffffff, payload_len=5
[2021-04-14 23:42:32.022563][00:00:19.710432][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.022563][00:00:19.710443][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.022563][00:00:19.710454][RIGCTRL:debug] frame.c(90):make_cmd_frame return(11)
[2021-04-14 23:42:32.022563][00:00:19.710465][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.022563][00:00:19.710476][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.022563][00:00:19.710487][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.022563][00:00:19.710522][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.022563][00:00:19.710533][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.022563][00:00:19.710655][RIGCTRL:trace] write_block(): TX 11 bytes
[2021-04-14 23:42:32.022563][00:00:19.710677][RIGCTRL:trace] 0000 fe fe 50 e0 05 00 45 07 07 00 fd ..P...E....
[2021-04-14 23:42:32.022563][00:00:19.710690][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.022563][00:00:19.710702][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.043580][00:00:19.731877][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.043580][00:00:19.731935][RIGCTRL:trace] 0000 fe fe 50 e0 05 00 45 07 07 00 fd ..P...E....
[2021-04-14 23:42:32.043580][00:00:19.731955][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.043580][00:00:19.731972][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.043580][00:00:19.731985][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.054833][00:00:19.742821][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.054833][00:00:19.742856][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.054833][00:00:19.742872][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.054833][00:00:19.742888][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.054833][00:00:19.742900][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.054833][00:00:19.742912][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.105713][00:00:19.793517][RIGCTRL:debug] icom.c(1133):icom_set_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793561][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.105713][00:00:19.793584][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.793627][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=0
[2021-04-14 23:42:32.105713][00:00:19.793663][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793686][RIGCTRL:debug] rig.c(1416):set_cache_freq entered
[2021-04-14 23:42:32.105713][00:00:19.793706][RIGCTRL:trace] set_cache_freq: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.793746][RIGCTRL:trace] set_cache_freq: set vfo=Sub to freq=7074500
[2021-04-14 23:42:32.105713][00:00:19.793785][RIGCTRL:debug] rig.c(1506):set_cache_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793808][RIGCTRL:debug] rig.c(1859):rig_set_freq return(0)
[2021-04-14 23:42:32.105713][00:00:19.793847][RIGCTRL:debug] icom_set_mode called vfo=currVFO, mode=USB, width=-1
[2021-04-14 23:42:32.105713][00:00:19.793877][RIGCTRL:debug] frame.c(432):rig2icom_mode entered
[2021-04-14 23:42:32.105713][00:00:19.793898][RIGCTRL:trace] rig2icom_mode: mode=4, width=-1
[2021-04-14 23:42:32.105713][00:00:19.793916][RIGCTRL:trace] rig2icom_mode: width==RIG_PASSBAND_NOCHANGE
[2021-04-14 23:42:32.105713][00:00:19.793935][RIGCTRL:debug] rig.c(2207):rig_get_mode entered
[2021-04-14 23:42:32.105713][00:00:19.793972][RIGCTRL:debug] rig.c(1512):rig_get_cache entered
[2021-04-14 23:42:32.105713][00:00:19.793994][RIGCTRL:trace] rig_get_cache: vfo=currVFO, current_vfo=Sub
[2021-04-14 23:42:32.105713][00:00:19.794011][RIGCTRL:trace] rig_get_cache: vfo=Sub, freq=7074500
[2021-04-14 23:42:32.105713][00:00:19.794022][RIGCTRL:debug] rig.c(1621):rig_get_cache return(0)
[2021-04-14 23:42:32.105713][00:00:19.794033][RIGCTRL:trace] rig_get_mode: currVFO cache check age=0ms
[2021-04-14 23:42:32.105713][00:00:19.794055][RIGCTRL:trace] rig_get_mode: cache miss age mode=26788ms, width=26788ms
[2021-04-14 23:42:32.105713][00:00:19.794066][RIGCTRL:debug] icom_get_mode called vfo=currVFO
[2021-04-14 23:42:32.105713][00:00:19.794077][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.105713][00:00:19.794091][RIGCTRL:debug] icom_transaction: cmd=0x04, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:32.105713][00:00:19.794102][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.105713][00:00:19.794113][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.105713][00:00:19.794124][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:32.105713][00:00:19.794135][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.105713][00:00:19.794146][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.105713][00:00:19.794156][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.106466][00:00:19.794203][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.106466][00:00:19.794219][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.106466][00:00:19.794341][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:32.106466][00:00:19.794360][RIGCTRL:trace] 0000 fe fe 50 e0 04 fd ..P...
[2021-04-14 23:42:32.106466][00:00:19.794375][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.106466][00:00:19.794386][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.138235][00:00:19.826265][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.138235][00:00:19.826306][RIGCTRL:trace] 0000 fe fe 50 e0 04 fd ..P...
[2021-04-14 23:42:32.138235][00:00:19.826331][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.138235][00:00:19.826352][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.138235][00:00:19.826373][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.148744][00:00:19.836844][RIGCTRL:trace] read_string(): RX 8 characters
[2021-04-14 23:42:32.148744][00:00:19.836880][RIGCTRL:trace] 0000 fe fe e0 50 04 01 01 fd ...P....
[2021-04-14 23:42:32.148744][00:00:19.836895][RIGCTRL:debug] frame.c(409):read_icom_frame return(8)
[2021-04-14 23:42:32.148744][00:00:19.836913][RIGCTRL:trace] icom_one_transaction: frm_len=8, frm_len-1=fd, frm_len-2=01
[2021-04-14 23:42:32.148744][00:00:19.836926][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.148744][00:00:19.836937][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.148744][00:00:19.836954][RIGCTRL:trace] icom_get_mode: modebuf[0]=0x04, modebuf[1]=0x01, mode_len=2
[2021-04-14 23:42:32.148744][00:00:19.836965][RIGCTRL:debug] frame.c(565):icom2rig_mode entered
[2021-04-14 23:42:32.148744][00:00:19.836976][RIGCTRL:trace] icom2rig_mode: mode=0x01, pd=1
[2021-04-14 23:42:32.148744][00:00:19.836993][RIGCTRL:debug] rig.c(2429):rig_passband_wide entered
[2021-04-14 23:42:32.148744][00:00:19.837008][RIGCTRL:debug] rig.c(2453):rig_passband_wide return(0)
[2021-04-14 23:42:32.148744][00:00:19.837026][RIGCTRL:debug] rig.c(2331):rig_passband_normal entered
[2021-04-14 23:42:32.148744][00:00:19.837038][RIGCTRL:debug] rig_passband_normal: return filter#0, width=2400
[2021-04-14 23:42:32.148744][00:00:19.837049][RIGCTRL:debug] rig.c(2346):rig_passband_normal return(2400)
[2021-04-14 23:42:32.148744][00:00:19.837060][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.148744][00:00:19.837071][RIGCTRL:debug] rig_set_vfo called vfo=VFOB
[2021-04-14 23:42:32.148744][00:00:19.837083][RIGCTRL:error] rig_set_vfo: rig does not have VFOB
[2021-04-14 23:42:32.148744][00:00:19.837109][RIGCTRL:debug] rig.c(2497):rig_set_vfo return(-1)
[2021-04-14 23:42:32.148744][00:00:19.837126][RIGCTRL:debug] icom_get_dsp_flt called, mode=USB
[2021-04-14 23:42:32.148744][00:00:19.837139][RIGCTRL:trace] icom_get_mode(2142): vfosave=currVFO, currvfo=Sub
[2021-04-14 23:42:32.148744][00:00:19.837150][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.148744][00:00:19.837160][RIGCTRL:debug] rig_set_vfo called vfo=VFOA
[2021-04-14 23:42:32.148744][00:00:19.837170][RIGCTRL:error] rig_set_vfo: rig does not have VFOA
[2021-04-14 23:42:32.149493][00:00:19.837196][RIGCTRL:debug] rig.c(2497):rig_set_vfo return(-1)
[2021-04-14 23:42:32.149493][00:00:19.837211][RIGCTRL:trace] icom_get_mode: vfo=currVFO returning mode=USB, width=0
[2021-04-14 23:42:32.149493][00:00:19.837228][RIGCTRL:debug] icom.c(2157):icom_get_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837240][RIGCTRL:trace] rig_get_mode: retcode after get_mode=0
[2021-04-14 23:42:32.149493][00:00:19.837264][RIGCTRL:trace] rig_get_mode(2295): debug
[2021-04-14 23:42:32.149493][00:00:19.837285][RIGCTRL:trace] rig_get_mode(2303): debug
[2021-04-14 23:42:32.149493][00:00:19.837300][RIGCTRL:debug] rig.c(2331):rig_passband_normal entered
[2021-04-14 23:42:32.149493][00:00:19.837316][RIGCTRL:debug] rig_passband_normal: return filter#0, width=2400
[2021-04-14 23:42:32.149493][00:00:19.837327][RIGCTRL:debug] rig.c(2346):rig_passband_normal return(2400)
[2021-04-14 23:42:32.149493][00:00:19.837342][RIGCTRL:debug] rig.c(1345):set_cache_mode entered
[2021-04-14 23:42:32.149493][00:00:19.837377][RIGCTRL:debug] rig.c(1409):set_cache_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837400][RIGCTRL:debug] rig.c(2310):rig_get_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837412][RIGCTRL:debug] frame.c(556):rig2icom_mode return(0)
[2021-04-14 23:42:32.149493][00:00:19.837424][RIGCTRL:debug] icom_set_mode: icmode=1, icmode_ext=-1
[2021-04-14 23:42:32.149493][00:00:19.837435][RIGCTRL:debug] icom_set_mode: #2 icmode=1, icmode_ext=-1
[2021-04-14 23:42:32.149493][00:00:19.837446][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.149493][00:00:19.837459][RIGCTRL:debug] icom_transaction: cmd=0x06, subcmd=0x01, payload_len=0
[2021-04-14 23:42:32.149493][00:00:19.837474][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.149493][00:00:19.837485][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.149493][00:00:19.837496][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.149493][00:00:19.837506][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.149493][00:00:19.837518][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.149493][00:00:19.837526][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.149493][00:00:19.837558][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.149493][00:00:19.837568][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.149493][00:00:19.837673][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.149493][00:00:19.837696][RIGCTRL:trace] 0000 fe fe 50 e0 06 01 fd ..P....
[2021-04-14 23:42:32.149493][00:00:19.837723][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.149493][00:00:19.837746][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.171257][00:00:19.859234][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.171257][00:00:19.859265][RIGCTRL:trace] 0000 fe fe 50 e0 06 01 fd ..P....
[2021-04-14 23:42:32.171257][00:00:19.859280][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.171257][00:00:19.859292][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.171257][00:00:19.859303][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.181766][00:00:19.869748][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.181766][00:00:19.869778][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.181766][00:00:19.869792][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.181766][00:00:19.869808][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.181766][00:00:19.869821][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.181766][00:00:19.869832][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.181766][00:00:19.869846][RIGCTRL:debug] icom.c(1870):icom_set_mode return(0)
[2021-04-14 23:42:32.181766][00:00:19.869859][RIGCTRL:debug] rig.c(2482):rig_set_vfo entered
[2021-04-14 23:42:32.181766][00:00:19.869870][RIGCTRL:debug] rig_set_vfo called vfo=Main
[2021-04-14 23:42:32.181766][00:00:19.869881][RIGCTRL:trace] vfo_fixup: vfo=Main
[2021-04-14 23:42:32.181766][00:00:19.869892][RIGCTRL:debug] rig.c(4326):rig_get_split_vfo entered
[2021-04-14 23:42:32.181766][00:00:19.869903][RIGCTRL:debug] rig.c(4342):rig_get_split_vfo return(-11)
[2021-04-14 23:42:32.181766][00:00:19.869916][RIGCTRL:trace] vfo_fixup: RIG_VFO_TX changed to Sub, split=1, satmode=0
[2021-04-14 23:42:32.181766][00:00:19.869927][RIGCTRL:trace] vfo_fixup: final vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869938][RIGCTRL:debug] icom_set_vfo called vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869949][RIGCTRL:trace] icom_set_vfo: line#2295
[2021-04-14 23:42:32.181766][00:00:19.869960][RIGCTRL:trace] icom_set_vfo: Sub asked for, ended up with vfo=Sub
[2021-04-14 23:42:32.181766][00:00:19.869971][RIGCTRL:trace] icom_set_vfo: line#2448
[2021-04-14 23:42:32.181766][00:00:19.869982][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.181766][00:00:19.869995][RIGCTRL:debug] icom_transaction: cmd=0x07, subcmd=0xd1, payload_len=0
[2021-04-14 23:42:32.181766][00:00:19.870006][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.181766][00:00:19.870017][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.181766][00:00:19.870027][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.181766][00:00:19.870038][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.181766][00:00:19.870049][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.181766][00:00:19.870059][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.181766][00:00:19.870090][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.181766][00:00:19.870100][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.182517][00:00:19.870227][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.182517][00:00:19.870247][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:32.182517][00:00:19.870261][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.182517][00:00:19.870273][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.203531][00:00:19.891526][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.203531][00:00:19.891556][RIGCTRL:trace] 0000 fe fe 50 e0 07 d1 fd ..P....
[2021-04-14 23:42:32.203531][00:00:19.891571][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.203531][00:00:19.891583][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.203531][00:00:19.891594][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.214036][00:00:19.901986][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.214036][00:00:19.902017][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.214036][00:00:19.902031][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.214036][00:00:19.902048][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.214036][00:00:19.902060][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.214036][00:00:19.902072][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.214036][00:00:19.902088][RIGCTRL:trace] icom_set_vfo: line#2451
[2021-04-14 23:42:32.214036][00:00:19.902099][RIGCTRL:trace] icom_set_vfo: line#2474 curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902113][RIGCTRL:debug] icom.c(2475):icom_set_vfo return(0)
[2021-04-14 23:42:32.214036][00:00:19.902125][RIGCTRL:trace] rig_set_vfo: rig->state.current_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902136][RIGCTRL:debug] icom_get_freq called for Sub, curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902146][RIGCTRL:debug] icom_get_freq: using vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902158][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.214036][00:00:19.902168][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.214036][00:00:19.902179][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.214036][00:00:19.902190][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.214036][00:00:19.902204][RIGCTRL:debug] icom_transaction: cmd=0x03, subcmd=0xffffffff, payload_len=0
[2021-04-14 23:42:32.214036][00:00:19.902215][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.214036][00:00:19.902227][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.214036][00:00:19.902238][RIGCTRL:debug] frame.c(90):make_cmd_frame return(6)
[2021-04-14 23:42:32.214036][00:00:19.902249][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.214036][00:00:19.902261][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.214036][00:00:19.902270][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.214036][00:00:19.902302][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.214036][00:00:19.902312][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.214036][00:00:19.902420][RIGCTRL:trace] write_block(): TX 6 bytes
[2021-04-14 23:42:32.214786][00:00:19.902444][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.214786][00:00:19.902464][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.214786][00:00:19.902475][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.267322][00:00:19.955285][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.267322][00:00:19.955329][RIGCTRL:trace] 0000 fe fe 50 e0 03 fd ..P...
[2021-04-14 23:42:32.267322][00:00:19.955354][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.267322][00:00:19.955376][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.267322][00:00:19.955395][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.277831][00:00:19.966044][RIGCTRL:trace] read_string(): RX 11 characters
[2021-04-14 23:42:32.277831][00:00:19.966087][RIGCTRL:trace] 0000 fe fe e0 50 03 00 45 07 07 00 fd ...P..E....
[2021-04-14 23:42:32.277831][00:00:19.966112][RIGCTRL:debug] frame.c(409):read_icom_frame return(11)
[2021-04-14 23:42:32.277831][00:00:19.966136][RIGCTRL:trace] icom_one_transaction: frm_len=11, frm_len-1=fd, frm_len-2=00
[2021-04-14 23:42:32.277831][00:00:19.966157][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.277831][00:00:19.966177][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.277831][00:00:19.966197][RIGCTRL:trace] set_vfo_curr: vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966223][RIGCTRL:trace] set_vfo_curr: curr_vfo now=Sub
[2021-04-14 23:42:32.278580][00:00:19.966243][RIGCTRL:debug] icom.c(7828):set_vfo_curr return(0)
[2021-04-14 23:42:32.278580][00:00:19.966261][RIGCTRL:debug] from_bcd called
[2021-04-14 23:42:32.278580][00:00:19.966279][RIGCTRL:debug] icom_get_freq exit vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966300][RIGCTRL:debug] icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:32.278580][00:00:19.966332][RIGCTRL:trace] rig_set_vfo: retcode from rig_get_freq = Command completed successfully
icom_get_freq exit vfo=Sub, curr_vfo=Sub
icom.c(1388):icom_get_freq return(0)
rig_set_vfo: retcode from rig_get_freq = Command completed successfully
icom_get_freq exit vfo=Sub, curr_vfo=Sub
icom.c(1388):icom_get_freq return(0)
icom.c(1388):icom_get_freq return(0)
[2021-04-14 23:42:32.278580][00:00:19.966355][RIGCTRL:trace] rig_set_vfo: return 0, vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966373][RIGCTRL:debug] rig.c(2568):rig_set_vfo return(0)
[2021-04-14 23:42:32.278580][00:00:19.966406][RIGCTRL:debug] icom.c(4996):icom_set_split_freq_mode return(0)
[2021-04-14 23:42:32.278580][00:00:19.966425][RIGCTRL:debug] rig.c(4122):rig_set_split_freq_mode return(0)
[2021-04-14 23:42:32.278580][00:00:19.966434][RIGCTRL:trace] rig_set_split_vfo split=1
[2021-04-14 23:42:32.278580][00:00:19.966461][RIGCTRL:debug] rig.c(4227):rig_set_split_vfo entered
[2021-04-14 23:42:32.278580][00:00:19.966478][RIGCTRL:trace] vfo_fixup: vfo=currVFO
[2021-04-14 23:42:32.278580][00:00:19.966494][RIGCTRL:trace] vfo_fixup: Leaving currVFO alone
[2021-04-14 23:42:32.278580][00:00:19.966515][RIGCTRL:debug] icom_set_split_vfo called vfo='currVFO', split=1, tx_vfo=Sub, curr_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966532][RIGCTRL:trace] icom_set_split_vfo: vfo clause 4
[2021-04-14 23:42:32.278580][00:00:19.966549][RIGCTRL:trace] icom_set_split_vfo: set_vfo because tx_vfo=Sub
[2021-04-14 23:42:32.278580][00:00:19.966571][RIGCTRL:debug] frame.c(325):icom_transaction entered
[2021-04-14 23:42:32.278580][00:00:19.966591][RIGCTRL:debug] icom_transaction: cmd=0x0f, subcmd=0x01, payload_len=0
[2021-04-14 23:42:32.278580][00:00:19.966608][RIGCTRL:debug] frame.c(119):icom_one_transaction entered
[2021-04-14 23:42:32.278580][00:00:19.966626][RIGCTRL:debug] frame.c(56):make_cmd_frame entered
[2021-04-14 23:42:32.278580][00:00:19.966645][RIGCTRL:debug] frame.c(90):make_cmd_frame return(7)
[2021-04-14 23:42:32.278580][00:00:19.966662][RIGCTRL:trace] rig_flush: called for serial device
[2021-04-14 23:42:32.278580][00:00:19.966680][RIGCTRL:debug] serial.c(629):serial_flush entered
[2021-04-14 23:42:32.278580][00:00:19.966694][RIGCTRL:debug] tcflush
[2021-04-14 23:42:32.278580][00:00:19.966744][RIGCTRL:debug] serial.c(661):serial_flush return(0)
[2021-04-14 23:42:32.278580][00:00:19.966762][RIGCTRL:debug] write_block called
[2021-04-14 23:42:32.278580][00:00:19.966882][RIGCTRL:trace] write_block(): TX 7 bytes
[2021-04-14 23:42:32.278580][00:00:19.966911][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:32.278580][00:00:19.966934][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.279330][00:00:19.967006][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.289838][00:00:19.977815][RIGCTRL:trace] read_string(): RX 7 characters
[2021-04-14 23:42:32.289838][00:00:19.977857][RIGCTRL:trace] 0000 fe fe 50 e0 0f 01 fd ..P....
[2021-04-14 23:42:32.289838][00:00:19.977881][RIGCTRL:debug] frame.c(409):read_icom_frame return(7)
[2021-04-14 23:42:32.289838][00:00:19.977901][RIGCTRL:debug] frame.c(375):read_icom_frame entered
[2021-04-14 23:42:32.289838][00:00:19.977919][RIGCTRL:trace] read_string called, rxmax=80
[2021-04-14 23:42:32.311088][00:00:19.999002][RIGCTRL:trace] read_string(): RX 6 characters
[2021-04-14 23:42:32.311088][00:00:19.999033][RIGCTRL:trace] 0000 fe fe e0 50 fb fd ...P..
[2021-04-14 23:42:32.311088][00:00:19.999048][RIGCTRL:debug] frame.c(409):read_icom_frame return(6)
[2021-04-14 23:42:32.311088][00:00:19.999064][RIGCTRL:trace] icom_one_transaction: frm_len=6, frm_len-1=fd, frm_len-2=fb
[2021-04-14 23:42:32.311088][00:00:19.999076][RIGCTRL:debug] frame.c(303):icom_one_transaction return(0)
[2021-04-14 23:42:32.311088][00:00:19.999087][RIGCTRL:debug] frame.c(355):icom_transaction return(0)
[2021-04-14 23:42:32.311088][00:00:19.999104][RIGCTRL:debug] icom_set_split_vfo: vfo=Sub curr_vfo=Sub rx_vfo=Sub tx_vfo=Sub split=1
[2021-04-14 23:42:32.311088][00:00:19.999115][RIGCTRL:debug] icom.c(5345):icom_set_split_vfo return(0)
[2021-04-14 23:42:32.311088][00:00:19.999127][RIGCTRL:debug] rig.c(4256):rig_set_split_vfo return(0)
[2021-04-14 23:42:32.311088][00:00:19.999231][SYSLOG:trace] #: 3 Transceiver::TransceiverState(online: yes Frequency {7074000Hz, 7074500Hz} Mode: 3; SPLIT: on; PTT: off)
65,848 | 14,761,952,465 | IssuesEvent | 2021-01-09 01:07:54 | billmcchesney1/pacbot | https://api.github.com/repos/billmcchesney1/pacbot | opened | CVE-2020-36187 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2020-36187 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.8.7.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.6.7.2.jar</b></summary>
<p>
<details><summary><b>jackson-databind-2.9.6.jar</b></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/api/pacman-api-config/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/jobs/pacman-cloud-notifications/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/jobs/azure-discovery/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/commons/pac-batch-commons/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-1.11.636.jar (Root Library)
- aws-java-sdk-core-1.11.636.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","isTransitiveDependency":true,"dependencyTree":"org.springframework.cloud:spring-cloud-starter-config:2.0.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.7","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.2","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.636;com.amazonaws:aws-java-sdk-core:1.11.636;com.fasterxml.jackson.core:jackson-databind:2.6.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36187","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to 
org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-36187 (Medium) detected in multiple libraries - ## CVE-2020-36187 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.8.7.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/api/pacman-api-config/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/jobs/pacman-cloud-notifications/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.7.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/jobs/azure-discovery/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: pacbot/commons/pac-batch-commons/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-1.11.636.jar (Root Library)
- aws-java-sdk-core-1.11.636.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187>CVE-2020-36187</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2997">https://github.com/FasterXML/jackson-databind/issues/2997</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","isTransitiveDependency":true,"dependencyTree":"org.springframework.cloud:spring-cloud-starter-config:2.0.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.7","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.2","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk:1.11.636;com.amazonaws:aws-java-sdk-core:1.11.636;com.fasterxml.jackson.core:jackson-databind:2.6.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36187","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to 
org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36187","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pacbot api pacman api config pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter config release jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pacbot jobs pacman cloud notifications pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar 
canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pacbot jobs azure discovery pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pacbot commons pac batch commons pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy aws java sdk jar root library aws java sdk core jar x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp datasources sharedpooldatasource 
vulnerabilityurl | 0 |
321,009 | 27,498,928,324 | IssuesEvent | 2023-03-05 13:23:07 | CakeWP/block-options | https://api.github.com/repos/CakeWP/block-options | opened | Mobile site broken after last update | wordpress-support needs-testing |
## Support
**Version:** 1.34.5
After this latest update, the main body of our mobile site is missing. The only part of the page that renders (incompletely) is the widgets area.
Disabling this plugin brings back the body of the page. I’m not seeing any rendering issue on the desktop site.
Is there anything obvious I can check or try to remedy this ?
Thanks!
The page I need help with: _\[[log in](https://login.wordpress.org/?redirect_to=https%3A%2F%2Fwordpress.org%2Fsupport%2Ftopic%2Fmobile-site-broken-after-last-update%2F&locale=en_US) to see the link\]_
## Details
- **Support Author**: akovia
- **Support Link**: https://wordpress.org/support/topic/mobile-site-broken-after-last-update/
- **Latest Activity**: 30 minutes ago
- **Spinup Sandbox Site**: https://tastewp.com/new/?pre-installed-plugin-slug=block-options
**Note:** This support issue is created automatically via GitHub action. | 1.0 | Mobile site broken after last update -
## Support
**Version:** 1.34.5
After this latest update, the main body of our mobile site is missing. The only part of the page that renders (incompletely) is the widgets area.
Disabling this plugin brings back the body of the page. I’m not seeing any rendering issue on the desktop site.
Is there anything obvious I can check or try to remedy this?
Thanks!
The page I need help with: _\[[log in](https://login.wordpress.org/?redirect_to=https%3A%2F%2Fwordpress.org%2Fsupport%2Ftopic%2Fmobile-site-broken-after-last-update%2F&locale=en_US) to see the link\]_
## Details
- **Support Author**: akovia
- **Support Link**: https://wordpress.org/support/topic/mobile-site-broken-after-last-update/
- **Latest Activity**: 30 minutes ago
- **Spinup Sandbox Site**: https://tastewp.com/new/?pre-installed-plugin-slug=block-options
**Note:** This support issue is created automatically via GitHub action. | test | mobile site broken after last update support version after this latest update the main body of our mobile site is missing the only part of the page that renders incompletely is the widgets area disabling this plugin brings back the body of the page i’m not seeing any rendering issue on the desktop site is there anything obvious i can check or try to remedy this thanks the page i need help with to see the link details support author akovia support link latest activity minutes ago spinup sandbox site note this support issue is created automatically via github action | 1 |
80,986 | 7,762,778,894 | IssuesEvent | 2018-06-01 14:32:52 | lintol/lintol-frontend | https://api.github.com/repos/lintol/lintol-frontend | closed | Reports - long file name pushing separator line | 0.11 ready for test | If the resource of a report has a very long file name, it is pushing the separator line out of alignment with the other rows. Probably should have some padding too.

 | 1.0 | Reports - long file name pushing separator line - If the resource of a report has a very long file name, it is pushing the separator line out of alignment with the other rows. Probably should have some padding too.

| test | reports long file name pushing separator line if the resource of a report has a very long file name it is pushing the separator line out of alignment with the other rows probably should have some padding too | 1 |
86,503 | 15,755,671,708 | IssuesEvent | 2021-03-31 02:11:33 | SmartBear/ready-msazure-plugin | https://api.github.com/repos/SmartBear/ready-msazure-plugin | opened | CVE-2019-17359 (High) detected in bcprov-jdk15-1.44.jar, bcprov-jdk15-1.45.jar | security vulnerability | ## CVE-2019-17359 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bcprov-jdk15-1.44.jar</b>, <b>bcprov-jdk15-1.45.jar</b></p></summary>
<p>
<details><summary><b>bcprov-jdk15-1.44.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/bouncycastle/bcprov-jdk15/144/bcprov-jdk15-144.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- :x: **bcprov-jdk15-1.44.jar** (Vulnerable Library)
</details>
<details><summary><b>bcprov-jdk15-1.45.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15/1.45/bcprov-jdk15-1.45.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- vt-password-3.1.2.jar
- vt-crypt-2.1.4.jar
- :x: **bcprov-jdk15-1.45.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.
<p>Publish Date: 2019-10-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359>CVE-2019-17359</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359</a></p>
<p>Release Date: 2019-10-08</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.64</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15","packageVersion":"1.44","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;org.bouncycastle:bcprov-jdk15:1.44","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"},{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15","packageVersion":"1.45","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;edu.vt.middleware:vt-password:3.1.2;edu.vt.middleware:vt-crypt:2.1.4;org.bouncycastle:bcprov-jdk15:1.45","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17359","vulnerabilityDetails":"The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-17359 (High) detected in bcprov-jdk15-1.44.jar, bcprov-jdk15-1.45.jar - ## CVE-2019-17359 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bcprov-jdk15-1.44.jar</b>, <b>bcprov-jdk15-1.45.jar</b></p></summary>
<p>
<details><summary><b>bcprov-jdk15-1.44.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/bouncycastle/bcprov-jdk15/144/bcprov-jdk15-144.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- :x: **bcprov-jdk15-1.44.jar** (Vulnerable Library)
</details>
<details><summary><b>bcprov-jdk15-1.45.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: ready-msazure-plugin/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15/1.45/bcprov-jdk15-1.45.jar</p>
<p>
Dependency Hierarchy:
- ready-api-soapui-pro-1.3.0.jar (Root Library)
- ready-api-soapui-1.3.0.jar
- vt-password-3.1.2.jar
- vt-crypt-2.1.4.jar
- :x: **bcprov-jdk15-1.45.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.
<p>Publish Date: 2019-10-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359>CVE-2019-17359</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359</a></p>
<p>Release Date: 2019-10-08</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.64</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15","packageVersion":"1.44","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;org.bouncycastle:bcprov-jdk15:1.44","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"},{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15","packageVersion":"1.45","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.smartbear:ready-api-soapui-pro:1.3.0;com.smartbear:ready-api-soapui:1.3.0;edu.vt.middleware:vt-password:3.1.2;edu.vt.middleware:vt-crypt:2.1.4;org.bouncycastle:bcprov-jdk15:1.45","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-17359","vulnerabilityDetails":"The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. 
This is fixed in 1.64.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in bcprov jar bcprov jar cve high severity vulnerability vulnerable libraries bcprov jar bcprov jar bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk library home page a href path to dependency file ready msazure plugin pom xml path to vulnerable library home wss scanner repository bouncycastle bcprov bcprov jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar x bcprov jar vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk library home page a href path to dependency file ready msazure plugin pom xml path to vulnerable library home wss scanner repository org bouncycastle bcprov bcprov jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar vt password jar vt crypt jar x bcprov jar vulnerable library found in base branch master vulnerability details the asn parser in bouncy castle crypto aka bc java can trigger a large attempted memory allocation and resultant outofmemoryerror error via crafted asn data this is fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version 
origin a href release date fix resolution org bouncycastle bcprov isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com smartbear ready api soapui pro com smartbear ready api soapui org bouncycastle bcprov isminimumfixversionavailable true minimumfixversion org bouncycastle bcprov packagetype java groupid org bouncycastle packagename bcprov packageversion packagefilepaths istransitivedependency true dependencytree com smartbear ready api soapui pro com smartbear ready api soapui edu vt middleware vt password edu vt middleware vt crypt org bouncycastle bcprov isminimumfixversionavailable true minimumfixversion org bouncycastle bcprov basebranches vulnerabilityidentifier cve vulnerabilitydetails the asn parser in bouncy castle crypto aka bc java can trigger a large attempted memory allocation and resultant outofmemoryerror error via crafted asn data this is fixed in vulnerabilityurl | 0 |
69,928 | 7,166,247,751 | IssuesEvent | 2018-01-29 16:40:25 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | dts board configuration is incompatible with "build all" kind of test | area: Boards area: Configuration System area: Device Tree area: Testing area: Testing Suite | As identified in #5698, "build all" kind of test conflicts with deployment of dts.
Indeed, when a SW element is converted to DTS, config pieces of Kconfig files are bypassed with a
HAS_DTS_XXX flag, for instance:
```
config HTS221_NAME
    string
    prompt "Driver name"
    default "HTS221"
    depends on HTS221 && !HAS_DTS_I2C_DEVICE
```
As a consequence, whenever the SW element is activated but not configured in dts (like in "build all" tests), the default configuration is no longer available from the Kconfig file and the build fails
Indeed, when a SW element is converted to DTS, config pieces of Kconfig files are bypassed with a
HAS_DTS_XXX flag, for instance:
```
config HTS221_NAME
    string
    prompt "Driver name"
    default "HTS221"
    depends on HTS221 && !HAS_DTS_I2C_DEVICE
```
As a consequence, whenever the SW element is activated but not configured in dts (like in "build all" tests), the default configuration is no longer available from the Kconfig file and the build fails | test | dts board configuration is incompatible with build all kind of test as identified in build all kind of test conflicts with deployment of dts indeed when a sw element is converted to dts config pieces of kconfig files are by passed with a has dts xxx flag for instance config name string prompt driver name default depends on has dts device as a consequence whenever the sw element is activated but not configured in dts like in build all tests default configuration is no more available form kconfig file and build fails | 1
41,249 | 5,345,349,449 | IssuesEvent | 2017-02-17 16:44:40 | TheScienceMuseum/collectionsonline | https://api.github.com/repos/TheScienceMuseum/collectionsonline | closed | Absolutely vs relative paths for image locations in JSON? | help wanted please-test priority-2 T1h | We seem to have switched to using absolute paths for the images, which is fine (_I'm guessing the carousel expects a full path and we didn't want to fork the code?_).
But just wanted to check:
- `"location_is_relative": true` is set even when we have an absolute path (see below) - if we're modifying the location in our JSON/API we should also update the `"location_is_relative"` to reflect this or else it confuses consumers of our API. (**or even just remove it**).
- Is this consistent for all images? I previously noticed a mix of both, although now I obviously can't find an example with a relative path.
http://collection.sciencemuseum.org.uk/objects/co64127/phillips-economic-computer-analog-computer
**In our JSON API**
```
"processed": {
"large": {
"location": "http://smgco-images.s3.amazonaws.com/media/W/P/A/large_1995_0210__0001_.jpg",
"location_is_relative": true,
```
**In the index**
```
"processed": {
"large": {
"location": "W/P/A/large_1995_0210__0001_.jpg",
"location_is_relative": true,
```
| 1.0 | Absolutely vs relative paths for image locations in JSON? - We seem to have switched to using absolute paths for the images, which is fine (_I'm guessing the carousel expects a full path and we didn't want to fork the code?_).
But just wanted to check:
- `"location_is_relative": true` is set even when we have an absolute path (see below) - if we're modifying the location in our JSON/API we should also update the `"location_is_relative"` to reflect this or else it confuses consumers of our API. (**or even just remove it**).
- Is this consistent for all images? I previously noticed a mix of both, although now I obviously can't find an example with a relative path.
http://collection.sciencemuseum.org.uk/objects/co64127/phillips-economic-computer-analog-computer
**In our JSON API**
```
"processed": {
"large": {
"location": "http://smgco-images.s3.amazonaws.com/media/W/P/A/large_1995_0210__0001_.jpg",
"location_is_relative": true,
```
**In the index**
```
"processed": {
"large": {
"location": "W/P/A/large_1995_0210__0001_.jpg",
"location_is_relative": true,
```
| test | absolutely vs relative paths for image locations in json we seem to have switched to using absolute paths for the images which is fine i m guessing the carousel expects a full path and we didn t want to fork the code but just wanted to check location is relative true is set even when we have an absolute path see below if we re modifying the location in our json api we should also update the location is relative to reflect this or else it confuses consumers of our api or even just remove it is this consistent for all images i previously noticed a mix of both although now i obviously can t find an example with a relative path in our json api processed large location location is relative true in the index processed large location w p a large jpg location is relative true | 1 |
105,653 | 9,099,166,009 | IssuesEvent | 2019-02-20 03:09:19 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Etcd snapshot container is not deployed after upgrade | kind/bug-qa priority/0 status/resolved status/to-test team/az version/2.0 | rancher/rancher:v2.1.2-rc9
Etcd snapshot container is not deployed after upgrading to v2.1.2-rc9. Rancher server logs do not show `rke up` being triggered for RKE clusters.
To reproduce:
* Deploy Rancher server v2.1.1
* Provision an RKE cluster (in this case, used Azure)
* Upgrade to v2.1.2-rc9
* Etcd snapshots are enabled in cluster yaml / UI
* ssh to etcd node
`docker ps` on etcd node shows no snapshot container
`/opt/rke/etcd-snapshots` does not exist
Expected:
* `etcd-rolling-snapshots` container to be running
* `/opt/rke/etcd-snapshots` exists | 1.0 | Etcd snapshot container is not deployed after upgrade - rancher/rancher:v2.1.2-rc9
Etcd snapshot container is not deployed after upgrading to v2.1.2-rc9. Rancher server logs do not show `rke up` being triggered for RKE clusters.
To reproduce:
* Deploy Rancher server v2.1.1
* Provision an RKE cluster (in this case, used Azure)
* Upgrade to v2.1.2-rc9
* Etcd snapshots are enabled in cluster yaml / UI
* ssh to etcd node
`docker ps` on etcd node shows no snapshot container
`/opt/rke/etcd-snapshots` does not exist
Expected:
* `etcd-rolling-snapshots` container to be running
* `/opt/rke/etcd-snapshots` exists | test | etcd snapshot container is not deployed after upgrade rancher rancher etcd snapshot container is not deployed after upgrading to rancher server logs do not show rke up being triggered for rke clusters to reproduce deploy rancher server provision an rke cluster in this case used azure upgrade to etcd snapshots are enabled in cluster yaml ui ssh to etcd node docker ps on etcd node shows no snapshot container opt rke etcd snapshots does not exist expected etcd rolling snapshots container to be running opt rke etcd snapshots exists | 1 |
222,114 | 7,428,352,425 | IssuesEvent | 2018-03-24 00:39:15 | jsonwebtoken/jsonwebtoken.github.io | https://api.github.com/repos/jsonwebtoken/jsonwebtoken.github.io | closed | jwt.io does not function in China | enhancement low-priority stage-3 | 1. Loads Facebook SDK / connect - causes some console errors here
2. Loads jQuery via googleapis.com CDN, which is also blocked and prevents the entire site functionality from working
<img width="1091" alt="screen shot 2017-06-16 at 09 33 41" src="https://user-images.githubusercontent.com/269860/27208409-12f86c48-5278-11e7-9a5f-74b3037f054f.png">
| 1.0 | jwt.io does not function in China - 1. Loads Facebook SDK / connect - causes some console errors here
2. Loads jQuery via googleapis.com CDN, which is also blocked and prevents the entire site functionality from working
<img width="1091" alt="screen shot 2017-06-16 at 09 33 41" src="https://user-images.githubusercontent.com/269860/27208409-12f86c48-5278-11e7-9a5f-74b3037f054f.png">
| non_test | jwt io does not function in china loads facebook sdk connect causes some console errors here loads jquery via googleapis com cdn which is also blocked and prevents the entire site functionality from working img width alt screen shot at src | 0 |
226,571 | 18,040,758,160 | IssuesEvent | 2021-09-18 02:26:39 | Andrey-1992/spacetagram-challenge | https://api.github.com/repos/Andrey-1992/spacetagram-challenge | opened | Cypres Test Suites | App Test | As a developer I should test:
- Main view section
- Api cards section
- Card information
- Like button
- Unlike Button | 1.0 | Cypres Test Suites - As a developer I should test:
- Main view section
- Api cards section
- Card information
- Like button
- Unlike Button | test | cypres test suites as a developer i should test main view section api cards section card information like button unlike button | 1 |
756,157 | 26,459,831,864 | IssuesEvent | 2023-01-16 16:36:13 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | pfconnector: dynreverse will stay active if the remote fails to init | Type: Bug Priority: High | **Describe the bug**
If the remote port binding fails to create, the dynreverse will still be stored and kept active in the cache. Moreover, it will never be cleared due to inactivity since the inactivity timeout will never be called since the remote never started
**Expected behavior**
If the remote fails to start, then the dynreverse should be cleared
| 1.0 | pfconnector: dynreverse will stay active if the remote fails to init - **Describe the bug**
If the remote port binding fails to create, the dynreverse will still be stored and kept active in the cache. Moreover, it will never be cleared due to inactivity since the inactivity timeout will never be called since the remote never started
**Expected behavior**
If the remote fails to start, then the dynreverse should be cleared
| non_test | pfconnector dynreverse will stay active if the remote fails to init describe the bug if the remote port binding fails to create the dynreverse will still be stored and kept active in the cache moreover it will never be cleared due to inactivity since the inactivity timeout will never be called since the remote never started expected behavior if the remote fails to start then the dynreverse should be cleared | 0 |
196,599 | 6,935,569,351 | IssuesEvent | 2017-12-03 10:46:10 | johndeverall/BehaviourCoder | https://api.github.com/repos/johndeverall/BehaviourCoder | opened | Application does not run | Priority: High Type: Bug | **Replication Steps**
1. Download the application
2. Find the application in the download location
3. Double click on the application to run it
4. The following log information results (attached)
[behaviour-coder-2017-12-03_23-40-41.369.log](https://github.com/johndeverall/BehaviourCoder/files/1524668/behaviour-coder-2017-12-03_23-40-41.369.log)
| 1.0 | Application does not run - **Replication Steps**
1. Download the application
2. Find the application in the download location
3. Double click on the application to run it
4. The following log information results (attached)
[behaviour-coder-2017-12-03_23-40-41.369.log](https://github.com/johndeverall/BehaviourCoder/files/1524668/behaviour-coder-2017-12-03_23-40-41.369.log)
| non_test | application does not run replication steps download the application find the application in the download location double click on the application to run it the following log information results attached | 0 |
182,219 | 14,109,225,003 | IssuesEvent | 2020-11-06 19:11:51 | hannakim91/city-settlr | https://api.github.com/repos/hannakim91/city-settlr | opened | Test: unit & integration for checkmark buttons/compare button | priority: 1 🚨 type: test 🔮 | - buttons render
- check buttons can be clicked - change image if checkmarked
- compare button can be clicked and routes user to comparison view of 2 cities
- error if 2 aren't clicked before trying to compare
- error handling if more than 2 are clicked? | 1.0 | Test: unit & integration for checkmark buttons/compare button - - buttons render
- check buttons can be clicked - change image if checkmarked
- compare button can be clicked and routes user to comparison view of 2 cities
- error if 2 aren't clicked before trying to compare
- error handling if more than 2 are clicked? | test | test unit integration for checkmark buttons compare button buttons render check buttons can be clicked change image if checkmarked compare button can be clicked and routes user to comparison view of cities error if aren t clicked before trying to compare error handling if more than are clicked | 1 |
177,909 | 13,752,619,302 | IssuesEvent | 2020-10-06 14:43:26 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | closed | ISSN 1745-5308 | The Expository times | Rezensionen | Fehlerquelle: Translator Zotero_SEMI-AUTO ready for testing | https://journals.sagepub.com/toc/exta/132/1
https://journals.sagepub.com/doi/full/10.1177/0014524620944817
With the Zotkat workflow, the reviews are not tagged.
With the Zotaut workflow, they are.

| 1.0 | ISSN 1745-5308 | The Expository times | Rezensionen - https://journals.sagepub.com/toc/exta/132/1
https://journals.sagepub.com/doi/full/10.1177/0014524620944817
With the Zotkat workflow, the reviews are not tagged.
With the Zotaut workflow, they are.

| test | issn the expository times rezensionen im zotkat verfahren werden die rezensionen nicht getagt im zotaut verfahren schon | 1 |
338,141 | 30,282,884,483 | IssuesEvent | 2023-07-08 09:44:53 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Network policy specifies that access to a namespace failed | kind/support kind/failing-test needs-sig lifecycle/rotten needs-triage | ### Which jobs are failing?
When using network policy to specify address to access a namespace, all pods in the space where the rule is located cannot be accessed through name.namespace.svc.cluster.local.
Kubernetes Version: v1.23.8
Calico Version: v3.23.2
Yaml file:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-test
  namespace: test1
spec:
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test1
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  podSelector:
    matchLabels:
      app: ng1
```
### Which tests are failing?
Namespaces that are not specified for access are accessible. Yaml file:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-test
  namespace: test1
spec:
  egress:
    - {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test1
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  podSelector:
    matchLabels:
      app: ng1
```
### Since when has it been failing?
Failed to specify the namespace to access
### Testgrid link
_No response_
### Reason for failure (if possible)
I have tried to specify the IP address and podSelector, which can be accessed.
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig <network>
| 1.0 | Network policy specifies that access to a namespace failed - ### Which jobs are failing?
When using network policy to specify address to access a namespace, all pods in the space where the rule is located cannot be accessed through name.namespace.svc.cluster.local.
Kubernetes Version: v1.23.8
Calico Version: v3.23.2
Yaml file:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-test
  namespace: test1
spec:
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test1
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  podSelector:
    matchLabels:
      app: ng1
```
### Which tests are failing?
Namespaces that are not specified for access are accessible. Yaml file:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-test
  namespace: test1
spec:
  egress:
    - {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test1
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: test2
  podSelector:
    matchLabels:
      app: ng1
```
### Since when has it been failing?
Failed to specify the namespace to access
### Testgrid link
_No response_
### Reason for failure (if possible)
I have tried to specify the IP address and podSelector, which can be accessed.
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig <network>
| test | network policy specifies that access to a namespace failed which jobs are failing when using network policy to specify address to access a namespace all pods in the space where the rule is located cannot be accessed through name namespace svc cluster local kubernetes version calico version yaml file apiversion networking io kind networkpolicy metadata name allow test namespace spec egress to namespaceselector matchlabels kubernetes io metadata name ingress from namespaceselector matchlabels kubernetes io metadata name namespaceselector matchlabels kubernetes io metadata name podselector matchlabels app which tests are failing namespaces that are not specified for access are accessible yaml file: apiversion networking io kind networkpolicy metadata name allow test namespace spec egress ingress from namespaceselector matchlabels kubernetes io metadata name namespaceselector matchlabels kubernetes io metadata name podselector matchlabels app since when has it been failing failed to specify the namespace to access testgrid link no response reason for failure if possible i have tried to specify the ip address and podselector which can be accessed anything else we need to know no response relevant sig s sig | 1 |
431,693 | 30,246,812,781 | IssuesEvent | 2023-07-06 17:08:22 | project-chip/connectedhomeip | https://api.github.com/repos/project-chip/connectedhomeip | closed | [Documentation] Fix Dic Issue | documentation needs triage | ### Documentation issues
In docs\guides\esp32\README.md file, 'Parttiton' should be 'Partition'.
### Platform
_No response_
### Anything else?
_No response_ | 1.0 | [Documentation] Fix Dic Issue - ### Documentation issues
In docs\guides\esp32\README.md file, 'Parttiton' should be 'Partition'.
### Platform
_No response_
### Anything else?
_No response_ | non_test | fix dic issue documentation issues in docs guides readme md file parttiton should be partition platform no response anything else no response | 0 |
127,881 | 10,491,722,024 | IssuesEvent | 2019-09-25 11:48:46 | BEXIS2/Core | https://api.github.com/repos/BEXIS2/Core | closed | public features must be accessible even without registered users - Fail [D3] | Medium TestQuality Type: Bug Type:Bug bug resolution_Fixed | **Step 1 Pass**
Login in as a user
**Step 2 Fail**
open feature permissions page
**Step 3 Fail**
click checkbox next to the feature like "Search"
**Step 4 Fail**
log off user | 1.0 | public features must be accessible even without registered users - Fail [D3] - **Step 1 Pass**
Login in as a user
**Step 2 Fail**
open feature permissions page
**Step 3 Fail**
click checkbox next to the feature like "Search"
**Step 4 Fail**
log off user | test | public features must be accessible even without registered users fail step pass login in as a user step fail open feature permissions page step fail click checkbox next to the feature like search step fail log off user | 1 |
302,685 | 26,159,107,670 | IssuesEvent | 2022-12-31 07:48:34 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: backupTPCC failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.backupTPCC [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147258?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147258?buildTab=artifacts#/backupTPCC) on master @ [0725273ac7f789ba8ed78aacaf73cc953ca47fe8](https://github.com/cockroachdb/cockroach/commits/0725273ac7f789ba8ed78aacaf73cc953ca47fe8):
```
test artifacts and logs in: /artifacts/backupTPCC/run_1
(monitor.go:127).Wait: monitor failure: monitor task failed: pq: Use of BACKUP with incremental requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=true</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #94486 roachtest: backupTPCC failed [C-test-failure O-roachtest O-robot T-disaster-recovery branch-release-22.2 release-blocker]
- #94478 roachtest: backupTPCC failed [C-test-failure O-roachtest O-robot T-disaster-recovery branch-release-22.1 release-blocker]
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*backupTPCC.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: backupTPCC failed - roachtest.backupTPCC [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147258?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8147258?buildTab=artifacts#/backupTPCC) on master @ [0725273ac7f789ba8ed78aacaf73cc953ca47fe8](https://github.com/cockroachdb/cockroach/commits/0725273ac7f789ba8ed78aacaf73cc953ca47fe8):
```
test artifacts and logs in: /artifacts/backupTPCC/run_1
(monitor.go:127).Wait: monitor failure: monitor task failed: pq: Use of BACKUP with incremental requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=true</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #94486 roachtest: backupTPCC failed [C-test-failure O-roachtest O-robot T-disaster-recovery branch-release-22.2 release-blocker]
- #94478 roachtest: backupTPCC failed [C-test-failure O-roachtest O-robot T-disaster-recovery branch-release-22.1 release-blocker]
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*backupTPCC.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest backuptpcc failed roachtest backuptpcc with on master test artifacts and logs in artifacts backuptpcc run monitor go wait monitor failure monitor task failed pq use of backup with incremental requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out parameters roachtest cloud gce roachtest cpu roachtest encrypted true roachtest fs roachtest localssd true roachtest ssd help see see same failure on other branches roachtest backuptpcc failed roachtest backuptpcc failed cc cockroachdb disaster recovery | 1 |
158,617 | 12,420,591,790 | IssuesEvent | 2020-05-23 12:49:45 | nrwl/nx | https://api.github.com/repos/nrwl/nx | closed | Cannot run jest tests with `--maxWorkers=50%` option via @nrwl/jest:jest builder | community scope: testing tools type: feature | The `@nrwl/jest` builder should support both number and string inputs for `maxWorkers` config option. Passing `--maxWorkers=50%` to `ng test` or `nx affected:test` fails with schema validation, but the [Jest CLI Options documentation](https://jestjs.io/docs/en/cli#--maxworkersnumstring) specify that both a number and string are acceptable arguments.
## Expected Behavior
Both `ng test -- --maxWorkers=50%` and `nx affected:test -- --maxWorkers=50%` run tests and do not throw schema validation errors.
## Current Behavior
Both `ng test -- --maxWorkers=50%` and `nx affected:test -- --maxWorkers=50%` result in the following error:
```
Cannot parse arguments. See below for the reasons.
Argument --maxWorkers could not be parsed using value "50%".Valid type(s) is: number
```
## Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
### Steps to Reproduce
1. `npx create-nx-workspace@latest`
2. Choose a workspace that has tests out of the gate (I chose Angular)
3. Run any test using `"builder": "@nrwl/jest:jest"` and pass `-- --maxWorkers=50%` flag to the command.
### Context
The schema that is now out of date
https://github.com/nrwl/nx/blob/54d06f0fc95e60e55cad23dd28ba665f1592fdc6/packages/jest/src/builders/jest/schema.json#L56-L59
The original PR that implemented this functionality
https://github.com/nrwl/nx/pull/757
The latest jest docs for this functionality
https://jestjs.io/docs/en/cli#--maxworkersnumstring
Minimal reproduction repo
https://github.com/wrslatz/nx-playground/tree/e836200fe7484c6ac022066d13d534f5eb95808b | 1.0 | Cannot run jest tests with `--maxWorkers=50%` option via @nrwl/jest:jest builder - The `@nrwl/jest` builder should support both number and string inputs for `maxWorkers` config option. Passing `--maxWorkers=50%` to `ng test` or `nx affected:test` fails with schema validation, but the [Jest CLI Options documentation](https://jestjs.io/docs/en/cli#--maxworkersnumstring) specify that both a number and string are acceptable arguments.
## Expected Behavior
Both `ng test -- --maxWorkers=50%` and `nx affected:test -- --maxWorkers=50%` run tests and do not throw schema validation errors.
## Current Behavior
Both `ng test -- --maxWorkers=50%` and `nx affected:test -- --maxWorkers=50%` result in the following error:
```
Cannot parse arguments. See below for the reasons.
Argument --maxWorkers could not be parsed using value "50%".Valid type(s) is: number
```
## Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
### Steps to Reproduce
1. `npx create-nx-workspace@latest`
2. Choose a workspace that has tests out of the gate (I chose Angular)
3. Run any test using `"builder": "@nrwl/jest:jest"` and pass `-- --maxWorkers=50%` flag to the command.
### Context
The schema that is now out of date
https://github.com/nrwl/nx/blob/54d06f0fc95e60e55cad23dd28ba665f1592fdc6/packages/jest/src/builders/jest/schema.json#L56-L59
The original PR that implemented this functionality
https://github.com/nrwl/nx/pull/757
The latest jest docs for this functionality
https://jestjs.io/docs/en/cli#--maxworkersnumstring
Minimal reproduction repo
https://github.com/wrslatz/nx-playground/tree/e836200fe7484c6ac022066d13d534f5eb95808b | test | cannot run jest tests with maxworkers option via nrwl jest jest builder the nrwl jest builder should support both number and string inputs for maxworkers config option passing maxworkers to ng test or nx affected test fails with schema validation but the specify that both a number and string are acceptable arguments expected behavior both ng test maxworkers and nx affected test maxworkers run tests and do not throw schema validation errors current behavior both ng test maxworkers and nx affected test maxworkers result in the following error cannot parse arguments see below for the reasons argument maxworkers could not be parsed using value valid type s is number failure information for bugs please help provide information about the failure if this is a bug if it is not a bug please remove the rest of this template steps to reproduce npx create nx workspace latest choose a workspace that has tests out of the gate i chose angular run any test using builder nrwl jest jest and pass maxworkers flag to the command context the schema that is now out of date the original pr that implemented this functionality the latest jest docs for this functionality minimal reproduction repo | 1 |
243,667 | 20,513,311,500 | IssuesEvent | 2022-03-01 09:12:31 | SciencesPoDRIS/archelec4 | https://api.github.com/repos/SciencesPoDRIS/archelec4 | closed | Viz Pyramide des âges | améliorations Fait : à tester/valider | - [x] an age pyramid by gender (double vertical bar chart)
- [x] add a total breakdown by gender
- [x] move the config into the CSS
- [x] change the style of the total bars from fill color to border | 1.0 | Viz Pyramide des âges - - [x] an age pyramid by gender (double vertical bar chart)
- [x] add a total breakdown by gender
- [x] move the config into the CSS
- [x] change the style of the total bars from fill color to border | test | viz pyramide des âges une pyramide des âges par genre double vertical barchart ajouter une répartition total par genre passer la config dans le css changer le style des barres de total couleur en bordure | 1
797,220 | 28,141,083,160 | IssuesEvent | 2023-04-02 00:00:10 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | Segmentation fault on grpc timer thread (might be related to keepalive) | kind/bug lang/core priority/P2 disposition/requires reporter action untriaged | ### What version of gRPC and what language are you using?
grpc version: [v1.39.1](https://github.com/grpc/grpc/tree/v1.39.1)
language: C++
### What operating system (Linux, Windows,...) and version?
CentOS Linux release 7.9.2009
### What runtime / compiler are you using (e.g. python version or version of gcc)
g++ (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
### What did you do?
I think my problem can be reproduced when two conditions are roughly satisfied.
#### 1. grpc channel argument
- The scheme of the channel is `dns`.
- The load balancing policy of the channel is `pick_first`.
- Set up keepalive
- `GRPC_ARG_KEEPALIVE_TIME_MS`: 10s
- `GRPC_ARG_KEEPALIVE_TIMEOUT_MS`: 2s
#### 2. Usage pattern with grpc channel
- A process (my application) connects to around 480 targets.
- The process spawns around 800 threads. (My application and the targets run on different nodes.)
- My application creates one channel per target.
- 80 threads form a set; there are 10 sets.
- The threads in each set talk to the same 48 targets.
- Each thread has 48 channels.
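For scale, the arithmetic implied by the numbers above (assuming each thread owns its own channel objects):

```python
# Arithmetic only: the rough scale implied by the usage pattern above,
# assuming each thread owns its own channel objects.
sets = 10
threads_per_set = 80
channels_per_thread = 48

threads = sets * threads_per_set            # 800 threads in total
channels = threads * channels_per_thread    # 38,400 channel objects
targets = sets * channels_per_thread        # 480 distinct targets
print(threads, channels, targets)
```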
### What did you expect to see?
I expect my application to stay alive in the situation above.
### What did you see instead?
There are two observations worth noting.
#### 1. My application left the following logs before the crash.
```
E0112 11:48:58.804390268 24168 chttp2_transport.cc:2903] keepalive_ping_end state error: 0 (expect: 1)
E0112 11:51:19.605714132 86757 chttp2_transport.cc:2903] keepalive_ping_end state error: 2 (expect: 1)
```
When I checked the grpc code related to this log, it seems to be emitted on keepalive ping timeout.
https://github.com/grpc/grpc/blob/2d6b8f61cfdd1c4d2d7c1aae65a4fbf00e3e0981/src/core/ext/transport/chttp2/transport/chttp2_transport.cc#L2818-L2837
https://github.com/grpc/grpc/blob/2d6b8f61cfdd1c4d2d7c1aae65a4fbf00e3e0981/src/core/ext/transport/chttp2/transport/chttp2_transport.cc#L2885-L2908
#### 2. My application crashed due to a segfault as follows.
```
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `{ executable path }/gina-1.5.7-arch-centos7-dot2-npp7-l25-x8'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007facef5545e0 in grpc_core::kNoopRefcount () from { library path }/npp-7.4.0-20230111-expr_gina_sp_d36ae397-arch-centos7-x86_64-dts7/lib/libnpp.so.7
Missing separate debuginfos, use: debuginfo-install cyrus-sasl-lib-2.1.26-23.el7.x86_64 krb5_nhn-workstation-1.6.1-32.6.x86_64 libcom_err-1.42.9-19.el7.x86_64 libcurl-7.29.0-59.el7_9.1.x86_64 libicu-50.2-4.el7_7.x86_64 libidn-1.28-4.el7.x86_64 libselinux-2.5-15.el7.x86_64 libssh2-1.8.0-4.el7.x86_64 libxml2-2.9.1-6.el7.5.x86_64 nspr-4.25.0-2.el7_9.x86_64 nss-3.53.1-7.el7_9.x86_64 nss-util-3.53.1-1.el7_9.x86_64 openldap-2.4.44-23.el7_9.x86_64 openssl-libs-1.0.2k-21.el7_9.x86_64 pcre-8.32-17.el7.x86_64 snappy-1.1.0-3.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-19.el7_9.x86_64
(gdb) bt
#0 0x00007facef5545e0 in grpc_core::kNoopRefcount () from { library path }/npp-7.4.0-20230111-expr_gina_sp_d36ae397-arch-centos7-x86_64-dts7/lib/libnpp.so.7
#1 0x00007facee88b981 in exec_ctx_run (closure=<optimized out>, closure=<optimized out>, error=0x7faceee84be0) at /src/extern/grpc/src/core/lib/iomgr/exec_ctx.cc:43
#2 grpc_core::ExecCtx::Flush (this=0x7fa6f4cf40b0) at /src/extern/grpc/src/core/lib/iomgr/exec_ctx.cc:165
#3 0x00007facee89c549 in run_some_timers () at /src/extern/grpc/src/core/lib/iomgr/timer_manager.cc:134
#4 timer_main_loop () at /src/extern/grpc/src/core/lib/iomgr/timer_manager.cc:237
#5 timer_thread (completed_thread_ptr=0x7fac4b2e31d0) at /src/extern/grpc/src/core/lib/iomgr/timer_manager.cc:284
#6 0x00007faceeaa0062 in operator() (__closure=0x0, v=<optimized out>) at /src/extern/grpc/src/core/lib/gprpp/thd_posix.cc:140
#7 grpc_core::(anonymous namespace)::ThreadInternalsPosix::<lambda(void*)>::_FUN(void *) () at /src/extern/grpc/src/core/lib/gprpp/thd_posix.cc:145
#8 0x00007faceeaa0062 in grpc_core::Thread::Thread (this=<optimized out>, thd_name=<optimized out>, thd_body=<optimized out>, arg=<optimized out>, success=<optimized out>, options=...)
from { library path }/npp-7.4.0-20230111-expr_gina_sp_d36ae397-arch-centos7-x86_64-dts7/lib/libnpp.so.7
#9 0x00007facecab5ea5 in start_thread (arg=0x7fa6f4cf6700) at pthread_create.c:307
#10 0x00007facea68f9fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
```
I think the core dump is related to the keepalive timeout, because the segfault happens on the grpc timer thread.
### Anything else we should know about your project / environment?
#### hardware spec
- cpu: 40 physical cores, 80 logical cores
- mem: 192GB | 1.0 | non_test | 0
292,180 | 25,205,339,905 | IssuesEvent | 2022-11-13 16:06:20 | Test-Automation-Crash-Course-24-10-22/team_05 | https://api.github.com/repos/Test-Automation-Crash-Course-24-10-22/team_05 | opened | Adding a new product to an existing comparison list | TestCase | **Description:**
This test case verifies adding a new product to an existing comparison list.
**Pre-conditions:**
1. Open https://rozetka.com.ua/ua/ website and log in.
2. Add one product of the “Планшети” category to the comparison list.
3. Open the comparison list of “Планшети” category.
**Test steps:**
| Step | Test Data | Expected result |
| ------------- | ------------- | ------------- |
| Click the "Додати ще модель" button above the products | | Open catalog with goods of category "Планшети" |
| Select and click on any other product among those offered in “Планшети” category | | Product information page is open |
| Next to the "Купити в кредит" button, click on the icon with scales on the right | | The message "Товар додано до порівняння" is displayed at the bottom of the screen. A green checkmark appeared on the scale icon |
| Click the scales icon in the header on the right | | Open window "Список порівнянь" |
| Select the “Планшети” category | | A page is opened with a comparison of the characteristics of the previously selected goods and the newly added one | | 1.0 | test | 1
142,357 | 21,719,037,837 | IssuesEvent | 2022-05-10 21:10:12 | microsoft/azuredatastudio | https://api.github.com/repos/microsoft/azuredatastudio | closed | Table Designer - Can't change type of period columns after setting "Generate Always As" setting | Bug Triage: Done Area - Designer | 1. Add new column
2. In the column properties set "Generated Always As" to Row start or row end
Now the type field is made readonly. Is there a reason for this? I can make the column a non-period column again, change the type and then make it a period column just fine. | 1.0 | non_test | 0
338,985 | 30,334,165,847 | IssuesEvent | 2023-07-11 08:31:14 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix general_functions.test_tensorflow_norm | TensorFlow Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5516194185/jobs/10057284821"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5516194185/jobs/10057284821"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5516194185/jobs/10057284821"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5516194185/jobs/10057284821"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5516194185/jobs/10057284821"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | test | 1
314,902 | 9,604,273,829 | IssuesEvent | 2019-05-10 19:29:45 | dojot/dojot | https://api.github.com/repos/dojot/dojot | opened | Service becomes a zombie when a flow node is configured wrong | Priority:Critical Team:Backend Type:Bug | When a **change node** is configured wrong (a value is passed in place of an object), the persister becomes a zombie.
Although all services are UP, dojot is not operational: the persister doesn't receive messages.
- logs:
```
persister_1 | [10/05/19 - 18:10:10] |messenger| INFO: Emitting new event message for subject device-data@admin
persister_1 | [10/05/19 - 18:10:10] |persister| INFO: Received data: {'metadata': {'deviceid': 'a15de8', 'tenant': 'admin', 'timestamp': 1557511809435}, 'attrs': {'bool': True}}
persister_1 | [10/05/19 - 18:10:10] |persister| DEBUG: got data event b'{"metadata":{"deviceid":"a15de8","tenant":"admin","timestamp":1557511809435},"attrs":{"bool":true}}'
persister_1 | [10/05/19 - 18:10:11] |messenger| INFO: Emitting new event message for subject device-data@admin
persister_1 | [10/05/19 - 18:10:11] |persister| INFO: Received data: {'attrs': 'teste do nó device template', 'metadata': {'timestamp': 1557511810457, 'tenant': 'admin', 'deviceid': 'd9c5c0'}}
persister_1 | [10/05/19 - 18:10:11] |persister| DEBUG: got data event b'{"attrs":"teste do n\xc3\xb3 device template","metadata":{"timestamp":1557511810457,"tenant":"admin","deviceid":"d9c5c0"}}'
```
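The second persister log line above shows `attrs` arriving as a plain string instead of a dict, which matches the misconfigured change node. A hypothetical defensive guard (not dojot's actual persister code) could skip such events instead of wedging the consumer:

```python
# Hypothetical defensive guard (not dojot's actual persister code): the
# second log line above shows 'attrs' arriving as a plain string, which
# breaks any consumer that assumes attrs is a dict.

def extract_attrs(event):
    """Return the attrs dict, or {} when attrs has an unexpected type."""
    attrs = event.get("attrs")
    if not isinstance(attrs, dict):
        # Skip the malformed event instead of raising in the consume loop.
        return {}
    return attrs

good = {"attrs": {"bool": True}, "metadata": {"deviceid": "a15de8"}}
bad = {"attrs": "teste do nó device template", "metadata": {"deviceid": "d9c5c0"}}
print(extract_attrs(good), extract_attrs(bad))
```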
```
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Got a message in topic 4048ed43-0920-42d5-a849-17ca17f49300.
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: There are 1 callbacks registered.
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Received message: { value: <Buffer 7b 22 6d 65 74 61 64 61 74 61 22 3a 7b 22 64 65 76 69 63 65 69 64 22 3a 22 61 3
1 35 64 65 38 22 2c 22 74 65 6e 61 6e 74 22 3a 22 61 64 6d 69 6e 22 2c ... >,
flowbroker_1 | size: 99,
flowbroker_1 | key: null,
flowbroker_1 | topic: '4048ed43-0920-42d5-a849-17ca17f49300',
flowbroker_1 | offset: 2,
flowbroker_1 | partition: 0,
flowbroker_1 | timestamp: 1557511809435 }
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Emitting new event message for subject device-data@admin
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Queued event [object Object]
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Pre-processing event {"source":"device","message":"{\"metadata\":{\"deviceid\":\"a15de8\",\"tenant\":\"admin\",\"timestamp\":155
7511809435},\"attrs\":{\"bool\":true}}"}
flowbroker_1 | retriving data related to admin:a15de8 from cache
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: [ingestor] got new device event: { metadata: { tenant: 'admin', timestamp: 1557511809435 },
flowbroker_1 | event: 'publish',
flowbroker_1 | data: { attrs: { bool: true, serial: 'indefinido' }, id: 'a15de8' } }
flowbroker_1 | [executor] will handle node switch
flowbroker_1 | debug: Executing switch node...
flowbroker_1 | debug: ... switch node was successfully executed.
flowbroker_1 | [executor] hop (switch) result: [{"payload":{"bool":true,"serial":"indefinido"}}]
flowbroker_1 | [executor] will handle node change
flowbroker_1 | debug: Executing change node...
flowbroker_1 | debug: ... change node was successfully executed.
flowbroker_1 | [executor] hop (change) result: [{"payload":{"bool":true,"serial":"indefinido"},"saida":"teste do nó device template"}]
flowbroker_1 | [executor] will handle node multi device out
flowbroker_1 | debug: Executing multi-device-out node...
flowbroker_1 | debug: Updating device...
flowbroker_1 | debug: Message is: { attrs: 'teste do nó device template',
flowbroker_1 | metadata: { timestamp: 1557511810457, tenant: 'admin' } }
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'dojot.device-manager.device': { admin: '6be83756-dbff-4
064-acd7-d216b99a1d59' },
flowbroker_1 | 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' },
flowbroker_1 | 'dojot.notifications': { admin: '4cbde50a-6b02-4644-97bd-806875eb4f97' } }
flowbroker_1 | debug: ... device was updated.
flowbroker_1 | debug: ... multi-device-out node was successfully executed.
flowbroker_1 | [executor] hop (multi device out) result: undefined
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Got a message in topic 4048ed43-0920-42d5-a849-17ca17f49300.
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: There are 1 callbacks registered.
flowbroker_1 | <18:10:11 10/05/2019> -- |messenger -- DEBUG: Received message: { value: <Buffer 7b 22 61 74 74 72 73 22 3a 22 74 65 73 74 65 20 64 6f 20 6e c3 b3 20 64 65 76 6
9 63 65 20 74 65 6d 70 6c 61 74 65 22 2c 22 6d 65 74 61 64 61 74 61 22 ... >,
flowbroker_1 | size: 116,
flowbroker_1 | key: null,
flowbroker_1 | topic: '4048ed43-0920-42d5-a849-17ca17f49300',
flowbroker_1 | offset: 3,
flowbroker_1 | partition: 0,
flowbroker_1 | timestamp: 1557511810458 }
flowbroker_1 | <18:10:11 10/05/2019> -- |messenger -- DEBUG: Emitting new event message for subject device-data@admin
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Queued event [object Object]
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Pre-processing event {"source":"device","message":"{\"attrs\":\"teste do nó device template\",\"metadata\":{\"timestamp\":155751
1810457,\"tenant\":\"admin\",\"deviceid\":\"d9c5c0\"}}"}
flowbroker_1 | retriving data related to admin:d9c5c0 from cache
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: [ingestor] got new device event: { metadata: { timestamp: 1557511810457, tenant: 'admin' },
flowbroker_1 | event: 'publish',
flowbroker_1 | data: { attrs: 'teste do nó device template', id: 'd9c5c0' } }
flowbroker_1 | will ignore /red/keymap.json
flowbroker_1 | debug: asJson: [object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Objec
t],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object].
flowbroker_1 | Received reply [{"payload":{"payload":{}},"requestId":"ff726289-2520-435d-a727-d26b77f70764"}]
```
```
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557511809432,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557511809433,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Published data: {"bool":true}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512217799,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512217801,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Published data: {"bool":true}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512601059,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512601061,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Published data: {"int":0}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
**Service become zombie when a flow node is configured wrong**

When a **change node** is configured wrong (a value is passed in place of an object), persister becomes a zombie.
Although all services are UP, dojot is not operational: persister doesn't receive messages.
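The failure mode can be sketched as follows. This is a hypothetical Python sketch, not dojot's actual persister code: a consumer that assumes `attrs` in the data event is an object (dict) breaks when a flow writes a plain string there, so a defensive type check lets the consumer reject the malformed event instead of wedging.

```python
import json

def handle_data_event(raw: bytes) -> dict:
    """Hypothetical defensive handler for a device-data event.

    The real persister assumes event["attrs"] is an object; when a
    flow's change node produces a plain string instead, code that
    iterates attrs as a mapping raises and can wedge the consumer.
    """
    event = json.loads(raw)
    attrs = event.get("attrs")
    if not isinstance(attrs, dict):
        # Reject the malformed event instead of crashing the loop.
        raise ValueError(
            f"expected 'attrs' to be an object, got {type(attrs).__name__}"
        )
    return attrs

# A well-formed event (like the first persister log line) parses fine:
ok = handle_data_event(b'{"metadata":{"deviceid":"a15de8"},"attrs":{"bool":true}}')

# The flow-produced event carries a string in "attrs" and is rejected:
try:
    handle_data_event(
        '{"attrs":"teste do nó device template","metadata":{}}'.encode("utf-8")
    )
except ValueError as err:
    print("rejected:", err)
```

The persister logs below show exactly this second shape arriving: `'attrs': 'teste do nó device template'` instead of an attribute object.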
- logs:
```
persister_1 | [10/05/19 - 18:10:10] |messenger| INFO: Emitting new event message for subject device-data@admin
persister_1 | [10/05/19 - 18:10:10] |persister| INFO: Received data: {'metadata': {'deviceid': 'a15de8', 'tenant': 'admin', 'timestamp': 1557511809435}, 'attrs': {'bool': True}}
persister_1 | [10/05/19 - 18:10:10] |persister| DEBUG: got data event b'{"metadata":{"deviceid":"a15de8","tenant":"admin","timestamp":1557511809435},"attrs":{"bool":true}}'
persister_1 | [10/05/19 - 18:10:11] |messenger| INFO: Emitting new event message for subject device-data@admin
persister_1 | [10/05/19 - 18:10:11] |persister| INFO: Received data: {'attrs': 'teste do nó device template', 'metadata': {'timestamp': 1557511810457, 'tenant': 'admin', 'deviceid': 'd9c5c0'}}
persister_1 | [10/05/19 - 18:10:11] |persister| DEBUG: got data event b'{"attrs":"teste do n\xc3\xb3 device template","metadata":{"timestamp":1557511810457,"tenant":"admin","deviceid":"d9c5c0"}}'
```
```
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Got a message in topic 4048ed43-0920-42d5-a849-17ca17f49300.
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: There are 1 callbacks registered.
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Received message: { value: <Buffer 7b 22 6d 65 74 61 64 61 74 61 22 3a 7b 22 64 65 76 69 63 65 69 64 22 3a 22 61 3
1 35 64 65 38 22 2c 22 74 65 6e 61 6e 74 22 3a 22 61 64 6d 69 6e 22 2c ... >,
flowbroker_1 | size: 99,
flowbroker_1 | key: null,
flowbroker_1 | topic: '4048ed43-0920-42d5-a849-17ca17f49300',
flowbroker_1 | offset: 2,
flowbroker_1 | partition: 0,
flowbroker_1 | timestamp: 1557511809435 }
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Emitting new event message for subject device-data@admin
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Queued event [object Object]
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: Pre-processing event {"source":"device","message":"{\"metadata\":{\"deviceid\":\"a15de8\",\"tenant\":\"admin\",\"timestamp\":155
7511809435},\"attrs\":{\"bool\":true}}"}
flowbroker_1 | retriving data related to admin:a15de8 from cache
flowbroker_1 | <18:10:10 10/05/2019> -- DEBUG: [ingestor] got new device event: { metadata: { tenant: 'admin', timestamp: 1557511809435 },
flowbroker_1 | event: 'publish',
flowbroker_1 | data: { attrs: { bool: true, serial: 'indefinido' }, id: 'a15de8' } }
flowbroker_1 | [executor] will handle node switch
flowbroker_1 | debug: Executing switch node...
flowbroker_1 | debug: ... switch node was successfully executed.
flowbroker_1 | [executor] hop (switch) result: [{"payload":{"bool":true,"serial":"indefinido"}}]
flowbroker_1 | [executor] will handle node change
flowbroker_1 | debug: Executing change node...
flowbroker_1 | debug: ... change node was successfully executed.
flowbroker_1 | [executor] hop (change) result: [{"payload":{"bool":true,"serial":"indefinido"},"saida":"teste do nó device template"}]
flowbroker_1 | [executor] will handle node multi device out
flowbroker_1 | debug: Executing multi-device-out node...
flowbroker_1 | debug: Updating device...
flowbroker_1 | debug: Message is: { attrs: 'teste do nó device template',
flowbroker_1 | metadata: { timestamp: 1557511810457, tenant: 'admin' } }
flowbroker_1 | <18:10:10 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'dojot.device-manager.device': { admin: '6be83756-dbff-4
064-acd7-d216b99a1d59' },
flowbroker_1 | 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' },
flowbroker_1 | 'dojot.notifications': { admin: '4cbde50a-6b02-4644-97bd-806875eb4f97' } }
flowbroker_1 | debug: ... device was updated.
flowbroker_1 | debug: ... multi-device-out node was successfully executed.
flowbroker_1 | [executor] hop (multi device out) result: undefined
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Got a message in topic 4048ed43-0920-42d5-a849-17ca17f49300.
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: There are 1 callbacks registered.
flowbroker_1 | <18:10:11 10/05/2019> -- |messenger -- DEBUG: Received message: { value: <Buffer 7b 22 61 74 74 72 73 22 3a 22 74 65 73 74 65 20 64 6f 20 6e c3 b3 20 64 65 76 6
9 63 65 20 74 65 6d 70 6c 61 74 65 22 2c 22 6d 65 74 61 64 61 74 61 22 ... >,
flowbroker_1 | size: 116,
flowbroker_1 | key: null,
flowbroker_1 | topic: '4048ed43-0920-42d5-a849-17ca17f49300',
flowbroker_1 | offset: 3,
flowbroker_1 | partition: 0,
flowbroker_1 | timestamp: 1557511810458 }
flowbroker_1 | <18:10:11 10/05/2019> -- |messenger -- DEBUG: Emitting new event message for subject device-data@admin
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Queued event [object Object]
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: Pre-processing event {"source":"device","message":"{\"attrs\":\"teste do nó device template\",\"metadata\":{\"timestamp\":155751
1810457,\"tenant\":\"admin\",\"deviceid\":\"d9c5c0\"}}"}
flowbroker_1 | retriving data related to admin:d9c5c0 from cache
flowbroker_1 | <18:10:11 10/05/2019> -- DEBUG: [ingestor] got new device event: { metadata: { timestamp: 1557511810457, tenant: 'admin' },
flowbroker_1 | event: 'publish',
flowbroker_1 | data: { attrs: 'teste do nó device template', id: 'd9c5c0' } }
flowbroker_1 | will ignore /red/keymap.json
flowbroker_1 | debug: asJson: [object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Objec
t],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object],[object Object].
flowbroker_1 | Received reply [{"payload":{"payload":{}},"requestId":"ff726289-2520-435d-a727-d26b77f70764"}]
```
```
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557511809432,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557511809433,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: Published data: {"bool":true}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
iotagent-mqtt_1 | <18:10:09 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512217799,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512217801,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: Published data: {"bool":true}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
iotagent-mqtt_1 | <18:16:57 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authenticating MQTT client
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- |iotagent -- INFO: Getting device from http://device-manager:5000/device/a15de8
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: Parameters are: url: http://device-manager:5000/device/a15de8, headers: { authorization: 'Bearer and0IHNjaGVtYQ==.eyJzZXJ2aWNlIjoiYWRtaW4iLCJ1c2VybmFtZSI6ImlvdGFnZW50LW1xdHQifQ==.ZHVtbXkgc2lnbmF0dXJl' }, method: get, TAG
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Connection authorized for
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512601059,"msg":"client connected","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: client up
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authorizing MQTT client admin:a15de8 to publish to /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Expected topic is /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Device published on topic /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Authorized client admin:a15de8 to publish to topic /admin/a15de8/attrs
iotagent-mqtt_1 | {"pid":10,"hostname":"dfb3e494cadc","name":"MoscaServer","level":30,"time":1557512601061,"msg":"closed","client":"admin:a15de8","v":1}
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- INFO: client down
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: ignoring internal message
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: Published data: {"int":0}, client: admin:a15de8, topic: /admin/a15de8/attrs
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- |messenger -- DEBUG: Trying to publish someting. Current producer topics are { 'device-data': { admin: '4048ed43-0920-42d5-a849-17ca17f49300' } }
iotagent-mqtt_1 | <18:23:21 10/05/2019> -- DEBUG: ignoring internal message
```
- flow created:
```
flowbroker_1 | debug: Creating new flow...
flowbroker_1 | debug: Checking 'enabled' field...
flowbroker_1 | debug: ... 'enabled' field was checked.
flowbroker_1 | debug: Parsing new flow...
flowbroker_1 | debug: New flow: ParsedFlow {
flowbroker_1 | heads: [],
flowbroker_1 | devices: [],
flowbroker_1 | templates: [],
flowbroker_1 | nodes: {},
flowbroker_1 | red:
flowbroker_1 | [ { id: 'Aa9bcdea6d132f', type: 'tab', label: 'Flow 1' },
flowbroker_1 | { id: 'A3c2de66e2570fa',
flowbroker_1 | type: 'device template in',
flowbroker_1 | z: 'Aa9bcdea6d132f',
flowbroker_1 | name: '',
flowbroker_1 | device_template: { id: 8 },
flowbroker_1 | status: 'false',
flowbroker_1 | device_template_id: 8,
flowbroker_1 | x: 147.9444580078125,
flowbroker_1 | y: 120.68750762939453,
flowbroker_1 | wires: [ [ 'A24f051ccffebae' ] ] },
flowbroker_1 | { id: 'Ad58df69ab18508',
flowbroker_1 | type: 'change',
flowbroker_1 | z: 'Aa9bcdea6d132f',
flowbroker_1 | name: '',
flowbroker_1 | rules:
flowbroker_1 | [ { t: 'set',
flowbroker_1 | p: 'saida',
flowbroker_1 | pt: 'msg',
flowbroker_1 | to: 'teste do nó device template',
flowbroker_1 | tot: 'str' } ],
flowbroker_1 | action: '',
flowbroker_1 | property: '',
flowbroker_1 | from: '',
flowbroker_1 | to: '',
flowbroker_1 | reg: false,
flowbroker_1 | x: 489.9548873901367,
flowbroker_1 | y: 262.7638854980469,
flowbroker_1 | wires: [ [ 'A129a1707890dd9' ] ] },
flowbroker_1 | { id: 'A24f051ccffebae',
flowbroker_1 | type: 'switch',
flowbroker_1 | z: 'Aa9bcdea6d132f',
flowbroker_1 | name: 'TRUE',
flowbroker_1 | property: 'payload.bool',
flowbroker_1 | propertyType: 'msg',
flowbroker_1 | rules: [ { t: 'true' } ],
flowbroker_1 | checkall: 'true',
flowbroker_1 | outputs: 1,
flowbroker_1 | x: 318.9479522705078,
flowbroker_1 | y: 179.75348281860352,
flowbroker_1 | wires: [ [ 'Ad58df69ab18508' ] ] },
flowbroker_1 | { id: 'A129a1707890dd9',
flowbroker_1 | type: 'multi device out',
flowbroker_1 | z: 'Aa9bcdea6d132f',
flowbroker_1 | name: '',
flowbroker_1 | device_source: 'configured',
flowbroker_1 | devices_source_dynamic: '',
flowbroker_1 | devices_source_dynamicFieldType: 'msg',
flowbroker_1 | devices_source_configured: [ 'd9c5c0' ],
flowbroker_1 | attrs: 'saida',
flowbroker_1 | _devices_loaded: true,
flowbroker_1 | x: 737.9617919921875,
flowbroker_1 | y: 334.52433013916016,
flowbroker_1 | wires: [] } ] }
flowbroker_1 | debug: Ignoring 'tab' node.
flowbroker_1 | debug: ... flow was successfully parsed.
flowbroker_1 | debug: Inserting flow into the database...
flowbroker_1 | debug: ... new flow was successfully inserted into the database.
```
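Reading the parsed flow above, the bug's mechanics can be sketched in Python (a hypothetical illustration mirroring the Node-RED-style rule format, not flowbroker's actual code): the change node's `set` rule writes a plain string at `msg.saida`, and the `multi device out` node configured with `attrs: 'saida'` then publishes that bare string as the event's `attrs`.

```python
def apply_change_rules(msg: dict, rules: list) -> dict:
    """Apply Node-RED-style 'change' rules of type 'set' to a message."""
    for rule in rules:
        if rule["t"] == "set" and rule["pt"] == "msg":
            # Writes whatever 'to' holds, even a bare string.
            msg[rule["p"]] = rule["to"]
    return msg

msg = {"payload": {"bool": True, "serial": "indefinido"}}
rules = [{"t": "set", "p": "saida", "pt": "msg",
          "to": "teste do nó device template", "tot": "str"}]
msg = apply_change_rules(msg, rules)

# 'multi device out' with attrs: 'saida' publishes msg["saida"] verbatim,
# so downstream consumers receive attrs as a string, not an object:
published_attrs = msg["saida"]
print(type(published_attrs).__name__)  # str
```

This matches the flowbroker hop result in the logs, where the change node output gains `"saida":"teste do nó device template"` and the published message's `attrs` is that string.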
**Affected Version**: 61.1-20190423
---

**LiskHQ/lisk-sdk** (closed, 2020-02-03 16:58:45): Add integration test of modules/chain.js
Labels: framework/node, type: test

### Description
`chain.js` should have its own integration test.
Also, at the same time, disabled `synchronous_task.js` should be included in the chain.js integration test.
This should be the first integration test in Jest.
### Which version(s) does this affect? (Environment, OS, etc...)
2.2-
---

**TeoShyanJie/ped** (opened, 2019-11-01 08:20:21): UserGuide instruction error
Labels: severity.Medium, type.DocumentationBug

Error in user guide: add act n/Visit Gundam Museum a/Tokyo du/120 p/65543221

---

**lukasoppermann/hourglass** (closed, 2022-08-16 07:58:24): Mark task as done
Labels: high priority

**As** user of taskd
**I want** to mark tasks as done
**So that** I can remove it from the normal view and move it to the archive
1. The first gray item is the end of the archive animation (it slides from left to right, the checkmark slides in, and the strike-through line strikes through the item, responding to the slide (%) state).
2. Once it is at the end, it slides back to the front (fast) and then moves to the top of the archive.
3. The last item is the view of an archived item (archived items are always at the back of the list).
**Acceptance Criteria**
- [ ] click checkbox to mark as done
- [ ] click done checkmark to mark as not done
- [ ] change bg color, font and strikethrough title
- [ ] remove play/pause button
- [ ] move to bottom (top of archived items)
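The reordering described by the last two criteria can be sketched as follows (a minimal sketch assuming tasks live in a single list with the archived block at the bottom; all names are hypothetical, not hourglass's actual code):

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    done: bool = False

def toggle_done(tasks: list, index: int) -> list:
    """Toggle a task's done state and reorder in place.

    The insertion point just before the first archived task is both
    the bottom of the active block and the top of the archived block,
    so one insertion position handles archiving and un-archiving.
    """
    task = tasks.pop(index)
    task.done = not task.done
    first_archived = next((i for i, t in enumerate(tasks) if t.done), len(tasks))
    tasks.insert(first_archived, task)
    return tasks

tasks = [Task("write report"), Task("buy milk"), Task("call bank")]
toggle_done(tasks, 0)            # archive "write report"
print([t.title for t in tasks])  # ['buy milk', 'call bank', 'write report']
```

Clicking the done checkmark again calls the same function, which reactivates the task and places it at the bottom of the active block.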
