Dataset column summary (per-column dtype, value ranges, string-length ranges, and distinct-class counts):

| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | values 0 to 832k |
| id | float64 | values 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | values 0 to 1 |
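The summary statistics above ("stringlengths" and "stringclasses") can be reproduced with pandas. This is a minimal sketch: the single-row frame below is copied from the first record in this dump (long text cells abbreviated), and the calls show that "stringlengths" is just the min/max string length per column and "stringclasses" the distinct-value count.

```python
import pandas as pd

# Hypothetical one-row frame shaped like the records below;
# values come from the first record in this dump (text cells shortened).
df = pd.DataFrame(
    {
        "Unnamed: 0": [5536],
        "id": [8_391_345_478.0],
        "type": ["IssuesEvent"],
        "created_at": ["2018-10-09 14:48:13"],
        "repo": ["googleapis/google-cloud-dotnet"],
        "repo_url": ["https://api.github.com/repos/googleapis/google-cloud-dotnet"],
        "action": ["closed"],
        "title": ["Consider using dotcover from dotnet"],
        "labels": ["type: process"],
        "body": ["We can use a dotnet CLI tool to run dotcover."],
        "index": ["1.0"],
        "label": ["process"],
        "binary_label": [1],
    }
)

# "stringlengths 19 19": timestamps are fixed-width, so min == max == 19.
created_len = df["created_at"].str.len()
print(created_len.min(), created_len.max())  # 19 19

# "stringclasses": number of distinct values in a categorical column
# (1 in this one-row sketch; 2 classes over the full data).
print(df["label"].nunique())  # 1
```

The same two operations, applied column by column over the full frame, yield every entry in the summary table.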
Row 5,536:
id: 8,391,345,478
type: IssuesEvent
created_at: 2018-10-09 14:48:13
repo: googleapis/google-cloud-dotnet
repo_url: https://api.github.com/repos/googleapis/google-cloud-dotnet
action: closed
title: Consider using dotcover from dotnet
labels: type: process
body:
We can use a dotnet CLI tool to run dotcover:
https://www.jetbrains.com/help/dotcover/Running_Coverage_Analysis_from_the_Command_LIne.html#using-dotnet-exe-to-run-coverage-analysis-of-unit-tests
That might make our coverage XML files a little simpler. (We still need them for exclusions etc.)
I'll investigate this when I get the chance.
index: 1.0
text_combine:
Consider using dotcover from dotnet - We can use a dotnet CLI tool to run dotcover:
https://www.jetbrains.com/help/dotcover/Running_Coverage_Analysis_from_the_Command_LIne.html#using-dotnet-exe-to-run-coverage-analysis-of-unit-tests
That might make our coverage XML files a little simpler. (We still need them for exclusions etc.)
I'll investigate this when I get the chance.
label: process
text:
consider using dotcover from dotnet we can use a dotnet cli tool to run dotcover that might make our coverage xml files a little simpler we still need them for exclusions etc i ll investigate this when i get the chance
binary_label: 1
Row 153,366:
id: 12,141,651,875
type: IssuesEvent
created_at: 2020-04-23 23:08:25
repo: cockroachdb/cockroach
repo_url: https://api.github.com/repos/cockroachdb/cockroach
action: closed
title: ccl/importccl: TestMultiNodeExportStmt failed
labels: C-test-failure O-robot branch-master
body:
[(ccl/importccl).TestMultiNodeExportStmt failed](https://viewLog.html?buildId=1648553&tab=buildLog) on [master@f2aec3eebbedf65911a8697375015ac11e6a7f45](https://github.com/cockroachdb/cockroach/commits/f2aec3eebbedf65911a8697375015ac11e6a7f45):
Fatal error:
```
F191217 16:47:26.335308 282160 storage/replica_init.go:276 [n5,s5,r9/2:/Table/1{3-4}] range descriptor ID (50) does not match replica's range ID (9)
```
Stack:
```
goroutine 282160 [running]:
github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0xa4ab401, 0x0, 0x0, 0xda07a7)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0xbf
github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0x9adda40, 0xc000000004, 0x922b1bf, 0x17, 0x114, 0xc004078ba0, 0x59)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:214 +0xc1c
github.com/cockroachdb/cockroach/pkg/util/log.addStructured(0x6bae660, 0xc001c02900, 0xc000000004, 0x2, 0x5d3267c, 0x3f, 0xc001bba330, 0x2, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/structured.go:66 +0x291
github.com/cockroachdb/cockroach/pkg/util/log.logDepth(0x6bae660, 0xc001c02900, 0x1, 0x4, 0x5d3267c, 0x3f, 0xc001bba330, 0x2, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:44 +0x9a
github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:155
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).setDesc(0xc005657600, 0x6bae660, 0xc001c02900, 0xc0030ac770)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_init.go:276 +0x907
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleDescResult(...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_result.go:263
github.com/cockroachdb/cockroach/pkg/storage.(*replicaStateMachine).handleNonTrivialReplicatedEvalResult(0xc0056576d0, 0x6bae660, 0xc001c02900, 0xc009258fa0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_state_machine.go:1118 +0xa6f
github.com/cockroachdb/cockroach/pkg/storage.(*replicaStateMachine).ApplySideEffects(0xc0056576d0, 0x6beb5a0, 0xc006b81008, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_state_machine.go:1001 +0xb60
github.com/cockroachdb/cockroach/pkg/storage/apply.mapCheckedCmdIter(0x149c02613f50, 0xc0056578e8, 0xc001bbb048, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/cmd.go:182 +0x129
github.com/cockroachdb/cockroach/pkg/storage/apply.(*Task).applyOneBatch(0xc001bbb578, 0x6bae660, 0xc000f7e9f0, 0x6beb660, 0xc005657888, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/task.go:281 +0x272
github.com/cockroachdb/cockroach/pkg/storage/apply.(*Task).ApplyCommittedEntries(0xc001bbb578, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/task.go:247 +0x103
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleRaftReadyRaftMuLocked(0xc005657600, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_raft.go:721 +0x10fa
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleRaftReady(0xc005657600, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_raft.go:395 +0x1a0
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processReady(0xc000a58e00, 0x6bae660, 0xc002182a20, 0x9)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store_raft.go:487 +0x1a0
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc004146f30, 0x6bae660, 0xc002182a20)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:226 +0x31a
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x6bae660, 0xc002182a20)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x56
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc0027fc550, 0xc001d03860, 0xc0027fc530)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
```
<details><summary>Log preceding fatal error</summary><p>
```
W191217 16:47:24.574846 281285 storage/store_raft.go:496 [n4,s4,r23/2:/Table/2{7-8}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.584368 280757 storage/store_raft.go:496 [n3,s3,r23/3:/Table/2{7-8}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.590741 282160 storage/store_raft.go:496 [n5,s5,r9/2:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.627515 281327 storage/store_raft.go:496 [n4,s4,r49/2:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.630993 279371 storage/store_raft.go:496 [n1,s1,r9/1:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:24.652476 326734 storage/replica_command.go:1706 [n4,s4,r49/2:/{Table/53/1/1…-Max}] change replicas (add [(n3,s3):5VOTER_INCOMING] remove [(n2,s2):1VOTER_DEMOTING]): existing descriptor r49:/{Table/53/1/160-Max} [(n2,s2):1, (n4,s4):2, (n1,s1):4, (n3,s3):5LEARNER, next=6, gen=22, sticky=9223372036.854775807,2147483647]
W191217 16:47:24.653464 279374 storage/store_raft.go:496 [n1,s1,r49/4:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.655842 280735 storage/store_raft.go:496 [n3,s3,r49/5:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.657441 280381 storage/store_raft.go:496 [n2,s2,r49/1:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.660380 280388 storage/store_raft.go:496 [n2,s2,r9/4:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.662253 280644 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.667083 279378 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.7s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.673389 281329 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.7s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.678066 279368 storage/store_raft.go:496 [n1,s1,r3/1:/System/{NodeLive…-tsd}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:24.678771 333236 storage/replica_raft.go:247 [n5,s5,r9/2:/Table/1{3-4}] proposing SIMPLE(l5) ADD_REPLICA[(n1,s1):5LEARNER]: after=[(n3,s3):4 (n4,s4):2 (n5,s5):3 (n1,s1):5LEARNER] next=6
W191217 16:47:24.691402 282096 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.961990 280337 storage/store_raft.go:496 [n2,s2,r2/3:/System/NodeLiveness{-Max}] handle raft ready: 1.1s [applied=2, batches=1, state_assertions=0]
W191217 16:47:25.166190 279442 storage/node_liveness.go:559 [n1,liveness-hb] slow heartbeat took 2.0s
W191217 16:47:25.288747 280730 storage/store_raft.go:496 [n3,s3,r50/4:/Table/53/1/{20-40}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:25.325157 333772 storage/raft_snapshot_queue.go:169 [n2,raftsnapshot,s2,r4/2:/System{/tsd-tse}] skipping snapshot; replica is likely a learner in the process of being added: (n1,s1):5LEARNER
I191217 16:47:25.349035 332736 storage/store_snapshot.go:977 [n2,replicate,s2,r4/2:/System{/tsd-tse}] sending LEARNER snapshot f8d6ebc2 at applied index 504
W191217 16:47:25.473145 281279 storage/store_raft.go:496 [n4,s4,r50/2:/Table/53/1/{20-40}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.476041 282119 storage/store_raft.go:496 [n5,s5,r50/3:/Table/53/1/{20-40}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.546945 281317 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.552378 280639 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.578265 279364 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.589089 282169 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.615483 282160 storage/store_raft.go:496 [n5,s5,r9/2:/Table/1{3-4}] handle raft ready: 0.7s [applied=0, batches=0, state_assertions=0]
I191217 16:47:25.772094 333881 storage/raft_snapshot_queue.go:169 [n2,raftsnapshot,s2,r4/2:/System{/tsd-tse}] skipping snapshot; replica is likely a learner in the process of being added: (n1,s1):5LEARNER
W191217 16:47:25.789131 280337 storage/store_raft.go:496 [n2,s2,r2/3:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.813104 279372 storage/store_raft.go:496 [n1,s1,r1/1:/{Min-System/NodeL…}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.856392 282117 storage/store_raft.go:496 [n5,s5,r1/2:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
I191217 16:47:25.890453 281352 server/status/runtime.go:498 [n4] runtime stats: 2.3 GiB RSS, 2011 goroutines, 100 MiB/65 MiB/205 MiB GO alloc/idle/total, 229 MiB/285 MiB CGO alloc/total, 846.8 CGO/sec, 229.3/23.6 %(u/s)time, 0.4 %gc (7x), 2.3 MiB/2.3 MiB (r/w)net
W191217 16:47:25.906539 280641 storage/store_raft.go:496 [n3,s3,r1/4:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.915445 280326 storage/store_raft.go:496 [n2,s2,r1/5:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.958242 281321 storage/store_raft.go:496 [n4,s4,r1/3:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.000558 279368 storage/store_raft.go:496 [n1,s1,r3/1:/System/{NodeLive…-tsd}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.062801 281358 storage/node_liveness.go:559 [n4,liveness-hb] slow heartbeat took 2.7s
W191217 16:47:26.191796 282138 storage/store_raft.go:496 [n5,s5,r7/5:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.227317 280743 storage/store_raft.go:496 [n3,s3,r49/5:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.266267 281318 storage/store_raft.go:496 [n4,s4,r7/3:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.304410 279413 storage/store_raft.go:496 [n1,s1,r7/6:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.288328 279399 storage/store_raft.go:496 [n1,s1,r49/4:/{Table/53/1/1…-Max}] handle raft ready: 0.6s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.298672 280369 storage/store_raft.go:496 [n2,s2,r7/4:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.301634 280345 storage/store_raft.go:496 [n2,s2,r49/1:/{Table/53/1/1…-Max}] handle raft ready: 0.6s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.311807 280778 storage/store_raft.go:496 [n3,s3,r7/2:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.321983 281323 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.325091 279358 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.327961 280734 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.331549 282166 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
```
</p></details>
<details><summary>Repro</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestMultiNodeExportStmt PKG=./pkg/ccl/importccl TESTTIMEOUT=5m STRESSFLAGS=-timeout 5m' 2>&1
```
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
index: 1.0
text_combine:
ccl/importccl: TestMultiNodeExportStmt failed - [(ccl/importccl).TestMultiNodeExportStmt failed](https://viewLog.html?buildId=1648553&tab=buildLog) on [master@f2aec3eebbedf65911a8697375015ac11e6a7f45](https://github.com/cockroachdb/cockroach/commits/f2aec3eebbedf65911a8697375015ac11e6a7f45):
Fatal error:
```
F191217 16:47:26.335308 282160 storage/replica_init.go:276 [n5,s5,r9/2:/Table/1{3-4}] range descriptor ID (50) does not match replica's range ID (9)
```
Stack:
```
goroutine 282160 [running]:
github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0xa4ab401, 0x0, 0x0, 0xda07a7)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/get_stacks.go:25 +0xbf
github.com/cockroachdb/cockroach/pkg/util/log.(*loggerT).outputLogEntry(0x9adda40, 0xc000000004, 0x922b1bf, 0x17, 0x114, 0xc004078ba0, 0x59)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:214 +0xc1c
github.com/cockroachdb/cockroach/pkg/util/log.addStructured(0x6bae660, 0xc001c02900, 0xc000000004, 0x2, 0x5d3267c, 0x3f, 0xc001bba330, 0x2, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/structured.go:66 +0x291
github.com/cockroachdb/cockroach/pkg/util/log.logDepth(0x6bae660, 0xc001c02900, 0x1, 0x4, 0x5d3267c, 0x3f, 0xc001bba330, 0x2, 0x2)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:44 +0x9a
github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(...)
/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:155
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).setDesc(0xc005657600, 0x6bae660, 0xc001c02900, 0xc0030ac770)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_init.go:276 +0x907
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleDescResult(...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_result.go:263
github.com/cockroachdb/cockroach/pkg/storage.(*replicaStateMachine).handleNonTrivialReplicatedEvalResult(0xc0056576d0, 0x6bae660, 0xc001c02900, 0xc009258fa0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_state_machine.go:1118 +0xa6f
github.com/cockroachdb/cockroach/pkg/storage.(*replicaStateMachine).ApplySideEffects(0xc0056576d0, 0x6beb5a0, 0xc006b81008, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_application_state_machine.go:1001 +0xb60
github.com/cockroachdb/cockroach/pkg/storage/apply.mapCheckedCmdIter(0x149c02613f50, 0xc0056578e8, 0xc001bbb048, 0x0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/cmd.go:182 +0x129
github.com/cockroachdb/cockroach/pkg/storage/apply.(*Task).applyOneBatch(0xc001bbb578, 0x6bae660, 0xc000f7e9f0, 0x6beb660, 0xc005657888, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/task.go:281 +0x272
github.com/cockroachdb/cockroach/pkg/storage/apply.(*Task).ApplyCommittedEntries(0xc001bbb578, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/apply/task.go:247 +0x103
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleRaftReadyRaftMuLocked(0xc005657600, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_raft.go:721 +0x10fa
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).handleRaftReady(0xc005657600, 0x6bae660, 0xc000f7e9f0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica_raft.go:395 +0x1a0
github.com/cockroachdb/cockroach/pkg/storage.(*Store).processReady(0xc000a58e00, 0x6bae660, 0xc002182a20, 0x9)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/store_raft.go:487 +0x1a0
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc004146f30, 0x6bae660, 0xc002182a20)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:226 +0x31a
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x6bae660, 0xc002182a20)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:166 +0x56
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc0027fc550, 0xc001d03860, 0xc0027fc530)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:190 +0xc4
```
<details><summary>Log preceding fatal error</summary><p>
```
W191217 16:47:24.574846 281285 storage/store_raft.go:496 [n4,s4,r23/2:/Table/2{7-8}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.584368 280757 storage/store_raft.go:496 [n3,s3,r23/3:/Table/2{7-8}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.590741 282160 storage/store_raft.go:496 [n5,s5,r9/2:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.627515 281327 storage/store_raft.go:496 [n4,s4,r49/2:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.630993 279371 storage/store_raft.go:496 [n1,s1,r9/1:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:24.652476 326734 storage/replica_command.go:1706 [n4,s4,r49/2:/{Table/53/1/1…-Max}] change replicas (add [(n3,s3):5VOTER_INCOMING] remove [(n2,s2):1VOTER_DEMOTING]): existing descriptor r49:/{Table/53/1/160-Max} [(n2,s2):1, (n4,s4):2, (n1,s1):4, (n3,s3):5LEARNER, next=6, gen=22, sticky=9223372036.854775807,2147483647]
W191217 16:47:24.653464 279374 storage/store_raft.go:496 [n1,s1,r49/4:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.655842 280735 storage/store_raft.go:496 [n3,s3,r49/5:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.657441 280381 storage/store_raft.go:496 [n2,s2,r49/1:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.660380 280388 storage/store_raft.go:496 [n2,s2,r9/4:/Table/1{3-4}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:24.662253 280644 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.667083 279378 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.7s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.673389 281329 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.7s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.678066 279368 storage/store_raft.go:496 [n1,s1,r3/1:/System/{NodeLive…-tsd}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:24.678771 333236 storage/replica_raft.go:247 [n5,s5,r9/2:/Table/1{3-4}] proposing SIMPLE(l5) ADD_REPLICA[(n1,s1):5LEARNER]: after=[(n3,s3):4 (n4,s4):2 (n5,s5):3 (n1,s1):5LEARNER] next=6
W191217 16:47:24.691402 282096 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=2, batches=1, state_assertions=0]
W191217 16:47:24.961990 280337 storage/store_raft.go:496 [n2,s2,r2/3:/System/NodeLiveness{-Max}] handle raft ready: 1.1s [applied=2, batches=1, state_assertions=0]
W191217 16:47:25.166190 279442 storage/node_liveness.go:559 [n1,liveness-hb] slow heartbeat took 2.0s
W191217 16:47:25.288747 280730 storage/store_raft.go:496 [n3,s3,r50/4:/Table/53/1/{20-40}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I191217 16:47:25.325157 333772 storage/raft_snapshot_queue.go:169 [n2,raftsnapshot,s2,r4/2:/System{/tsd-tse}] skipping snapshot; replica is likely a learner in the process of being added: (n1,s1):5LEARNER
I191217 16:47:25.349035 332736 storage/store_snapshot.go:977 [n2,replicate,s2,r4/2:/System{/tsd-tse}] sending LEARNER snapshot f8d6ebc2 at applied index 504
W191217 16:47:25.473145 281279 storage/store_raft.go:496 [n4,s4,r50/2:/Table/53/1/{20-40}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.476041 282119 storage/store_raft.go:496 [n5,s5,r50/3:/Table/53/1/{20-40}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.546945 281317 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.552378 280639 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.578265 279364 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.589089 282169 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.615483 282160 storage/store_raft.go:496 [n5,s5,r9/2:/Table/1{3-4}] handle raft ready: 0.7s [applied=0, batches=0, state_assertions=0]
I191217 16:47:25.772094 333881 storage/raft_snapshot_queue.go:169 [n2,raftsnapshot,s2,r4/2:/System{/tsd-tse}] skipping snapshot; replica is likely a learner in the process of being added: (n1,s1):5LEARNER
W191217 16:47:25.789131 280337 storage/store_raft.go:496 [n2,s2,r2/3:/System/NodeLiveness{-Max}] handle raft ready: 0.8s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.813104 279372 storage/store_raft.go:496 [n1,s1,r1/1:/{Min-System/NodeL…}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.856392 282117 storage/store_raft.go:496 [n5,s5,r1/2:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
I191217 16:47:25.890453 281352 server/status/runtime.go:498 [n4] runtime stats: 2.3 GiB RSS, 2011 goroutines, 100 MiB/65 MiB/205 MiB GO alloc/idle/total, 229 MiB/285 MiB CGO alloc/total, 846.8 CGO/sec, 229.3/23.6 %(u/s)time, 0.4 %gc (7x), 2.3 MiB/2.3 MiB (r/w)net
W191217 16:47:25.906539 280641 storage/store_raft.go:496 [n3,s3,r1/4:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.915445 280326 storage/store_raft.go:496 [n2,s2,r1/5:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:25.958242 281321 storage/store_raft.go:496 [n4,s4,r1/3:/{Min-System/NodeL…}] handle raft ready: 0.7s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.000558 279368 storage/store_raft.go:496 [n1,s1,r3/1:/System/{NodeLive…-tsd}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.062801 281358 storage/node_liveness.go:559 [n4,liveness-hb] slow heartbeat took 2.7s
W191217 16:47:26.191796 282138 storage/store_raft.go:496 [n5,s5,r7/5:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.227317 280743 storage/store_raft.go:496 [n3,s3,r49/5:/{Table/53/1/1…-Max}] handle raft ready: 0.5s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.266267 281318 storage/store_raft.go:496 [n4,s4,r7/3:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.304410 279413 storage/store_raft.go:496 [n1,s1,r7/6:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.288328 279399 storage/store_raft.go:496 [n1,s1,r49/4:/{Table/53/1/1…-Max}] handle raft ready: 0.6s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.298672 280369 storage/store_raft.go:496 [n2,s2,r7/4:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.301634 280345 storage/store_raft.go:496 [n2,s2,r49/1:/{Table/53/1/1…-Max}] handle raft ready: 0.6s [applied=2, batches=1, state_assertions=0]
W191217 16:47:26.311807 280778 storage/store_raft.go:496 [n3,s3,r7/2:/Table/1{1-2}] handle raft ready: 0.8s [applied=0, batches=0, state_assertions=0]
W191217 16:47:26.321983 281323 storage/store_raft.go:496 [n4,s4,r2/4:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.325091 279358 storage/store_raft.go:496 [n1,s1,r2/1:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.327961 280734 storage/store_raft.go:496 [n3,s3,r2/5:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
W191217 16:47:26.331549 282166 storage/store_raft.go:496 [n5,s5,r2/2:/System/NodeLiveness{-Max}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
```
</p></details>
<details><summary>Repro</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestMultiNodeExportStmt PKG=./pkg/ccl/importccl TESTTIMEOUT=5m STRESSFLAGS=-timeout 5m' 2>&1
```
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
label: non_process
text:
ccl importccl testmultinodeexportstmt failed on fatal error storage replica init go range descriptor id does not match replica s range id stack goroutine github com cockroachdb cockroach pkg util log getstacks go src github com cockroachdb cockroach pkg util log get stacks go github com cockroachdb cockroach pkg util log loggert outputlogentry go src github com cockroachdb cockroach pkg util log clog go github com cockroachdb cockroach pkg util log addstructured go src github com cockroachdb cockroach pkg util log structured go github com cockroachdb cockroach pkg util log logdepth go src github com cockroachdb cockroach pkg util log log go github com cockroachdb cockroach pkg util log fatalf go src github com cockroachdb cockroach pkg util log log go github com cockroachdb cockroach pkg storage replica setdesc go src github com cockroachdb cockroach pkg storage replica init go github com cockroachdb cockroach pkg storage replica handledescresult go src github com cockroachdb cockroach pkg storage replica application result go github com cockroachdb cockroach pkg storage replicastatemachine handlenontrivialreplicatedevalresult go src github com cockroachdb cockroach pkg storage replica application state machine go github com cockroachdb cockroach pkg storage replicastatemachine applysideeffects go src github com cockroachdb cockroach pkg storage replica application state machine go github com cockroachdb cockroach pkg storage apply mapcheckedcmditer go src github com cockroachdb cockroach pkg storage apply cmd go github com cockroachdb cockroach pkg storage apply task applyonebatch go src github com cockroachdb cockroach pkg storage apply task go github com cockroachdb cockroach pkg storage apply task applycommittedentries go src github com cockroachdb cockroach pkg storage apply task go github com cockroachdb cockroach pkg storage replica handleraftreadyraftmulocked go src github com cockroachdb cockroach pkg storage replica raft go github com cockroachdb 
cockroach pkg storage replica handleraftready go src github com cockroachdb cockroach pkg storage replica raft go github com cockroachdb cockroach pkg storage store processready go src github com cockroachdb cockroach pkg storage store raft go github com cockroachdb cockroach pkg storage raftscheduler worker go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg storage raftscheduler start go src github com cockroachdb cockroach pkg storage scheduler go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go log preceding fatal error storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage replica command go change replicas add remove existing descriptor table max storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage replica raft go proposing simple add replica after next storage store raft go handle raft ready storage store raft go handle raft ready storage node liveness go slow heartbeat took storage store raft go handle raft ready storage raft snapshot queue go skipping snapshot replica is likely a learner in the process of being added storage store snapshot go sending learner snapshot at applied index storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle 
raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage raft snapshot queue go skipping snapshot replica is likely a learner in the process of being added storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready server status runtime go runtime stats gib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage node liveness go slow heartbeat took storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready storage store raft go handle raft ready repro parameters goflags json make stressrace tests testmultinodeexportstmt pkg pkg ccl importccl testtimeout stressflags timeout powered by
| 0
|
10,778
| 13,607,766,523
|
IssuesEvent
|
2020-09-23 00:23:39
|
RickFSA/Data_Science_Articles
|
https://api.github.com/repos/RickFSA/Data_Science_Articles
|
opened
|
Intro to Feature Selection
|
Preprocessing
|
Intro to Feature Selection Methods for Data Science included forward and backward feature selection
**Link:**
https://towardsdatascience.com/intro-to-feature-selection-methods-for-data-science-4cae2178a00a
**Author:**
Ryan Farmar, Ning Han, Madeline McCombe
|
1.0
|
Intro to Feature Selection - Intro to Feature Selection Methods for Data Science included forward and backward feature selection
**Link:**
https://towardsdatascience.com/intro-to-feature-selection-methods-for-data-science-4cae2178a00a
**Author:**
Ryan Farmar, Ning Han, Madeline McCombe
|
process
|
intro to feature selection intro to feature selection methods for data science included forward and backward feature selection link author ryan farmar ning han madeline mccombe
| 1
|
786,094
| 27,634,581,540
|
IssuesEvent
|
2023-03-10 13:31:43
|
biocommons/anyvar
|
https://api.github.com/repos/biocommons/anyvar
|
opened
|
Deploy demo version
|
priority:low
|
It could be useful for us to provide some kind of demo instance (eg reset data every week or something).
|
1.0
|
Deploy demo version - It could be useful for us to provide some kind of demo instance (eg reset data every week or something).
|
non_process
|
deploy demo version it could be useful for us to provide some kind of demo instance eg reset data every week or something
| 0
|
39,570
| 5,240,907,572
|
IssuesEvent
|
2017-01-31 14:27:20
|
ValveSoftware/Source-1-Games
|
https://api.github.com/repos/ValveSoftware/Source-1-Games
|
closed
|
[CS:S] the main branch and the prerelease branch srcds.exe wont launch
|
Counter-Strike: Source Need Retest Reviewed Windows
|
had me and a friend download the cs:s dedicated server and i'm sure the windows loading thing appears and disappears for us and it doesn't appear in the processes on windows task manager when we try to launch it, no errors occurred as far as we know.
|
1.0
|
[CS:S] the main branch and the prerelease branch srcds.exe wont launch - had me and a friend download the cs:s dedicated server and i'm sure the windows loading thing appears and disappears for us and it doesn't appear in the processes on windows task manager when we try to launch it, no errors occurred as far as we know.
|
non_process
|
the main branch and the prerelease branch srcds exe wont launch had me and a friend download the cs s dedicated server and i m sure the windows loading thing appears and disappears for us and it doesn t appear in the processes on windows task manager when we try to launch it no errors occurred as far as we know
| 0
|
137,916
| 20,257,021,095
|
IssuesEvent
|
2022-02-15 00:55:02
|
ParabolInc/parabol
|
https://api.github.com/repos/ParabolInc/parabol
|
closed
|
GitLab integration provider row(s) designed
|
design integrations Story Points: 8
|
See #3971 and #5564 for context.
----

With the previous design work we've done for the GitHub integration and [self-hosted services](https://github.com/ParabolInc/parabol/issues/5567#issuecomment-964577411), we now have enough patterns that we can reuse for the **GitLab integration.**
Let's use this issue to track any new or additional design work that we might need to do.
|
1.0
|
GitLab integration provider row(s) designed - See #3971 and #5564 for context.
----

With the previous design work we've done for the GitHub integration and [self-hosted services](https://github.com/ParabolInc/parabol/issues/5567#issuecomment-964577411), we now have enough patterns that we can reuse for the **GitLab integration.**
Let's use this issue to track any new or additional design work that we might need to do.
|
non_process
|
gitlab integration provider row s designed see and for context with the previous design work we ve done for the github integration and we now have enough patterns that we can reuse for the gitlab integration let s use this issue to track any new or additional design work that we might need to do
| 0
|
8,643
| 11,788,782,476
|
IssuesEvent
|
2020-03-17 16:05:32
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Introduce visual testing process for internal development of Desktop GUI
|
pkg/desktop-gui process: contributing process: tests stage: needs review type: chore
|
### Current behavior:
There was a recent css change made to the `desktop-gui` package of Cypress that caused some unexpected changes to the design/css of the `desktop-gui`.
Original PR: #4737
Following breaking changes (that we know of): #4959 #4912 #4888 #4795
The problem is that *none* of our internal tests caught these issues. They were only found after using the Desktop-GUI manually within development. So, it's possible there is still some stuff that's broken.
### Desired behavior:
Introduce a process into the Cypress development team to test visual changes in our own `desktop-gui` package.
Hopefully, after vetting and solidifying a process, we can expand this out to our other visual components, the `runner` and `reporter` packages.
This will also provide a great opportunity to put ourselves in the shoes of users of Cypress (like ourselves) so we can define the problem set and look through the solutions out there.
|
2.0
|
Introduce visual testing process for internal development of Desktop GUI - ### Current behavior:
There was a recent css change made to the `desktop-gui` package of Cypress that caused some unexpected changes to the design/css of the `desktop-gui`.
Original PR: #4737
Following breaking changes (that we know of): #4959 #4912 #4888 #4795
The problem is that *none* of our internal tests caught these issues. They were only found after using the Desktop-GUI manually within development. So, it's possible there is still some stuff that's broken.
### Desired behavior:
Introduce a process into the Cypress development team to test visual changes in our own `desktop-gui` package.
Hopefully, after vetting and solidifying a process, we can expand this out to our other visual components, the `runner` and `reporter` packages.
This will also provide a great opportunity to put ourselves in the shoes of users of Cypress (like ourselves) so we can define the problem set and look through the solutions out there.
|
process
|
introduce visual testing process for internal development of desktop gui current behavior there was a recent css change made to the desktop gui package of cypress that caused some unexpected changes to the design css of the desktop gui original pr following breaking changes that we know of the problem is that none of our internal tests caught these issues they were only found after using the desktop gui manually within development so it s possible there is still some stuff that s broken desired behavior introduce a process into the cypress development team to test visual changes in our own desktop gui package hopefully after vetting and solidifying a process we can expand this out to our other visual components the runner and reporter packages this will also provide a great opportunity to put ourselves in the shoes of users of cypress like ourselves so we can define the problem set and look through the solutions out there
| 1
|
4,229
| 7,182,224,547
|
IssuesEvent
|
2018-02-01 09:04:38
|
LOVDnl/LOVD3
|
https://api.github.com/repos/LOVDnl/LOVD3
|
closed
|
Don't duplicate the "Variant effect" fields unnecessarily.
|
cat: submission process feature request
|
When variants have just one transcript annotated, the variant effect fields for both the genomic variant and the variant on the transcript should contain the same value, but still are required to be filled in. To have this separation is only useful for variants without transcripts (thus they need variant effect fields for the genomic variant) and variants with more than one transcript (so the genomic variant effect fields can contain the worst score of all effect fields for all transcripts).
We should devise a way to prevent people from having to fill in the same value twice.
Keep in mind:
- When the field is hidden or otherwise controlled by us, make the VOG effectid fields the worst score of the VOT effectid fields.
- However, if there is already a value there that doesn't correspond to the VOT effectid fields, don't overwrite it, but the user should have the chance to fix this.
- Do we need to fix current values during the next LOVD upgrade? Like, put all VOG effectid fields that are set to "unknown" to match the VOT effectid fields?
|
1.0
|
Don't duplicate the "Variant effect" fields unnecessarily. - When variants have just one transcript annotated, the variant effect fields for both the genomic variant and the variant on the transcript should contain the same value, but still are required to be filled in. To have this separation is only useful for variants without transcripts (thus they need variant effect fields for the genomic variant) and variants with more than one transcript (so the genomic variant effect fields can contain the worst score of all effect fields for all transcripts).
We should devise a way to prevent people from having to fill in the same value twice.
Keep in mind:
- When the field is hidden or otherwise controlled by us, make the VOG effectid fields the worst score of the VOT effectid fields.
- However, if there is already a value there that doesn't correspond to the VOT effectid fields, don't overwrite it, but the user should have the chance to fix this.
- Do we need to fix current values during the next LOVD upgrade? Like, put all VOG effectid fields that are set to "unknown" to match the VOT effectid fields?
|
process
|
don t duplicate the variant effect fields unnecessarily when variants have just one transcript annotated the variant effect fields for both the genomic variant and the variant on the transcript should contain the same value but still are required to be filled in to have this separation is only useful for variants without transcripts thus they need variant effect fields for the genomic variant and variants with more than one transcript so the genomic variant effect fields can contain the worst score of all effect fields for all transcripts we should devise a way to prevent people from having to fill in the same value twice keep in mind when the field is hidden or otherwise controlled by us make the vog effectid fields the worst score of the vot effectid fields however if there is already a value there that doesn t correspond to the vot effectid fields don t overwrite it but the user should have the chance to fix this do we need to fix current values during the next lovd upgrade like put all vog effectid fields that are set to unknown to match the vot effectid fields
| 1
|
56,040
| 13,747,886,575
|
IssuesEvent
|
2020-10-06 08:17:54
|
FranciscoFornell/MIST
|
https://api.github.com/repos/FranciscoFornell/MIST
|
opened
|
Github (MIST): Some issue templates have wrongly formatted questions
|
:construction_worker: build
|
Check all the issue templates and reformat them.
|
1.0
|
Github (MIST): Some issue templates have wrongly formatted questions - Check all the issue templates and reformat them.
|
non_process
|
github mist some issue templates have wrongly formatted questions check all the issue templates and reformat them
| 0
|
8,253
| 11,422,149,375
|
IssuesEvent
|
2020-02-03 13:40:12
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
BigQuery: Release 1.24.0
|
api: bigquery type: process
|
Last release was December 16, and we've had quite a few enhancements and bug fixes since then. Let's cut a release before we do the repo move.
|
1.0
|
BigQuery: Release 1.24.0 - Last release was December 16, and we've had quite a few enhancements and bug fixes since then. Let's cut a release before we do the repo move.
|
process
|
bigquery release last release was december and we ve had quite a few enhancements and bug fixes since then let s cut a release before we do the repo move
| 1
|
10,009
| 13,043,865,249
|
IssuesEvent
|
2020-07-29 02:53:23
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Repeat` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Repeat` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `Repeat` from TiDB -
## Description
Port the scalar function `Repeat` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function repeat from tidb description port the scalar function repeat from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
256,264
| 27,557,410,513
|
IssuesEvent
|
2023-03-07 19:03:21
|
occmundial/atomic-icons
|
https://api.github.com/repos/occmundial/atomic-icons
|
closed
|
CVE-2018-16487 (Medium) detected in lodash-3.10.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- svg-sprite-generator-0.0.7.tgz (Root Library)
- cheerio-0.19.0.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/occmundial/atomic-icons/commit/a45f942bde8b13f681e2eec357d0fab3d5b9967b">a45f942bde8b13f681e2eec357d0fab3d5b9967b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-16487 (Medium) detected in lodash-3.10.1.tgz - autoclosed - ## CVE-2018-16487 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- svg-sprite-generator-0.0.7.tgz (Root Library)
- cheerio-0.19.0.tgz
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/occmundial/atomic-icons/commit/a45f942bde8b13f681e2eec357d0fab3d5b9967b">a45f942bde8b13f681e2eec357d0fab3d5b9967b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability was found in lodash <4.17.11 where the functions merge, mergeWith, and defaultsDeep can be tricked into adding or modifying properties of Object.prototype.
<p>Publish Date: 2019-02-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-16487>CVE-2018-16487</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16487</a></p>
<p>Release Date: 2019-02-01</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in lodash tgz autoclosed cve medium severity vulnerability vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy svg sprite generator tgz root library cheerio tgz x lodash tgz vulnerable library found in head commit a href found in base branch main vulnerability details a prototype pollution vulnerability was found in lodash where the functions merge mergewith and defaultsdeep can be tricked into adding or modifying properties of object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
17,946
| 23,938,739,258
|
IssuesEvent
|
2022-09-11 15:57:15
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Highlight reconstruction dependent on viewport and export is also broken
|
bug: won't fix scope: image processing bug: pending no-issue-activity
|
**Describe the bug**
Highlight reconstruction with "reconstruct color" option can achieve great feats with incredibly little data. But my problem with it is now that what darktable shows differs from what it exports.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable highlight reconstruction
2. Enable "reconstruct color"
3. Fiddle with the exposure, bring it down
4. Possibly see funky business on the sky
5. If not, zoom in and pan around the image; the viewport changes and some artifacts should show. At least on mine.
**Expected behavior**
The exported file should always show deterministically the best possible outcome, no artifacts because that's what you can get in DT darkroom mode. I don't really care if there are occasional artefacts while developing... of course it would help a lot if there were none.
**Screenshots**
Below a snapshot of what it looks like on darktable how it should be. Then a view how it usually looks when I pan around for a bit. Finally, the third shot shows how it is exported every time (export settings don't affect the outcome)



**Platform (please complete the following information):**
Gentoo linux + darktable 2.6.2
The source file DNG+XMP if this is sensor data dependent. Free to use!
[source_file_xmp.zip](https://github.com/darktable-org/darktable/files/3659088/source_file_xmp.zip)
|
1.0
|
Highlight reconstruction dependent on viewport and export is also broken - **Describe the bug**
Highlight reconstruction with "reconstruct color" option can achieve great feats with incredibly little data. But my problem with it is now that what darktable shows differs from what it exports.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable highlight reconstruction
2. Enable "reconstruct color"
3. Fiddle with the exposure, bring it down
4. Possibly see funky business on the sky
5. If not, zoom in and pan around the image; the viewport changes and some artifacts should show. At least on mine.
**Expected behavior**
The exported file should always show deterministically the best possible outcome, no artifacts because that's what you can get in DT darkroom mode. I don't really care if there are occasional artefacts while developing... of course it would help a lot if there were none.
**Screenshots**
Below a snapshot of what it looks like on darktable how it should be. Then a view how it usually looks when I pan around for a bit. Finally, the third shot shows how it is exported every time (export settings don't affect the outcome)



**Platform (please complete the following information):**
Gentoo linux + darktable 2.6.2
The source file DNG+XMP if this is sensor data dependent. Free to use!
[source_file_xmp.zip](https://github.com/darktable-org/darktable/files/3659088/source_file_xmp.zip)
|
process
|
highlight reconstruction dependent on viewport and export is also broken describe the bug highlight reconstruction with reconstruct color option can achieve great feats with incredibly little data but my problem with it is now that what darktable shows differs from what it exports to reproduce steps to reproduce the behavior enable highlight reconstruction enable reconstruct color fiddle with the exposure bring it down possibly see funky business on the sky if not zoom in and pan around the image the viewport changes and some artifacts should show at least on mine expected behavior the exported file should always show deterministically the best possible outcome no artifacts because that s what you can get in dt darkroom mode i don t really care if there are occasional artefacts while developing of course it would help a lot if there were none screenshots below a snapshot of what it looks like on darktable how it should be then a view how it usually looks when i pan around for a bit finally the third shot shows how it is exported every time export settings don t affect the outcome platform please complete the following information gentoo linux darktable the source file dng xmp if this is sensor data dependent free to use
| 1
|
46,722
| 19,429,273,151
|
IssuesEvent
|
2021-12-21 10:02:08
|
Azure/azure-sdk-for-java
|
https://api.github.com/repos/Azure/azure-sdk-for-java
|
closed
|
[BUG] MetricDefinitions::listByResource returns not supported metrics
|
question Monitor Service Attention Mgmt Client customer-reported issue-addressed
|
**Describe the bug**
Calling MetricDefinitions::listByResource occasionally returns VirtualMachine metrics that are no longer supported resource metrics.
***Exception or Stack Trace***
`Failed to find metric configuration for provider: Microsoft.Compute, resource Type: virtualMachines, metric: OS Disk Target Bandwidth, Valid metrics: Percentage CPU,Network In,Network Out,Disk Read Bytes,Disk Write Bytes,Disk Read Operations/Sec,Disk Write Operations/Sec,CPU Credits Remaining,CPU Credits Consumed,Data Disk Read Bytes/sec,Data Disk Write Bytes/sec,Data Disk Read Operations/Sec,Data Disk Write Operations/Sec,Data Disk Queue Depth,Data Disk Bandwidth Consumed Percentage,Data Disk IOPS Consumed`
`Failed to find metric configuration for provider: Microsoft.Compute, resource Type: virtualMachines, metric: Data Disk Target Bandwidth, Valid metrics: Percentage CPU,Network In,Network Out,Disk Read Bytes,Disk Write Bytes,Disk Read Operations/Sec,Disk Write Operations/Sec,CPU Credits Remaining,CPU Credits Consumed,Data Disk Read Bytes/sec,Data Disk Write Bytes/sec,Data Disk Read Operations/Sec,Data Disk Write Operations/Sec,Data Disk Queue Depth,Data Disk Bandwidth Consumed Percentage,Data Disk IOPS Consum`
***Code Snippet***
`metricDefinitions().listByResource(resourceId)`
|
1.0
|
[BUG] MetricDefinitions::listByResource returns not supported metrics - **Describe the bug**
Calling MetricDefinitions::listByResource occasionally returns VirtualMachine metrics that are no longer supported resource metrics.
***Exception or Stack Trace***
`Failed to find metric configuration for provider: Microsoft.Compute, resource Type: virtualMachines, metric: OS Disk Target Bandwidth, Valid metrics: Percentage CPU,Network In,Network Out,Disk Read Bytes,Disk Write Bytes,Disk Read Operations/Sec,Disk Write Operations/Sec,CPU Credits Remaining,CPU Credits Consumed,Data Disk Read Bytes/sec,Data Disk Write Bytes/sec,Data Disk Read Operations/Sec,Data Disk Write Operations/Sec,Data Disk Queue Depth,Data Disk Bandwidth Consumed Percentage,Data Disk IOPS Consumed`
`Failed to find metric configuration for provider: Microsoft.Compute, resource Type: virtualMachines, metric: Data Disk Target Bandwidth, Valid metrics: Percentage CPU,Network In,Network Out,Disk Read Bytes,Disk Write Bytes,Disk Read Operations/Sec,Disk Write Operations/Sec,CPU Credits Remaining,CPU Credits Consumed,Data Disk Read Bytes/sec,Data Disk Write Bytes/sec,Data Disk Read Operations/Sec,Data Disk Write Operations/Sec,Data Disk Queue Depth,Data Disk Bandwidth Consumed Percentage,Data Disk IOPS Consum`
***Code Snippet***
`metricDefinitions().listByResource(resourceId)`
|
non_process
|
metricdefinitions listbyresource returns not supported metrics describe the bug calling metricdefinitions listbyresource occasionally returns virtualmachine metrics that are no longer supported resource metrics exception or stack trace failed to find metric configuration for provider microsoft compute resource type virtualmachines metric os disk target bandwidth valid metrics percentage cpu network in network out disk read bytes disk write bytes disk read operations sec disk write operations sec cpu credits remaining cpu credits consumed data disk read bytes sec data disk write bytes sec data disk read operations sec data disk write operations sec data disk queue depth data disk bandwidth consumed percentage data disk iops consumed failed to find metric configuration for provider microsoft compute resource type virtualmachines metric data disk target bandwidth valid metrics percentage cpu network in network out disk read bytes disk write bytes disk read operations sec disk write operations sec cpu credits remaining cpu credits consumed data disk read bytes sec data disk write bytes sec data disk read operations sec data disk write operations sec data disk queue depth data disk bandwidth consumed percentage data disk iops consum code snippet metricdefinitions listbyresource resourceid
| 0
|
72,258
| 19,097,478,466
|
IssuesEvent
|
2021-11-29 18:14:59
|
angular/angular-cli
|
https://api.github.com/repos/angular/angular-cli
|
closed
|
Cannot set a budget on lazy loaded modules with ng cli 6.0.0
|
type: bug/fix freq1: low severity3: broken comp: devkit/build-angular devkit/build-angular: browser
|
Defining budget for a lazy loaded module doesn't work with ng cli 6.0.0
### Versions
```
Angular CLI: 6.0.0
Node: 9.5.0
OS: win32 x64
Angular: 6.0.0
... animations, cli, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, router, service-worker
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.6.0
@angular-devkit/build-angular 0.6.0
@angular-devkit/build-ng-packagr 0.6.0
@angular-devkit/build-optimizer 0.6.0
@angular-devkit/core 0.6.0
@angular-devkit/schematics 0.6.0
@angular/pwa 0.6.0
@ngtools/json-schema 1.1.0
@ngtools/webpack 6.0.0
@schematics/angular 0.6.0
@schematics/package-update <error>
@schematics/update 0.6.0
ng-packagr 3.0.0-rc.2
rxjs 6.1.0
typescript 2.7.2
webpack 4.6.0
```
### Repro steps
* Defines an application with lazyloaded modules
* Declare in angular.json file a budget for one of the lazy loaded module (I tried several names, based on a lazy loaded module named _st-account_
```json
"budgets": [
{"type": "bundle", "name": "st-account", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "1.7d269d24f67b262c9dc9.js", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccount", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccount#Module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account.module#StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account/src/lib/st-account.module#StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account/src/lib/st-account.module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account.module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account-module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account-module-ngfactory", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "1", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" }
]
```
* Run `ng build --prod`
### Observed behavior
No error or warning is shown.
### Desired behavior
An error or warning should be shown based on budget
### Mention any other details that might be useful (optional)
If I run `ng build --prod --named-chunks`, a warning/error is shown based on the `st-account-src-lib-st-account-module-ngfactory` budget rule.
|
2.0
|
Cannot set a budget on lazy loaded modules with ng cli 6.0.0 - Defining budget for a lazy loaded module doesn't work with ng cli 6.0.0
### Versions
```
Angular CLI: 6.0.0
Node: 9.5.0
OS: win32 x64
Angular: 6.0.0
... animations, cli, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, router, service-worker
Package Version
------------------------------------------------------------
@angular-devkit/architect 0.6.0
@angular-devkit/build-angular 0.6.0
@angular-devkit/build-ng-packagr 0.6.0
@angular-devkit/build-optimizer 0.6.0
@angular-devkit/core 0.6.0
@angular-devkit/schematics 0.6.0
@angular/pwa 0.6.0
@ngtools/json-schema 1.1.0
@ngtools/webpack 6.0.0
@schematics/angular 0.6.0
@schematics/package-update <error>
@schematics/update 0.6.0
ng-packagr 3.0.0-rc.2
rxjs 6.1.0
typescript 2.7.2
webpack 4.6.0
```
### Repro steps
* Define an application with lazy loaded modules
* Declare in the angular.json file a budget for one of the lazy loaded modules (I tried several names, based on a lazy loaded module named _st-account_)
```json
"budgets": [
{"type": "bundle", "name": "st-account", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "1.7d269d24f67b262c9dc9.js", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccount", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccount#Module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account.module#StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account/src/lib/st-account.module#StAccountModule", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account/src/lib/st-account.module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account.module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account-module", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "st-account-src-lib-st-account-module-ngfactory", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" },
{"type": "bundle", "name": "1", "baseline": "1kb", "maximumWarning": "1kb", "maximumError": "2kb" }
]
```
* Run `ng build --prod`
### Observed behavior
No error or warning is shown.
### Desired behavior
An error or warning should be shown based on budget
### Mention any other details that might be useful (optional)
If I run `ng build --prod --named-chunks`, a warning/error is shown based on the `st-account-src-lib-st-account-module-ngfactory` budget rule.
|
non_process
|
cannot set a budget on lazy loaded modules with ng cli defining budget for a lazy loaded module doesn t work with ng cli versions angular cli node os angular animations cli common compiler compiler cli core forms http language service platform browser platform browser dynamic router service worker package version angular devkit architect angular devkit build angular angular devkit build ng packagr angular devkit build optimizer angular devkit core angular devkit schematics angular pwa ngtools json schema ngtools webpack schematics angular schematics package update schematics update ng packagr rc rxjs typescript webpack repro steps defines an application with lazyloaded modules declare in angular json file a budget for one of the lazy loaded module i tried several names based on a lazy loaded module named st account json budgets type bundle name st account baseline maximumwarning maximumerror type bundle name js baseline maximumwarning maximumerror type bundle name staccount baseline maximumwarning maximumerror type bundle name staccount module baseline maximumwarning maximumerror type bundle name staccountmodule baseline maximumwarning maximumerror type bundle name st account src lib st account module staccountmodule baseline maximumwarning maximumerror type bundle name st account src lib st account module staccountmodule baseline maximumwarning maximumerror type bundle name st account src lib st account module baseline maximumwarning maximumerror type bundle name st account src lib st account module baseline maximumwarning maximumerror type bundle name st account src lib st account module baseline maximumwarning maximumerror type bundle name st account src lib st account module ngfactory baseline maximumwarning maximumerror type bundle name baseline maximumwarning maximumerror run ng build prod observed behavior no error or warning is shown desired behavior an error or warning should be shown based on budget mention any other details that might be useful optional 
if i run ng build prod named chunks a warning error is shown based on the st account src lib st account module ngfactory budget rule
| 0
|
19,380
| 25,516,668,654
|
IssuesEvent
|
2022-11-28 16:51:55
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/filter] OTTL config field does not match example
|
bug priority:p1 processor/filter
|
### Component(s)
processor/filter
### What happened?
## Description
Filter processor [OTTL config section](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor#ottl) has a table that lists `spans.span` as the correct config field for the `Span` context, but the sample shown has `traces.span` instead. The table also lists `logs.log` while the example shows `logs.log_record`.
This may also be a misunderstanding of the docs on my part, but I had understood that the `config -> ottlcontext` table was supposed to list the config fields required by the filter processor.
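For reference, here is how the two spellings in question would each appear in a collector config (the condition is illustrative; which spelling the processor actually accepts is exactly what this issue is about):

```yaml
processors:
  filter:
    # Spelling from the docs table:
    spans:
      span:
        - 'attributes["http.route"] == "/health"'
    # Spelling from the docs sample (shown commented out, since only one would be used):
    # traces:
    #   span:
    #     - 'attributes["http.route"] == "/health"'
```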
### Collector version
latest
### Environment information
## Environment
Docs
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
[processor/filter] OTTL config field does not match example - ### Component(s)
processor/filter
### What happened?
## Description
Filter processor [OTTL config section](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor#ottl) has a table that lists `spans.span` as the correct config field for the `Span` context, but the sample shown has `traces.span` instead. The table also lists `logs.log` while the example shows `logs.log_record`.
This may also be a misunderstanding of the docs on my part, but I had understood that the `config -> ottlcontext` table was supposed to list the config fields required by the filter processor.
### Collector version
latest
### Environment information
## Environment
Docs
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
process
|
ottl config field does not match example component s processor filter what happened description filter processor has a table that lists spans span as the correct config field for span context the sample shown has traces span instead though the table also lists logs log but the example shows logs log record this may also be a misunderstanding related to the docs but i had that the config ottlcontext was supposed to represent the necessary config fields for the filter processor collector version latest environment information environment docs opentelemetry collector configuration no response log output no response additional context no response
| 1
|
260,557
| 22,631,350,082
|
IssuesEvent
|
2022-06-30 14:53:32
|
mikeyjwilliams/news-scraper
|
https://api.github.com/repos/mikeyjwilliams/news-scraper
|
closed
|
as a tester I want to verify on correct website yahoo.com
|
test
|
# As a tester I want to verify on correct website yahoo.com
- why
- to make sure we are on the correct endpoint for all other testing
- how to verify
- verify with assert
- verify URL or title contains yahoo
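The assert-based check could be sketched in Python (plain string arguments here; in the real suite the URL and title would come from the browser driver):

```python
def on_yahoo(url, title):
    """Verify the suite is on the correct endpoint before other tests run."""
    # Verify via URL or title, as described above.
    return "yahoo" in url.lower() or "yahoo" in title.lower()

assert on_yahoo("https://www.yahoo.com/", "Yahoo")
assert not on_yahoo("https://example.com/", "Example Domain")
```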
|
1.0
|
as a tester I want to verify on correct website yahoo.com - # As a tester I want to verify on correct website yahoo.com
- why
- to make sure we are on the correct endpoint for all other testing
- how to verify
- verify with assert
- verify URL or title contains yahoo
|
non_process
|
as a tester i want to verify on correct website yahoo com as a tester i want to verify on correct website yahoo com why to make sure on correct end point for all other testing how to verify verify with assert verify url or title contains yahoo
| 0
|
5,814
| 8,651,156,912
|
IssuesEvent
|
2018-11-27 01:45:55
|
gfrebello/qs-trip-planning-procedure
|
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
|
closed
|
Code front end to fill user information
|
Priority:Very High Process:Implement Requirement
|
Create a page for the user to fill in and confirm their personal information.
|
1.0
|
Code front end to fill user information - Create a page for the user to fill in and confirm their personal information.
|
process
|
code front end to fill user information create a page for the use to fill and confirm their personal information
| 1
|
106,184
| 4,264,238,606
|
IssuesEvent
|
2016-07-12 06:00:56
|
Lord-Ptolemy/Rosalina-Bottings
|
https://api.github.com/repos/Lord-Ptolemy/Rosalina-Bottings
|
opened
|
Get the weather
|
Feature Request Low Priority
|
http://openweathermap.org/api
Allow people to look up the weather for whatever reason.
Hm, if only I knew there was going to be a thunderstorm today.
|
1.0
|
Get the weather - http://openweathermap.org/api
Allow people to look up the weather for whatever reason.
Hm, if only I knew there was going to be a thunderstorm today.
|
non_process
|
get the weather allow people to look up the weather for whatever reason hm if only i knew there was going to be a thunderstorm today
| 0
|
14,105
| 16,993,918,787
|
IssuesEvent
|
2021-07-01 02:12:56
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Wrong results with Intersection process
|
Bug Feedback Processing
|
I've processed polygon data with the intersection algorithm.
The result of that intersection isn't complete: certain zones of the polygons are skipped, for reasons the log file does not explain.
The execution should have created a polygon between the red line and the black line, but this zone was skipped.

|
1.0
|
Wrong results with Intersection process - I've processed polygon data with the intersection algorithm.
The result of that intersection isn't complete: certain zones of the polygons are skipped, for reasons the log file does not explain.
The execution should have created a polygon between the red line and the black line, but this zone was skipped.

|
process
|
wrong results with intersection process i ve processed data with the intersection algorithm for data that correspond to polygons the result of that intersection isn t complete certain zones of the polygons are skipped by reasons undetermined in the log file the result of execution shoulded create a polygon between de red line a black line but this zone was skipped
| 1
|
215,081
| 16,628,171,138
|
IssuesEvent
|
2021-06-03 12:26:38
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Snapshot `params` in Middleware tests
|
kind/tech topic: middleware topic: tests
|
The current tests do not snapshot the `params` the middlewares receive as input. As this is the "public" API of Middlewares, this would be useful.
|
1.0
|
Snapshot `params` in Middleware tests - The current tests do not snapshot the `params` the middlewares receive as input. As this is the "public" API of Middlewares, this would be useful.
|
non_process
|
snapshot params in middleware tests the current tests do not snapshot the params the middlewares receive as input as this is the public api of middlewares this would be useful
| 0
|
237,074
| 19,592,077,654
|
IssuesEvent
|
2022-01-05 14:03:44
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/kpi_network·ts - apis SecuritySolution Endpoints Kpi Network With filebeat Make sure that we get KpiNetwork networkEvents data
|
failed-test Team:SIEM Team: SecuritySolution
|
A test failed on a tracked branch
```
Error: expected 0 to sort of equal 6157
at Assertion.assert (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/api_integration/apis/security_solution/kpi_network.ts:149:45)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '0',
expected: '6157',
showDiff: true
}
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/16640/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/kpi_network·ts","test.name":"apis SecuritySolution Endpoints Kpi Network With filebeat Make sure that we get KpiNetwork networkEvents data","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/kpi_network·ts - apis SecuritySolution Endpoints Kpi Network With filebeat Make sure that we get KpiNetwork networkEvents data - A test failed on a tracked branch
```
Error: expected 0 to sort of equal 6157
at Assertion.assert (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (test/api_integration/apis/security_solution/kpi_network.ts:149:45)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/dev/shm/workspace/parallel/4/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) {
actual: '0',
expected: '6157',
showDiff: true
}
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/16640/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/kpi_network·ts","test.name":"apis SecuritySolution Endpoints Kpi Network With filebeat Make sure that we get KpiNetwork networkEvents data","test.failCount":1}} -->
|
non_process
|
failing test x pack api integration tests x pack test api integration apis security solution kpi network·ts apis securitysolution endpoints kpi network with filebeat make sure that we get kpinetwork networkevents data a test failed on a tracked branch error expected to sort of equal at assertion assert dev shm workspace parallel kibana node modules kbn expect expect js at assertion eql dev shm workspace parallel kibana node modules kbn expect expect js at context test api integration apis security solution kpi network ts at runmicrotasks at processticksandrejections internal process task queues js at object apply dev shm workspace parallel kibana node modules kbn test target node functional test runner lib mocha wrap function js actual expected showdiff true first failure
| 0
|
755,474
| 26,430,134,605
|
IssuesEvent
|
2023-01-14 18:05:56
|
battlecode/galaxy
|
https://api.github.com/repos/battlecode/galaxy
|
opened
|
Put all submission deadlines for tournaments 30 seconds past the hour, and docs this too
|
type: doc type: feature priority: p1 critical module: backend
|
writing as an issue cuz i'm forgetful and want to be organized
|
1.0
|
Put all submission deadlines for tournaments 30 seconds past the hour, and docs this too - writing as an issue cuz i'm forgetful and want to be organized
|
non_process
|
put all submission deadlines for tournaments seconds past the hour and docs this too writing as an issue cuz i m forgetful and want to be organized
| 0
|
16,850
| 22,107,322,584
|
IssuesEvent
|
2022-06-01 18:02:54
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.57
|
enhancement P1 process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.57.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
### Alternatives
_No response_
|
1.0
|
Release checklist 0.57 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on [relevant issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing [open](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.57.0) for milestone
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Integration
- [x] Deploy to VM
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
- [x] Migrations tested against mainnet clone
## Previewnet
- [x] Deploy to VM
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on nothing for milestone github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release integration deploy to vm performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests migrations tested against mainnet clone previewnet deploy to vm testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm alternatives no response
| 1
|
144,421
| 13,105,716,057
|
IssuesEvent
|
2020-08-04 12:41:15
|
lucrae/texlite
|
https://api.github.com/repos/lucrae/texlite
|
opened
|
Internal documentation
|
documentation
|
For the sake of clarity and future development, better internal documentation (full docstrings for **all** functions, type hinting, etc.) would be very good for the codebase, especially while it is still fairly small.
|
1.0
|
Internal documentation - For the sake of clarity and future development, better internal documentation (full docstrings for **all** functions, type hinting, etc.) would be very good for the codebase, especially while it is still fairly small.
|
non_process
|
internal documentation for the sake of clarity and future development better internal documentation full docstrings for all functions type hinting etc would be very good for the codebase especially while it is still fairly small
| 0
|
350,847
| 10,509,624,840
|
IssuesEvent
|
2019-09-27 11:29:49
|
ehimennlab/ProjectE-2019
|
https://api.github.com/repos/ehimennlab/ProjectE-2019
|
closed
|
Add a DB to Laravel
|
Priority: high
|
## WHY
- I want to use a DB (MySQL) in the Laravel application
## WHAT
- Change the configuration so that migrations can be run with Laravel
|
1.0
|
Add a DB to Laravel - ## WHY
- I want to use a DB (MySQL) in the Laravel application
## WHAT
- Change the configuration so that migrations can be run with Laravel
|
non_process
|
laravelにdb追加 why laravelアプリケーションでdb使いたい(mysql) what laravelでmigrationできるように設定を変更する
| 0
|
17,968
| 23,983,468,299
|
IssuesEvent
|
2022-09-13 16:54:17
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
closed
|
Parse CSV or TSV content in Events
|
enhancement plugin - processor
|
Some Events will have a single line from a comma-separated value (CSV) or tab-separated value (TSV) file. Data Prepper should be able to parse an individual CSV line and add fields to the Event for that CSV line.
Pipeline authors should have the option to have each CSV/TSV line output either:
* Specific key names from column indices (e.g. "column 2 maps to status code")
* Output the columns into an array based on the index
- [x] #1613
- [x] #1614
- [x] #1615
- [x] #1616
- [x] #1617
- [x] #1619
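The two output modes requested above can be sketched in Python (`parse_csv_line` and its column names are illustrative, not Data Prepper's actual API):

```python
import csv

def parse_csv_line(line, column_names=None, delimiter=","):
    """Parse one CSV/TSV line from an Event.

    With column_names, map specific key names to column indices
    (e.g. column 2 -> "status"); otherwise return the columns as an
    array addressed by index.
    """
    fields = next(csv.reader([line], delimiter=delimiter))
    if column_names:
        return dict(zip(column_names, fields))
    return fields

# Named-key mode:
event = parse_csv_line("GET,/index.html,200", ["method", "path", "status"])
# Index mode (TSV):
columns = parse_csv_line("GET\t/index.html\t200", delimiter="\t")
```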
|
1.0
|
Parse CSV or TSV content in Events - Some Events will have a single line from a comma-separated value (CSV) or tab-separated value (TSV) file. Data Prepper should be able to parse an individual CSV line and add fields to the Event for that CSV line.
Pipeline authors should have the option to have each CSV/TSV line output either:
* Specific key names from column indices (e.g. "column 2 maps to status code")
* Output the columns into an array based on the index
- [x] #1613
- [x] #1614
- [x] #1615
- [x] #1616
- [x] #1617
- [x] #1619
|
process
|
parse csv or tsv content in events some events will have a single line from a comma separated value csv or tab separated value tsv file data prepper should be able to parse an individual csv line and add fields to the event for that csv line pipeline authors should have the option to have the each csv tsv line output either specific key names from column indices e g column maps to status code output the columns into an array based on the index
| 1
|
19,232
| 25,386,353,936
|
IssuesEvent
|
2022-11-21 22:16:31
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
closed
|
Revisit whether Terraform setup can overtake the Kokoro integration tests
|
type: process priority: p3
|
In Jan 2023, let's revisit whether the Terraform setup can overtake the Kokoro integration tests. In November, we kept "Test: Terraform Integration" as an independent check because we were not sure about the reliability of the Terraform and the new GCP project. We didn't want it to disrupt our release workflow.
What statistics will help us to determine the readiness of the setup in Jan 2023?
|
1.0
|
Revisit whether Terraform setup can overtake the Kokoro integration tests - In Jan 2023, let's revisit whether the Terraform setup can overtake the Kokoro integration tests. In November, we kept "Test: Terraform Integration" as an independent check because we were not sure about the reliability of the Terraform and the new GCP project. We didn't want it to disrupt our release workflow.
What statistics will help us to determine the readiness of the setup in Jan 2023?
|
process
|
revisit whether terraform setup can overtake the kokoro integration tests in jan let s revisit whether the terraform setup can overtake the kokoro integration tests in november we kept test terraform integration as an independent check because we were not sure about the reliability of the terraform and the new gcp project we didn t want it to disrupt our release workflow what statistics will help us to determine the readiness of the setup in jan
| 1
|
309,369
| 26,661,111,524
|
IssuesEvent
|
2023-01-25 21:12:36
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Generalization test for the tag Contratos - Dados dos contratos - Confins
|
generalization test development template - ABO (21) tag - Contratos subtag - Dados dos contratos
|
DoD: Run the generalization test of the validator for the tag Contratos - Dados dos contratos for the municipality of Confins.
|
1.0
|
Generalization test for the tag Contratos - Dados dos contratos - Confins - DoD: Run the generalization test of the validator for the tag Contratos - Dados dos contratos for the municipality of Confins.
|
non_process
|
teste de generalizacao para a tag contratos dados dos contratos confins dod realizar o teste de generalização do validador da tag contratos dados dos contratos para o município de confins
| 0
|
22,379
| 31,142,282,675
|
IssuesEvent
|
2023-08-16 01:43:59
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: AssertionError: expected stub to have been called with arguments Error: thrown from after:spec handler, but it was never called
|
process: flaky test topic: flake ❄️ E2E stale
|
### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/42175/workflows/8624a615-e24f-47e5-8126-e11b0acc78d8/jobs/1750016/tests#failed-test-0
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/server/test/unit/open_project_spec.js#L208
### Analysis
<img width="1144" alt="Screen Shot 2022-08-18 at 4 59 28 PM" src="https://user-images.githubusercontent.com/26726429/185514896-a7e2440c-25fa-40cb-a7d8-04ca9a23ebe5.png">
### Cypress Version
10.5.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: AssertionError: expected stub to have been called with arguments Error: thrown from after:spec handler, but it was never called - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/42175/workflows/8624a615-e24f-47e5-8126-e11b0acc78d8/jobs/1750016/tests#failed-test-0
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/server/test/unit/open_project_spec.js#L208
### Analysis
<img width="1144" alt="Screen Shot 2022-08-18 at 4 59 28 PM" src="https://user-images.githubusercontent.com/26726429/185514896-a7e2440c-25fa-40cb-a7d8-04ca9a23ebe5.png">
### Cypress Version
10.5.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test assertionerror expected stub to have been called with arguments error thrown from after spec handler but it was never called link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
592,765
| 17,929,694,551
|
IssuesEvent
|
2021-09-10 07:32:50
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
converted wildfly and dotnet application error out with permission denied
|
kind/bug priority/High area/regression
|
/kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:** MacOS
**Output of `odo version`:**
## How did you run odo exactly?
`"odo", "create", "--s2i", "wildfly", "wildfly-app"`
"odo", "create", "--s2i", "dotnet:2.0"
## Actual behavior
```
+ PV_MNT_PT=/opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] + ODO_UTILS_DIR=/opt/odo
INFO[2021-04-14T23:04:26Z] [odo] + '[' '!' -f /opt/app-root/conf/supervisor.conf ']'
INFO[2021-04-14T23:04:26Z] [odo] + cp -rp /opt/odo/conf /opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] cp: cannot create directory '/opt/app-root/conf': Permission denied
INFO[2021-04-14T23:04:26Z] [odo] ✗ Executing s2i-assemble command "/opt/odo/bin/s2i-setup && /opt/odo/bin/assemble-and-restart" [179ms]
INFO[2021-04-14T23:04:26Z] [odo] ✗ Failed to start component with name wildfly-app. Error: Failed to create the component: command execution failed: unable to execute the run command: unable to exec command [/bin/sh -c /opt/odo/bin/s2i-setup && /opt/odo/bin/assemble-and-restart]:
INFO[2021-04-14T23:04:26Z] [odo] + set -eo pipefail
INFO[2021-04-14T23:04:26Z] [odo] + PV_MNT_PT=/opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] + ODO_UTILS_DIR=/opt/odo
INFO[2021-04-14T23:04:26Z] [odo] + '[' '!' -f /opt/app-root/conf/supervisor.conf ']'
INFO[2021-04-14T23:04:26Z] [odo] + cp -rp /opt/odo/conf /opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] cp: cannot create directory '/opt/app-root/conf': Permission denied
INFO[2021-04-14T23:04:26Z] [odo] : error while streaming command: command terminated with exit code 1
INFO[2021-04-14T23:04:26Z] Deleting project: e2e-source-test36ipy
INFO[2021-04-14T23:04:26Z] Running odo with args [odo project delete e2e-source-test36ipy -f] `
```
```
Generating MSBuild file /opt/app-root/src/obj/asp-net-hello-world.csproj.nuget.g.targets.
[odo] Restore completed in 4.99 sec for /opt/app-root/src/asp-net-hello-world.csproj.
[odo] ---> Publishing application...
[odo] Microsoft (R) Build Engine version 15.4.8.50081 for .NET Core
[odo] Copyright (C) Microsoft Corporation. All rights reserved.
[odo]
[odo] asp-net-hello-world -> /opt/app-root/src/bin/Release/netcoreapp2.0/asp-net-hello-world.dll
[odo] asp-net-hello-world -> /opt/app-root/app/
[odo] chmod: changing permissions of '/opt/app-root/app/default-cmd.sh': Operation not permitted
[odo] : error while streaming command: command terminated with exit code 1
```
## Expected behavior
Component should be working
- [ ] Persistent volumes should be mounted into `/opt/app-root` and in `$DEPLOYMENTS_DIR`
- [ ] `$DEPLOYMENTS_DIR` can be inside `/opt/app-root`, in this case mount only one PV in `/opt/app-root`
- [ ] An init container should copy the content of the original `/opt/app-root` into the persistent volume mounted on this endpoint
- [ ] Declare a `preStart` event in the devfile to run this init container (blocked by #4901)
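A rough sketch of the init-container copy step described above (image and volume names are placeholders, not odo's actual manifest):

```yaml
initContainers:
  - name: copy-app-root
    image: <component-image>          # placeholder for the component's image
    command: ["sh", "-c", "cp -a /opt/app-root/. /mnt/app-root/"]
    volumeMounts:
      - name: app-root-pv             # PV later mounted at /opt/app-root
        mountPath: /mnt/app-root
```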
## Any logs, error output, etc?
|
1.0
|
converted wildfly and dotnet application error out with permission denied - /kind bug
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the Google group if you have a question rather than a bug or feature request.
The group is at: https://groups.google.com/forum/#!forum/odo-users
Thanks for understanding, and for contributing to the project!
-->
## What versions of software are you using?
**Operating System:** MacOS
**Output of `odo version`:**
## How did you run odo exactly?
`"odo", "create", "--s2i", "wildfly", "wildfly-app"`
"odo", "create", "--s2i", "dotnet:2.0"
## Actual behavior
```
+ PV_MNT_PT=/opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] + ODO_UTILS_DIR=/opt/odo
INFO[2021-04-14T23:04:26Z] [odo] + '[' '!' -f /opt/app-root/conf/supervisor.conf ']'
INFO[2021-04-14T23:04:26Z] [odo] + cp -rp /opt/odo/conf /opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] cp: cannot create directory '/opt/app-root/conf': Permission denied
INFO[2021-04-14T23:04:26Z] [odo] ✗ Executing s2i-assemble command "/opt/odo/bin/s2i-setup && /opt/odo/bin/assemble-and-restart" [179ms]
INFO[2021-04-14T23:04:26Z] [odo] ✗ Failed to start component with name wildfly-app. Error: Failed to create the component: command execution failed: unable to execute the run command: unable to exec command [/bin/sh -c /opt/odo/bin/s2i-setup && /opt/odo/bin/assemble-and-restart]:
INFO[2021-04-14T23:04:26Z] [odo] + set -eo pipefail
INFO[2021-04-14T23:04:26Z] [odo] + PV_MNT_PT=/opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] + ODO_UTILS_DIR=/opt/odo
INFO[2021-04-14T23:04:26Z] [odo] + '[' '!' -f /opt/app-root/conf/supervisor.conf ']'
INFO[2021-04-14T23:04:26Z] [odo] + cp -rp /opt/odo/conf /opt/app-root
INFO[2021-04-14T23:04:26Z] [odo] cp: cannot create directory '/opt/app-root/conf': Permission denied
INFO[2021-04-14T23:04:26Z] [odo] : error while streaming command: command terminated with exit code 1
INFO[2021-04-14T23:04:26Z] Deleting project: e2e-source-test36ipy
INFO[2021-04-14T23:04:26Z] Running odo with args [odo project delete e2e-source-test36ipy -f] `
```
```
Generating MSBuild file /opt/app-root/src/obj/asp-net-hello-world.csproj.nuget.g.targets.
[odo] Restore completed in 4.99 sec for /opt/app-root/src/asp-net-hello-world.csproj.
[odo] ---> Publishing application...
[odo] Microsoft (R) Build Engine version 15.4.8.50081 for .NET Core
[odo] Copyright (C) Microsoft Corporation. All rights reserved.
[odo]
[odo] asp-net-hello-world -> /opt/app-root/src/bin/Release/netcoreapp2.0/asp-net-hello-world.dll
[odo] asp-net-hello-world -> /opt/app-root/app/
[odo] chmod: changing permissions of '/opt/app-root/app/default-cmd.sh': Operation not permitted
[odo] : error while streaming command: command terminated with exit code 1
```
## Expected behavior
Component should be working
- [ ] Persistent volumes should be mounted into `/opt/app-root` and in `$DEPLOYMENTS_DIR`
- [ ] `$DEPLOYMENTS_DIR` can be inside `/opt/app-root`, in this case mount only one PV in `/opt/app-root`
- [ ] An init container should copy the content of the original `/opt/app-root` into the persistent volume mounted on this endpoint
- [ ] Declare a `preStart` event in the devfile to run this init container (blocked by #4901)
## Any logs, error output, etc?
|
non_process
|
converted wildfly and dotnet application error out with permission denied kind bug welcome we kindly ask you to fill out the issue template below use the google group if you have a question rather than a bug or feature request the group is at thanks for understanding and for contributing to the project what versions of software are you using operating system macos output of odo version how did you run odo exactly odo create wildfly wildfly app odo create dotnet actual behavior pv mnt pt opt app root info odo utils dir opt odo info info cp rp opt odo conf opt app root info cp cannot create directory opt app root conf permission denied info ✗ executing assemble command opt odo bin setup opt odo bin assemble and restart info ✗ failed to start component with name wildfly app error failed to create the component command execution failed unable to execute the run command unable to exec command info set eo pipefail info pv mnt pt opt app root info odo utils dir opt odo info info cp rp opt odo conf opt app root info cp cannot create directory opt app root conf permission denied info error while streaming command command terminated with exit code info deleting project source info running odo with args generating msbuild file opt app root src obj asp net hello world csproj nuget g targets restore completed in sec for opt app root src asp net hello world csproj publishing application microsoft r build engine version for net core copyright c microsoft corporation all rights reserved asp net hello world opt app root src bin release asp net hello world dll asp net hello world opt app root app chmod changing permissions of opt app root app default cmd sh operation not permitted error while streaming command command terminated with exit code expected behavior component should be working persistent volumes should be mounted into opt app root and in deployments dir deployments dir can be inside opt app root in this case mount only one pv in opt app root an init container should copy 
the content of the original opt app root into the persistent volume mounted on this endpoint declare a prestart event in the devfile to run this init container blocked by any logs error output etc
| 0
|
205,390
| 15,612,814,031
|
IssuesEvent
|
2021-03-19 15:43:53
|
COMP4350/Servus
|
https://api.github.com/repos/COMP4350/Servus
|
closed
|
Add Sprint 2 backend tests
|
backend testing
|
Go back and test the code written in sprint 2 for the backend (mongodb models + nodejs/express api)
|
1.0
|
Add Sprint 2 backend tests - Go back and test the code written in sprint 2 for the backend (mongodb models + nodejs/express api)
|
non_process
|
add sprint backend tests go back and test the code written in sprint for the backend mongodb models nodejs express api
| 0
|
250,181
| 18,874,638,553
|
IssuesEvent
|
2021-11-13 20:09:16
|
girlscript/winter-of-contributing
|
https://api.github.com/repos/girlscript/winter-of-contributing
|
closed
|
DSA 2.1.19.2 suffix trie and suffix array in strings
|
documentation GWOC21 Video Audio DSA Assigned
|
## Description
Explain suffix trie in strings with code.
## Task
- [ ] Audio
- [ ] Video
- [ ] Documentation
## Note
- While requesting the issue to be assigned, please mention your **NAME & BATCH NUMBER**.
- **Mention the type of contribution you want to make**.
- Kindly refer to the [Issue Guidelines](https://github.com/girlscript/winter-of-contributing/blob/DSA/DSA/CONTRIBUTING.md#issues).
- **Do not use the assign command for this issue. Follow the above procedure.**
- In case you see that the issue has already been assigned, try to request a different task within the same issue.
- Changes should be made inside the ````DSA/directory```` & ````DSA```` branch.
- Name of the file should be the same as the issue.
- This issue is only for 'GWOC' contributors of the 'DSA' Domain.
### Domain
DSA
### Type of Contribution
Audio, Video, Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
|
1.0
|
DSA 2.1.19.2 suffix trie and suffix array in strings - ## Description
Explain suffix trie in strings with code.
## Task
- [ ] Audio
- [ ] Video
- [ ] Documentation
## Note
- While requesting the issue to be assigned, please mention your **NAME & BATCH NUMBER**.
- **Mention the type of contribution you want to make**.
- Kindly refer to the [Issue Guidelines](https://github.com/girlscript/winter-of-contributing/blob/DSA/DSA/CONTRIBUTING.md#issues).
- **Do not use the assign command for this issue. Follow the above procedure.**
- In case you see that the issue has already been assigned, try to request a different task within the same issue.
- Changes should be made inside the ````DSA/directory```` & ````DSA```` branch.
- Name of the file should be the same as the issue.
- This issue is only for 'GWOC' contributors of the 'DSA' Domain.
### Domain
DSA
### Type of Contribution
Audio, Video, Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
|
non_process
|
dsa suffix trie and suffix array in strings description explain suffix trie in strings with code task audio video documentation note while requesting the issue to be assigned please mention your name batch number mention the type of contribution you want to make kindly refer to the do not use the assign command for this issue follow the above procedure in case you see that the issue has already been assigned try to request a different task within the same issue changes should be made inside the dsa directory dsa branch name of the file should be the same as the issue this issue is only for gwoc contributors of the dsa domain domain dsa type of contribution audio video documentation code of conduct i follow of this project
| 0
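The record above asks for an explanation of suffix tries and suffix arrays with code. As a hedged illustration, here is a naive Python sketch (O(n²) construction using nested dicts, not the linear-time algorithms a full contribution would likely cover; the `"$"` end-marker convention is an assumption of this sketch):

```python
# Naive suffix trie: insert every suffix of the text into a trie of
# nested dicts. Substring queries become root-to-node walks.

def build_suffix_trie(text):
    """Return the root of a suffix trie (nested dicts) for `text`."""
    root = {}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node = node.setdefault(ch, {})
        node["$"] = i  # mark the end of a suffix with its start index
    return root

def contains(root, pattern):
    """True iff `pattern` occurs as a substring of the indexed text."""
    node = root
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

def suffix_array(text):
    """Naive suffix array: suffix start indices in lexicographic order."""
    return sorted(range(len(text)), key=lambda i: text[i:])
```

For example, every substring of "banana" is a prefix of some suffix, so `contains` answers substring queries directly from the trie.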
|
4,965
| 7,806,213,395
|
IssuesEvent
|
2018-06-11 13:28:11
|
cptechinc/soft-dpluso
|
https://api.github.com/repos/cptechinc/soft-dpluso
|
closed
|
Fields to change on Processwire
|
PHP Backend Processwire
|
Go through SalesPortals
change the following fields
change_discount to allow_discount
change_price to allow_changeprice
:heavy_plus_sign: allow_belowminprice
:heavy_plus_sign: needs_nonstockadd
:heavy_plus_sign: companyllogo
|
1.0
|
Fields to change on Processwire - Go through SalesPortals
change the following fields
change_discount to allow_discount
change_price to allow_changeprice
:heavy_plus_sign: allow_belowminprice
:heavy_plus_sign: needs_nonstockadd
:heavy_plus_sign: companyllogo
|
process
|
fields to change on processwire go through salesportals change the following fields change discount to allow discount change price to allow changeprice heavy plus sign allow belowminprice heavy plus sign needs nonstockadd heavy plus sign companyllogo
| 1
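The checklist in the record above amounts to a rename map plus three new fields. ProcessWire is PHP, so this is only an illustrative Python sketch of the migration logic (the function name and list-based field model are assumptions, not the actual ProcessWire API):

```python
def migrate_fields(fields):
    """Apply the renames and additions from the checklist to a field list."""
    renames = {"change_discount": "allow_discount",
               "change_price": "allow_changeprice"}
    additions = ["allow_belowminprice", "needs_nonstockadd", "companyllogo"]
    # Rename in place, preserving order, then append any missing new fields.
    migrated = [renames.get(f, f) for f in fields]
    for f in additions:
        if f not in migrated:
            migrated.append(f)
    return migrated
```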
|
132,494
| 10,757,165,222
|
IssuesEvent
|
2019-10-31 12:46:11
|
elastic/cloud-on-k8s
|
https://api.github.com/repos/elastic/cloud-on-k8s
|
closed
|
Running e2e tests on different versions of vanilla k8s
|
:ci >test
|
We will use https://github.com/kubernetes-sigs/kind to test ECK against different versions of vanilla k8s.
If we will go with `kind`:
- [x] Setup new build agent on CI, approx `n1-standard-16` with 16 CPU and 60 GB RAM should be enough (details for setup: https://github.com/elastic/infra/blob/master/terraform/providers/gcp/env/elastic-apps/helm/values/gobld/devops-ci.yml)
- [x] Build new image or simply download `kind` binary to existed one?
- [x] Create new CI job (both JJBB yml and Jenkins pipeline)
If we will go with Ansible:
- [ ] Evaluate possibility of using Ansible playbook for bootstrapping K8s cluster
- [ ] Build Docker image to run Ansible
- [ ] How to handle SSH keys?
- [ ] Create new CI job (both JJBB yml and Jenkins pipeline)
|
1.0
|
Running e2e tests on different versions of vanilla k8s - We will use https://github.com/kubernetes-sigs/kind to test ECK against different versions of vanilla k8s.
If we will go with `kind`:
- [x] Setup new build agent on CI, approx `n1-standard-16` with 16 CPU and 60 GB RAM should be enough (details for setup: https://github.com/elastic/infra/blob/master/terraform/providers/gcp/env/elastic-apps/helm/values/gobld/devops-ci.yml)
- [x] Build new image or simply download `kind` binary to existed one?
- [x] Create new CI job (both JJBB yml and Jenkins pipeline)
If we will go with Ansible:
- [ ] Evaluate possibility of using Ansible playbook for bootstrapping K8s cluster
- [ ] Build Docker image to run Ansible
- [ ] How to handle SSH keys?
- [ ] Create new CI job (both JJBB yml and Jenkins pipeline)
|
non_process
|
running tests on different versions of vanilla we will use to test eck against different versions of vanilla if we will go with kind setup new build agent on ci approx standard with cpu and gb ram should be enough details for setup build new image or simply download kind binary to existed one create new ci job both jjbb yml and jenkins pipeline if we will go with ansible evaluate possibility of using ansible playbook for bootstrapping cluster build docker image to run ansible how to handle ssh keys create new ci job both jjbb yml and jenkins pipeline
| 0
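Testing against multiple vanilla Kubernetes versions with `kind` mostly comes down to parameterizing the node image per version. A hedged Python sketch of building the CLI invocation (the `kindest/node:v<version>` image-tag form follows kind's published naming, and the cluster name is a placeholder assumption):

```python
def kind_create_cmd(k8s_version, cluster_name="eck-e2e"):
    """Build the `kind create cluster` invocation for a given k8s version."""
    return ["kind", "create", "cluster",
            "--name", cluster_name,
            "--image", "kindest/node:v%s" % k8s_version]

# A CI job could then iterate over the versions under test and run each
# command with subprocess.run(), tearing the cluster down afterwards.
```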
|
20,030
| 26,512,357,290
|
IssuesEvent
|
2023-01-18 18:06:31
|
dtcenter/MET
|
https://api.github.com/repos/dtcenter/MET
|
closed
|
Bugfix: PB2NC total obs count log is incorrect
|
type: bug requestor: METplus Team MET: PreProcessing Tools (Point) priority: high
|
This was discovered while going through the [METplus tutorial first pb2nc example](https://dtcenter.org/metplus-practical-session-guide-version-5-0/session-2-grid-obs/met-tool-pb2nc/configure)
## Describe the Problem ##
Since MET 10.1, the incorrect total number of obs is reported in the logs. It looks like the total number is higher than the actual number of obs that are written to the output file.
Potential fixes (unverified):
1) It looks like n_derived_obs was added and included in the total count (n_file_obs + n_derived_obs), but n_file_obs is also incremented when n_derived_obs is on [this line](https://github.com/dtcenter/MET/blob/4e3fd4503d48c4888f301d34bdec89efc07b4c72/src/tools/other/pb2nc/pb2nc.cc#L1584). Removing n_file_obs++ from this line may fix the issue?
2) There is a n_total_obs counter variable that is incremented but never used. Using this value in the log instead may produce the correct result.
### Expected Behavior ###
The number of observations written to the NetCDF output file should match the number from the log noting the "Total observations retained or derived."
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
*1. Go to the [METplus tutorial first pb2nc example](https://dtcenter.org/metplus-practical-session-guide-version-5-0/session-2-grid-obs/met-tool-pb2nc/configure)*
*2. Set the values from the instructions*
*3. Run the command (or something similar):*
```
pb2nc \
${METPLUS_DATA}/met_test/data/sample_obs/prepbufr/ndas.t00z.prepbufr.tm12.20070401.nr \
tutorial_pb_run1.nc \
PB2NCConfig_tutorial_run1 \
-v 2
```
*4. Confirm the log message looks like this:*
> DEBUG 2: Total observations retained or derived = 68870
*5. Run again using pb2nc from MET 10.0.0 or earlier and confirm that the log message looks like this:*
> DEBUG 2: Total observations retained or derived = 52491
and the NetCDF output matches this number.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2792542 NOAA Base
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Organization** level **Project** for support of the current coordinated release
- [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Development** issue
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Development** issue
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
|
1.0
|
Bugfix: PB2NC total obs count log is incorrect - This was discovered while going through the [METplus tutorial first pb2nc example](https://dtcenter.org/metplus-practical-session-guide-version-5-0/session-2-grid-obs/met-tool-pb2nc/configure)
## Describe the Problem ##
Since MET 10.1, the incorrect total number of obs is reported in the logs. It looks like the total number is higher than the actual number of obs that are written to the output file.
Potential fixes (unverified):
1) It looks like n_derived_obs was added and included in the total count (n_file_obs + n_derived_obs), but n_file_obs is also incremented when n_derived_obs is on [this line](https://github.com/dtcenter/MET/blob/4e3fd4503d48c4888f301d34bdec89efc07b4c72/src/tools/other/pb2nc/pb2nc.cc#L1584). Removing n_file_obs++ from this line may fix the issue?
2) There is a n_total_obs counter variable that is incremented but never used. Using this value in the log instead may produce the correct result.
### Expected Behavior ###
The number of observations written to the NetCDF output file should match the number from the log noting the "Total observations retained or derived."
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
*1. Go to the [METplus tutorial first pb2nc example](https://dtcenter.org/metplus-practical-session-guide-version-5-0/session-2-grid-obs/met-tool-pb2nc/configure)*
*2. Set the values from the instructions*
*3. Run the command (or something similar):*
```
pb2nc \
${METPLUS_DATA}/met_test/data/sample_obs/prepbufr/ndas.t00z.prepbufr.tm12.20070401.nr \
tutorial_pb_run1.nc \
PB2NCConfig_tutorial_run1 \
-v 2
```
*4. Confirm the log message looks like this:*
> DEBUG 2: Total observations retained or derived = 68870
*5. Run again using pb2nc from MET 10.0.0 or earlier and confirm that the log message looks like this:*
> DEBUG 2: Total observations retained or derived = 52491
and the NetCDF output matches this number.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2792542 NOAA Base
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Organization** level **Project** for support of the current coordinated release
- [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Development** issue
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Development** issue
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
|
process
|
bugfix total obs count log is incorrect this was discovered while going through the describe the problem since met the incorrect total number of obs is reported in the logs it looks like the total number is higher than the actual number of obs that are written to the output file potential fixes unverified it looks like n derived obs was added and included in the total count n file obs n derived obs but n file obs is also incremented when n derived obs is on removing n file obs from this line may fix the issue there is a n total obs counter variable that is incremented but never used using this value in the log instead may produce the correct result expected behavior the number of observations written to the netcdf output file should match the number from the log noting the total observations retained or derived environment describe your runtime environment machine e g hpc name linux workstation mac laptop os e g redhat linux macos software version number s to reproduce describe the steps to reproduce the behavior go to the set the values from the instructions run the command or something similar metplus data met test data sample obs prepbufr ndas prepbufr nr tutorial pb nc tutorial v confirm the log message looks like this debug total observations retained or derived run again using from met or earlier and confirm that the log message looks like this debug total observations retained or derived and the netcdf output matches this number relevant deadlines list relevant project deadlines here or state none funding source noaa base define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix 
version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and development issue select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and development issue select repository level development cycle project for the next official release select milestone as the next official version close this issue
| 1
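The suspected double-count in the record above (potential fix 1) can be illustrated with a small Python sketch. The counter names mirror the ones cited from `pb2nc.cc`, but this is only a model of the reported behavior, not the actual C++ code:

```python
def count_obs(observations, buggy=True):
    """Tally file vs. derived observations.

    With buggy=True, a derived observation also bumps n_file_obs
    (mirroring the reported extra increment), so the logged total
    n_file_obs + n_derived_obs exceeds the number actually written.
    """
    n_file_obs = 0
    n_derived_obs = 0
    for obs in observations:
        if obs == "derived":
            n_derived_obs += 1
            if buggy:
                n_file_obs += 1  # the suspect extra increment
        else:
            n_file_obs += 1
    return n_file_obs + n_derived_obs
```

With 3 file and 2 derived observations, the buggy path reports 7 while only 5 records exist, matching the shape of the 68870-vs-52491 discrepancy described in the issue.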
|
7,253
| 10,419,249,183
|
IssuesEvent
|
2019-09-15 15:11:20
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
MongoDB driver DNS seedlist format
|
7.1 >_< Working & Scheduled [-_-] In Process ^_^ Improvement
|
The mongodb driver has a hard-coded URL scheme in [Mongodb/Driver.php](https://github.com/PHPSocialNetwork/phpfastcache/blob/master/lib/Phpfastcache/Drivers/Mongodb/Driver.php#L245) on line 245: "mongodb://"
However, according [to: the MongoDB Docs](https://docs.mongodb.com/manual/reference/connection-string/) a new URL scheme "mongodb+srv" is usable with Atlas replica-sets
Can you allow this setting to be configurable. Also there are other recommended connection parameters which are unsupported
retryWrites=true&w=majority
|
1.0
|
MongoDB driver DNS seedlist format - The mongodb driver has a hard-coded URL scheme in [Mongodb/Driver.php](https://github.com/PHPSocialNetwork/phpfastcache/blob/master/lib/Phpfastcache/Drivers/Mongodb/Driver.php#L245) on line 245: "mongodb://"
However, according [to: the MongoDB Docs](https://docs.mongodb.com/manual/reference/connection-string/) a new URL scheme "mongodb+srv" is usable with Atlas replica-sets
Can you allow this setting to be configurable. Also there are other recommended connection parameters which are unsupported
retryWrites=true&w=majority
|
process
|
mongodb driver dns seedlist format the mongodb driver has a hard coded url scheme in on line mongodb however according a new url scheme mongodb srv is usable with atlas replica sets can you allow this setting to be configurable also there are other recommended connection parameters which are unsupported retrywrites true w majority
| 1
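The request in the record above is essentially to make the URI scheme and trailing options configurable instead of hard-coding `mongodb://`. A hedged sketch in Python (phpfastcache itself is PHP; the function and parameter names here are illustrative assumptions, not its actual API):

```python
def build_mongo_uri(host, scheme="mongodb", username=None, password=None,
                    options=None):
    """Assemble a MongoDB connection string with a configurable scheme.

    `scheme` may be "mongodb" or the DNS-seedlist form "mongodb+srv"
    used by Atlas replica sets.
    """
    if scheme not in ("mongodb", "mongodb+srv"):
        raise ValueError("unsupported scheme: %s" % scheme)
    auth = ""
    if username:
        auth = username + ((":" + password) if password else "") + "@"
    uri = "%s://%s%s" % (scheme, auth, host)
    if options:  # e.g. the recommended retryWrites=true&w=majority
        uri += "/?" + "&".join("%s=%s" % kv for kv in sorted(options.items()))
    return uri
```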
|
49,120
| 12,290,298,595
|
IssuesEvent
|
2020-05-10 02:58:38
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
bazel build; encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swif
|
type:build/install
|
**System information**
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from: source
- TensorFlow version: 1.14.0
- Python version:3.7.7
- Installed using virtualenv? pip? conda?: conda
- Bazel version: 0.24.1
- trying to build tensorflow cpu not gpu
I followed the intructions and based my versions found here: https://www.tensorflow.org/install/source#tested_build_configurations
**Describe the problem**
When i run the command
``` bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package ``` it gives the following error:
```
Starting local Bazel server and connecting to it...
ERROR: error loading package '': Encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swift': java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.9.0/rules_swift.0.9.0.tar.gz] to /home/domingo_cm/.cache/bazel/_bazel_domingo_cm/f08b2f8b39197d5a57f7b557c17f0caf/external/build_bazel_rules_swift/rules_swift.0.9.0.tar.gz: Unknown host: github.com
ERROR: error loading package '': Encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swift': java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.9.0/rules_swift.0.9.0.tar.gz] to /home/domingo_cm/.cache/bazel/_bazel_domingo_cm/f08b2f8b39197d5a57f7b557c17f0caf/external/build_bazel_rules_swift/rules_swift.0.9.0.tar.gz: Unknown host: github.com
INFO: Elapsed time: 14.145s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
```
**Provide the exact sequence of commands / steps that you executed before running into the problem**
1. git clone https://github.com/tensorflow/tensorflow.git
2. cd tensorflow
3. git checkout r1.14
4. ./configure
5. (assuing bazel is installed ) bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
**Any other info / logs**
I am running under a corporate proxy network if it gives any importance, git version is 2.17.1
|
1.0
|
bazel build; encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swif - **System information**
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from: source
- TensorFlow version: 1.14.0
- Python version:3.7.7
- Installed using virtualenv? pip? conda?: conda
- Bazel version: 0.24.1
- trying to build tensorflow cpu not gpu
I followed the intructions and based my versions found here: https://www.tensorflow.org/install/source#tested_build_configurations
**Describe the problem**
When i run the command
``` bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package ``` it gives the following error:
```
Starting local Bazel server and connecting to it...
ERROR: error loading package '': Encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swift': java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.9.0/rules_swift.0.9.0.tar.gz] to /home/domingo_cm/.cache/bazel/_bazel_domingo_cm/f08b2f8b39197d5a57f7b557c17f0caf/external/build_bazel_rules_swift/rules_swift.0.9.0.tar.gz: Unknown host: github.com
ERROR: error loading package '': Encountered error while reading extension file 'swift/repositories.bzl': no such package '@build_bazel_rules_swift//swift': java.io.IOException: Error downloading [https://github.com/bazelbuild/rules_swift/releases/download/0.9.0/rules_swift.0.9.0.tar.gz] to /home/domingo_cm/.cache/bazel/_bazel_domingo_cm/f08b2f8b39197d5a57f7b557c17f0caf/external/build_bazel_rules_swift/rules_swift.0.9.0.tar.gz: Unknown host: github.com
INFO: Elapsed time: 14.145s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
```
**Provide the exact sequence of commands / steps that you executed before running into the problem**
1. git clone https://github.com/tensorflow/tensorflow.git
2. cd tensorflow
3. git checkout r1.14
4. ./configure
5. (assuing bazel is installed ) bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
**Any other info / logs**
I am running under a corporate proxy network if it gives any importance, git version is 2.17.1
|
non_process
|
bazel build encountered error while reading extension file swift repositories bzl no such package build bazel rules swift swif system information os platform and distribution ubuntu tensorflow installed from source tensorflow version python version installed using virtualenv pip conda conda bazel version trying to build tensorflow cpu not gpu i followed the intructions and based my versions found here describe the problem when i run the command bazel build config opt tensorflow tools pip package build pip package it gives the following error starting local bazel server and connecting to it error error loading package encountered error while reading extension file swift repositories bzl no such package build bazel rules swift swift java io ioexception error downloading to home domingo cm cache bazel bazel domingo cm external build bazel rules swift rules swift tar gz unknown host github com error error loading package encountered error while reading extension file swift repositories bzl no such package build bazel rules swift swift java io ioexception error downloading to home domingo cm cache bazel bazel domingo cm external build bazel rules swift rules swift tar gz unknown host github com info elapsed time info processes failed build did not complete successfully packages loaded provide the exact sequence of commands steps that you executed before running into the problem git clone cd tensorflow git checkout configure assuing bazel is installed bazel build config opt tensorflow tools pip package build pip package any other info logs i am running under a corporate proxy network if it gives any importance git version is
| 0
|
123,784
| 10,290,656,871
|
IssuesEvent
|
2019-08-27 11:35:22
|
Exa-Networks/exabgp
|
https://api.github.com/repos/Exa-Networks/exabgp
|
closed
|
md5: Incorrect padding
|
bug fixed-need-testing
|
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### OS
<!---
Mention the OS you are running ExaBGP on (Linux variant if relevant)
-->
```
$ hostnamectl
Static hostname: nl-ams02c-ispbgp01.xxx.xxx
Icon name: computer-vm
Chassis: vm
Machine ID: fe42ef15e88f89520e0946b6201079d0
Boot ID: 54c23f2abe05477889261b29f5735a34
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-514.21.1.el7.x86_64
Architecture: x86-64
```
##### VERSION
<!--- Paste verbatim the output from “exabgp --version” between quotes below -->
```
ExaBGP : 4.0.2-1c737d99
Python : 3.4.5 (default, Dec 11 2017, 14:22:24) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Uname : Linux nl-ams02c-ispbgp01.xxx.xxxt 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64
Root : /usr
```
##### ENVIRONMENT
<!--- Paste verbatim the output from “exabgp --di” (let us know if the output is empty) between quotes below -->
```
empty
```
<!--- You can also use gist.github.com links for larger files -->
##### CONFIGURATION
<!--- Paste verbatim your configuration file -->
```
template {
neighbor default {
router-id xxxx ;
local-address xxx ;
local-as xxx ;
peer-as xxx ;
hold-time 180 ;
group-updates false ;
}
}
neighbor xxx {
inherit default ;
description "* xxx *" ;
family {
ipv4 unicast ;
}
md5-password xxx ;
}
```
<!--- You can also use gist.github.com links for larger files -->
##### SUMMARY
<!--- Explain the problem briefly -->
When installing ExaBGP via pip3 and trying to establish a BGP session (IP v4) and using `md5` (the password provided is clear text without quotes, not hashed) using the configuration above,
I am getting the following error (server-side):
##### STEPS TO REPRODUCE
<!---
For bugs, please if possible, please run with the "-d" option and provide the full output,
If not possible, please provide the full fault section as reported by ExaBGP
-->
```
$ exabgp --env /home/xxx/xxx/cfg/tpl/exabgp.rrdoc.ini /home/xxx/xxx/cfg/tpl/exabgp.rrdoc.xxx.cfg --debug
```
```
14:55:13 | 3398 | reactor | ExaBGP version : 4.0.2-1c737d99
14:55:13 | 3398 | reactor | Python version : 3.4.5 (default, Dec 11 2017, 14:22:24) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
14:55:13 | 3398 | reactor | System Uname : #1 SMP Thu May 25 17:04:51 UTC 2017
14:55:13 | 3398 | reactor | System MaxInt : 9223372036854775807
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor | <class 'binascii.Error'>
14:55:13 | 3398 | reactor | Incorrect padding
14:55:13 | 3398 | reactor | Traceback (most recent call last):
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 527, in _run
14:55:13 | 3398 | reactor | for action in self._establish():
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 314, in _establish
14:55:13 | 3398 | reactor | for action in self._connect():
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 264, in _connect
14:55:13 | 3398 | reactor | connected = six.next(generator)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/protocol.py", line 107, in connect
14:55:13 | 3398 | reactor | self.connection = Outgoing(afi,peer,local,self.port,md5,md5_base64,ttl_out)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/network/outgoing.py", line 31, in __init__
14:55:13 | 3398 | reactor | MD5(self.io,self.peer,port,md5,md5_base64)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/network/tcp.py", line 138, in MD5
14:55:13 | 3398 | reactor | md5 = base64.b64decode(md5)
14:55:13 | 3398 | reactor | File "/usr/lib64/python3.4/base64.py", line 90, in b64decode
14:55:13 | 3398 | reactor | return binascii.a2b_base64(s)
14:55:13 | 3398 | reactor | binascii.Error: Incorrect padding
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Establish the BGP session (with a JUNOS router).
##### ACTUAL RESULTS
<!--- What actually happened? -->
Runtime error and traceback pasted above, related to md5.
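The traceback bottoms out in `base64.b64decode`, which the 4.0.2 code applies to the configured `md5-password` (see `tcp.py` line 138 above). A minimal sketch of that failure mode, using only the standard library — any clear-text password whose length is not a multiple of four raises the same `binascii.Error`:

```python
import base64
import binascii

def decode_md5_password(password: str) -> bytes:
    # Mirrors the failing call in exabgp/reactor/network/tcp.py:
    # the configured password is fed straight to the base64 decoder.
    return base64.b64decode(password)

try:
    decode_md5_password("secret")  # clear text, 6 chars: 6 % 4 != 0
except binascii.Error as err:
    print(f"rejected: {err}")
```

So a clear-text password only decodes by accident when its length happens to be a multiple of four; passing it base64-encoded (or using a release where clear-text and base64 passwords are handled separately) avoids the decode step failing.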
##### IMPORTANCE
<!-- Please let us know if the issue is affecting you in a production environment -->
The BGP session is with a production vRR.
|
1.0
|
md5: Incorrect padding - <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### OS
<!---
Mention the OS you are running ExaBGP on (Linux variant if relevant)
-->
```
$ hostnamectl
Static hostname: nl-ams02c-ispbgp01.xxx.xxx
Icon name: computer-vm
Chassis: vm
Machine ID: fe42ef15e88f89520e0946b6201079d0
Boot ID: 54c23f2abe05477889261b29f5735a34
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-514.21.1.el7.x86_64
Architecture: x86-64
```
##### VERSION
<!--- Paste verbatim the output from “exabgp --version” between quotes below -->
```
ExaBGP : 4.0.2-1c737d99
Python : 3.4.5 (default, Dec 11 2017, 14:22:24) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Uname : Linux nl-ams02c-ispbgp01.xxx.xxxt 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64
Root : /usr
```
##### ENVIRONMENT
<!--- Paste verbatim the output from “exabgp --di” (let us know if the output is empty) between quotes below -->
```
empty
```
<!--- You can also use gist.github.com links for larger files -->
##### CONFIGURATION
<!--- Paste verbatim your configuration file -->
```
template {
neighbor default {
router-id xxxx ;
local-address xxx ;
local-as xxx ;
peer-as xxx ;
hold-time 180 ;
group-updates false ;
}
}
neighbor xxx {
inherit default ;
description "* xxx *" ;
family {
ipv4 unicast ;
}
md5-password xxx ;
}
```
<!--- You can also use gist.github.com links for larger files -->
##### SUMMARY
<!--- Explain the problem briefly -->
When installing ExaBGP via pip3 and trying to establish a BGP session (IP v4) and using `md5` (the password provided is clear text without quotes, not hashed) using the configuration above,
I am getting the following error (server-side):
##### STEPS TO REPRODUCE
<!---
For bugs, please if possible, please run with the "-d" option and provide the full output,
If not possible, please provide the full fault section as reported by ExaBGP
-->
```
$ exabgp --env /home/xxx/xxx/cfg/tpl/exabgp.rrdoc.ini /home/xxx/xxx/cfg/tpl/exabgp.rrdoc.xxx.cfg --debug
```
```
14:55:13 | 3398 | reactor | ExaBGP version : 4.0.2-1c737d99
14:55:13 | 3398 | reactor | Python version : 3.4.5 (default, Dec 11 2017, 14:22:24) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
14:55:13 | 3398 | reactor | System Uname : #1 SMP Thu May 25 17:04:51 UTC 2017
14:55:13 | 3398 | reactor | System MaxInt : 9223372036854775807
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor |
14:55:13 | 3398 | reactor | <class 'binascii.Error'>
14:55:13 | 3398 | reactor | Incorrect padding
14:55:13 | 3398 | reactor | Traceback (most recent call last):
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 527, in _run
14:55:13 | 3398 | reactor | for action in self._establish():
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 314, in _establish
14:55:13 | 3398 | reactor | for action in self._connect():
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/peer.py", line 264, in _connect
14:55:13 | 3398 | reactor | connected = six.next(generator)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/protocol.py", line 107, in connect
14:55:13 | 3398 | reactor | self.connection = Outgoing(afi,peer,local,self.port,md5,md5_base64,ttl_out)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/network/outgoing.py", line 31, in __init__
14:55:13 | 3398 | reactor | MD5(self.io,self.peer,port,md5,md5_base64)
14:55:13 | 3398 | reactor | File "/usr/lib/python3.4/site-packages/exabgp/reactor/network/tcp.py", line 138, in MD5
14:55:13 | 3398 | reactor | md5 = base64.b64decode(md5)
14:55:13 | 3398 | reactor | File "/usr/lib64/python3.4/base64.py", line 90, in b64decode
14:55:13 | 3398 | reactor | return binascii.a2b_base64(s)
14:55:13 | 3398 | reactor | binascii.Error: Incorrect padding
```
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Establish the BGP session (with a JUNOS router).
##### ACTUAL RESULTS
<!--- What actually happened? -->
Runtime error and traceback pasted above, related to md5.
##### IMPORTANCE
<!-- Please let us know if the issue is affecting you in a production environment -->
The BGP session is with a production vRR.
|
non_process
|
incorrect padding verify first that your issue request is not already reported on github also test if the latest release and master branch are affected too issue type bug report os mention the os you are running exabgp on linux variant if relevant hostnamectl static hostname nl xxx xxx icon name computer vm chassis vm machine id boot id virtualization kvm operating system centos linux core cpe os name cpe o centos centos kernel linux architecture version exabgp python default dec uname linux nl xxx xxxt smp thu may utc root usr environment empty configuration template neighbor default router id xxxx local address xxx local as xxx peer as xxx hold time group updates false neighbor xxx inherit default description xxx family unicast password xxx summary when installing exabgp via and trying to establish a bgp session ip and using the password provided is clear text without quotes not hashed using the configuration above i am getting the following error server side steps to reproduce for bugs please if possible please run with the d option and provide the full output if not possible please provide the full fault section as reported by exabgp exabgp env home xxx xxx cfg tpl exabgp rrdoc ini home xxx xxx cfg tpl exabgp rrdoc xxx cfg debug reactor exabgp version reactor python version default dec reactor system uname smp thu may utc reactor system maxint reactor reactor reactor reactor reactor reactor incorrect padding reactor traceback most recent call last reactor file usr lib site packages exabgp reactor peer py line in run reactor for action in self establish reactor file usr lib site packages exabgp reactor peer py line in establish reactor for action in self connect reactor file usr lib site packages exabgp reactor peer py line in connect reactor connected six next generator reactor file usr lib site packages exabgp reactor protocol py line in connect reactor self connection outgoing afi peer local self port ttl out reactor file usr lib site packages exabgp reactor 
network outgoing py line in init reactor self io self peer port reactor file usr lib site packages exabgp reactor network tcp py line in reactor reactor file usr py line in reactor return binascii s reactor binascii error incorrect padding expected results establish the bgp session with a junos router actual results runtime error and traceback pasted above related to importance the bgp session is with a production vrr
| 0
|
11,528
| 14,403,188,493
|
IssuesEvent
|
2020-12-03 15:46:03
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Support Yarn 2 without PnP
|
process/candidate team/client tech/typescript
|
Yarn 2 without PnP should work with Prisma Client, but it doesn't.
The binary and schema file can't be found.
|
1.0
|
Support Yarn 2 without PnP - Yarn 2 without PnP should work with Prisma Client, but it doesn't.
The binary and schema file can't be found.
|
process
|
support yarn without pnp yarn without pnp should work with prisma client but it doesn t the binary and schema file can t be found
| 1
|
6,783
| 9,083,822,884
|
IssuesEvent
|
2019-02-17 23:35:25
|
SpongePowered/SpongeForge
|
https://api.github.com/repos/SpongePowered/SpongeForge
|
closed
|
Astral Sorcery Starlight Infuser incompatibility.
|
type: mod incompatibility version: 1.12
|
**I am currently running**
SpongeForge version: spongeforge-1.12.2-2768-7.1.5-RC3522
Forge version: forge-1.12.2-14.23.5.2781-universal
Java version: V8 U191
Operating System: Win 10
Plugins/Mods:
- astralsorcery-1.12.2-1.10.10
- Baubles-1.12-1.5.2
**Issue Description**
The Starlight Infuser does not work when running sponge. Loading the world in singleplayer, the infuser works as intended.
|
True
|
Astral Sorcery Starlight Infuser incompatibility. - **I am currently running**
SpongeForge version: spongeforge-1.12.2-2768-7.1.5-RC3522
Forge version: forge-1.12.2-14.23.5.2781-universal
Java version: V8 U191
Operating System: Win 10
Plugins/Mods:
- astralsorcery-1.12.2-1.10.10
- Baubles-1.12-1.5.2
**Issue Description**
The Starlight Infuser does not work when running sponge. Loading the world in singleplayer, the infuser works as intended.
|
non_process
|
astral sorcery starlight infuser incompatibility i am currently running spongeforge version spongeforge forge version forge universal java version operating system win plugins mods astralsorcery baubles issue description the starlight infuser does not work when running sponge loading the world in singleplayer the infuser works as intended
| 0
|
49,771
| 3,004,245,818
|
IssuesEvent
|
2015-07-25 18:50:09
|
bethlakshmi/GBE2
|
https://api.github.com/repos/bethlakshmi/GBE2
|
closed
|
Editing a scheduled event obliterates the title
|
bug High Priority
|
When I schedule a class (add a session),
Then go to the new scheduled event and change the description.
...the title is removed from the event.
I read the code and I believe it's because the form post does not include a hidden value for the title, which means that the upload of new event information erases the title because the form post doesn't include one.
I'm not sure how this could have worked last year... but apparently it did.
I only tested with classes.
Not a blocker for august, because we don't intend to schedule events in August... but it's a blocker when we want to do event scheduling at all...
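The code reading above suggests two fixes: either post the title as a hidden field, or make the update only overwrite fields the form actually submitted. A minimal sketch of the second approach, with hypothetical field names (not GBE2's actual form code):

```python
def apply_edit(event: dict, post_data: dict) -> dict:
    # Only overwrite fields present in the POST payload; a form post
    # without a 'title' key must not erase the stored title.
    updated = dict(event)
    for field in ("title", "description"):
        if field in post_data:
            updated[field] = post_data[field]
    return updated

event = {"title": "Intro Session", "description": "old text"}
post = {"description": "new text"}          # edit form posts no title
print(apply_edit(event, post)["title"])     # title survives the edit
```

With the current behavior, the missing key is treated as an empty value and the title is wiped, which matches what the bug report describes.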
|
1.0
|
Editing a scheduled event obliterates the title - When I schedule a class (add a session),
Then go to the new scheduled event and change the description.
...the title is removed from the event.
I read the code and I believe it's because the form post does not include a hidden value for the title, which means that the upload of new event information erases the title because the form post doesn't include one.
I'm not sure how this could have worked last year... but apparently it did.
I only tested with classes.
Not a blocker for august, because we don't intend to schedule events in August... but it's a blocker when we want to do event scheduling at all...
|
non_process
|
editing a scheduled event obliterates the title when i schedule a class add a session then go to the new scheduled event and change the description the title is removed from the event i read the code and i believe it s because the form post does not include a hidden value for the title which means that the upload of new event information erases the title because the form post doesn t include one i m not sure how this could have worked last year but apparently it did i only tested with classes not a blocker for august because we don t intend to schedule events in august but it s a blocker when we want to do event scheduling at all
| 0
|
58,654
| 16,674,502,773
|
IssuesEvent
|
2021-06-07 14:42:06
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
appservice_stream_position can get into such a state that ASes break silently
|
S-Major T-Defect
|
I wasted 3h on arasphere last night trying to work out why it wouldn't send stuff to ASes. turns out the value in that table was somehow mangled. deleting it didn't help, but manually reinserting max streampos did. Plus it's inconsistently named wrt all the other AS stuff :(
|
1.0
|
appservice_stream_position can get into such a state that ASes break silently - I wasted 3h on arasphere last night trying to work out why it wouldn't send stuff to ASes. turns out the value in that table was somehow mangled. deleting it didn't help, but manually reinserting max streampos did. Plus it's inconsistently named wrt all the other AS stuff :(
|
non_process
|
appservice stream position can get into such a state that ases break silently i wasted on arasphere last night trying to work out why it wouldn t send stuff to ases turns out the value in that table was somehow mangled deleting it didn t help but manually reinserting max streampos did plus it s inconsistently named wrt all the other as stuff
| 0
|
417,166
| 12,156,376,029
|
IssuesEvent
|
2020-04-25 17:03:55
|
spencermountain/compromise
|
https://api.github.com/repos/spencermountain/compromise
|
closed
|
Runtime error: replaceWith a contraction
|
bug priority
|
Using this code
```
var doc = nlp("The only reason he doesn't continue is because of how tired he feels.");
doc.verbs().toPastTense();
console.log(doc.text());
```
I would expect the following console log:
` The only reason he did not continue is because of how tired he felt`
Actual result is:
`The only reasoned he did not doesn’t`
Error log is:
```
Compromise error: Linked list broken in phrase 'the-yf98lfd'
terms @ compromise.js:1267
matchAll @ compromise.js:2696
match_1 @ compromise.js:2799
lookBehind @ compromise.js:1984
(anonymous) @ compromise.js:6517
exports.lookBehind @ compromise.js:6516
toBe @ compromise.js:12996
conjugate @ compromise.js:13041
(anonymous) @ compromise.js:13228
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1267 Compromise error: Linked list broken in phrase 'the-yf98lfd'
```
|
1.0
|
Runtime error: replaceWith a contraction - Using this code
```
var doc = nlp("The only reason he doesn't continue is because of how tired he feels.");
doc.verbs().toPastTense();
console.log(doc.text());
```
I would expect the following console log:
` The only reason he did not continue is because of how tired he felt`
Actual result is:
`The only reasoned he did not doesn’t`
Error log is:
```
Compromise error: Linked list broken in phrase 'the-yf98lfd'
terms @ compromise.js:1267
matchAll @ compromise.js:2696
match_1 @ compromise.js:2799
lookBehind @ compromise.js:1984
(anonymous) @ compromise.js:6517
exports.lookBehind @ compromise.js:6516
toBe @ compromise.js:12996
conjugate @ compromise.js:13041
(anonymous) @ compromise.js:13228
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1614
(anonymous) @ compromise.js:1613
appendPhrase @ compromise.js:1611
replace @ compromise.js:1797
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1360 Compromise error: Linked list broken. Missing term 'null' in phrase 'the-yf98lfd'
hasId @ compromise.js:1360
(anonymous) @ compromise.js:1714
(anonymous) @ compromise.js:1713
shrinkAll @ compromise.js:1711
deletePhrase @ compromise.js:1756
replace @ compromise.js:1801
(anonymous) @ compromise.js:7147
replaceWith @ compromise.js:7105
(anonymous) @ compromise.js:13231
(anonymous) @ compromise.js:6740
forEach @ compromise.js:6732
toPastTense @ compromise.js:13225
(anonymous) @ contentscript.js:12
compromise.js:1267 Compromise error: Linked list broken in phrase 'the-yf98lfd'
```
|
non_process
|
runtime error replacewith a contraction using this code var doc nlp the only reason he doesn t continue is because of how tired he feels doc verbs topasttense console log doc text i would expect the following console log the only reason he did not continue is because of how tired he felt actual result is the only reasoned he did not doesn’t error log is compromise error linked list broken in phrase the terms compromise js matchall compromise js match compromise js lookbehind compromise js anonymous compromise js exports lookbehind compromise js tobe compromise js conjugate compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js appendphrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js shrinkall compromise js deletephrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js appendphrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous 
compromise js shrinkall compromise js deletephrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js appendphrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js shrinkall compromise js deletephrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js appendphrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken missing term null in phrase the hasid compromise js anonymous compromise js anonymous compromise js shrinkall compromise js deletephrase compromise js replace compromise js anonymous compromise js replacewith compromise js anonymous compromise js anonymous compromise js foreach compromise js topasttense compromise js anonymous contentscript js compromise js compromise error linked list broken in phrase the
| 0
|
45,649
| 13,131,639,811
|
IssuesEvent
|
2020-08-06 17:23:18
|
jgeraigery/kraft-heinz-merger
|
https://api.github.com/repos/jgeraigery/kraft-heinz-merger
|
opened
|
WS-2018-0590 (High) detected in diff-1.4.0.tgz
|
security vulnerability
|
## WS-2018-0590 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>diff-1.4.0.tgz</b></p></summary>
<p>A javascript text diff implementation.</p>
<p>Library home page: <a href="https://registry.npmjs.org/diff/-/diff-1.4.0.tgz">https://registry.npmjs.org/diff/-/diff-1.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/kraft-heinz-merger/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/kraft-heinz-merger/node_modules/diff/package.json</p>
<p>
Dependency Hierarchy:
- nightwatch-0.9.21.tgz (Root Library)
- mocha-nightwatch-3.2.2.tgz
- :x: **diff-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/af6fe510cfa7228a06515d410aeabf6ecca51b7a">af6fe510cfa7228a06515d410aeabf6ecca51b7a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.
<p>Publish Date: 2018-03-05
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1552148>WS-2018-0590</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0">https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0</a></p>
<p>Release Date: 2019-06-11</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"diff","packageVersion":"1.4.0","isTransitiveDependency":true,"dependencyTree":"nightwatch:0.9.21;mocha-nightwatch:3.2.2;diff:1.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.5.0"}],"vulnerabilityIdentifier":"WS-2018-0590","vulnerabilityDetails":"A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.","vulnerabilityUrl":"https://bugzilla.redhat.com/show_bug.cgi?id\u003d1552148","cvss2Severity":"high","cvss2Score":"7.0","extraData":{}}</REMEDIATE> -->
|
True
|
WS-2018-0590 (High) detected in diff-1.4.0.tgz - ## WS-2018-0590 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>diff-1.4.0.tgz</b></p></summary>
<p>A javascript text diff implementation.</p>
<p>Library home page: <a href="https://registry.npmjs.org/diff/-/diff-1.4.0.tgz">https://registry.npmjs.org/diff/-/diff-1.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/kraft-heinz-merger/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/kraft-heinz-merger/node_modules/diff/package.json</p>
<p>
Dependency Hierarchy:
- nightwatch-0.9.21.tgz (Root Library)
- mocha-nightwatch-3.2.2.tgz
- :x: **diff-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/af6fe510cfa7228a06515d410aeabf6ecca51b7a">af6fe510cfa7228a06515d410aeabf6ecca51b7a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.
<p>Publish Date: 2018-03-05
<p>URL: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1552148>WS-2018-0590</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0">https://github.com/kpdecker/jsdiff/commit/2aec4298639bf30fb88a00b356bf404d3551b8c0</a></p>
<p>Release Date: 2019-06-11</p>
<p>Fix Resolution: 3.5.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"diff","packageVersion":"1.4.0","isTransitiveDependency":true,"dependencyTree":"nightwatch:0.9.21;mocha-nightwatch:3.2.2;diff:1.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.5.0"}],"vulnerabilityIdentifier":"WS-2018-0590","vulnerabilityDetails":"A vulnerability was found in diff before v3.5.0, the affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks.","vulnerabilityUrl":"https://bugzilla.redhat.com/show_bug.cgi?id\u003d1552148","cvss2Severity":"high","cvss2Score":"7.0","extraData":{}}</REMEDIATE> -->
|
non_process
|
ws high detected in diff tgz ws high severity vulnerability vulnerable library diff tgz a javascript text diff implementation library home page a href path to dependency file tmp ws scm kraft heinz merger package json path to vulnerable library tmp ws scm kraft heinz merger node modules diff package json dependency hierarchy nightwatch tgz root library mocha nightwatch tgz x diff tgz vulnerable library found in head commit a href vulnerability details a vulnerability was found in diff before the affected versions of this package are vulnerable to regular expression denial of service redos attacks publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails a vulnerability was found in diff before the affected versions of this package are vulnerable to regular expression denial of service redos attacks vulnerabilityurl
| 0
|
5,479
| 8,355,600,224
|
IssuesEvent
|
2018-10-02 16:05:47
|
HumanCellAtlas/dcp-community
|
https://api.github.com/repos/HumanCellAtlas/dcp-community
|
opened
|
Updates to RFC template from reviews
|
rfc-process
|
- [ ] **User Stories** are *Required* rather than *Optional*
- [ ] **Motivation** reference more generic PM plans instead of a _Thematic_ roadmap
- [ ] **Prior Art** add _Community Standards_ as another example
|
1.0
|
Updates to RFC template from reviews - - [ ] **User Stories** are *Required* rather than *Optional*
- [ ] **Motivation** reference more generic PM plans instead of a _Thematic_ roadmap
- [ ] **Prior Art** add _Community Standards_ as another example
|
process
|
updates to rfc template from reviews user stories are required rather than optional motivation reference more generic pm plans instead of a thematic roadmap prior art add community standards as another example
| 1
|
16,958
| 22,319,439,149
|
IssuesEvent
|
2022-06-14 04:01:18
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Extend the Go Client's CreateProcessInstance API with start instructions
|
team/process-automation
|
The Go client should implement the extended CreateProcessInstance API with start instructions.
Specifically, we'll need to add start instructions to the CreateInstanceCommandStep3, the API depends on the design choices made in https://github.com/camunda/zeebe/issues/9388. Let's be inspired by #9398.
Blocked by https://github.com/camunda/zeebe/issues/9388, https://github.com/camunda/zeebe/issues/9396
|
1.0
|
Extend the Go Client's CreateProcessInstance API with start instructions - The Go client should implement the extended CreateProcessInstance API with start instructions.
Specifically, we'll need to add start instructions to the CreateInstanceCommandStep3, the API depends on the design choices made in https://github.com/camunda/zeebe/issues/9388. Let's be inspired by #9398.
Blocked by https://github.com/camunda/zeebe/issues/9388, https://github.com/camunda/zeebe/issues/9396
|
process
|
extend the go client s createprocessinstance api with start instructions the go client should implement the extended createprocessinstance api with start instructions specifically we ll need to add start instructions to the the api depends on the design choices made in let s be inspired by blocked by
| 1
|
10,890
| 13,671,254,781
|
IssuesEvent
|
2020-09-29 06:39:25
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Add metrics for Copr Cache
|
component/coprocessor component/metrics epic/copr-cache type/enhancement
|
## Development Task
cache miss, cache hit, cache evict should be recorded and show in grafana.
|
1.0
|
Add metrics for Copr Cache - ## Development Task
cache miss, cache hit, cache evict should be recorded and show in grafana.
|
process
|
add metrics for copr cache development task cache miss cache hit cache evict should be recorded and show in grafana
| 1
|
11,429
| 14,248,198,428
|
IssuesEvent
|
2020-11-19 12:35:47
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `User` from TiDB
|
challenge-program difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `User` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
1.0
|
UCP: Migrate scalar function `User` from TiDB -
## Description
Port the scalar function `User` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function user from tidb description port the scalar function user from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
285,051
| 21,482,447,970
|
IssuesEvent
|
2022-04-26 19:10:13
|
timoast/signac
|
https://api.github.com/repos/timoast/signac
|
opened
|
Reopening Overlay TF footprints for different feature sets (#1080)
|
documentation
|
Hi,
I'm reopening this because I wasn't successful in trying to implement your suggestion.
On a single footprint plot, I am aiming to overlay:
footprint around motif X from cells Y computed from peakset.1,
footprint around motif X from cells Y computed from peakset.2,
```
instances.1 <- matchMotifs(pwm, StringToGRanges(peakset.1), genome = "mm10", out='positions')
instances.2 <- matchMotifs(pwm, StringToGRanges(peakset.2), genome = "mm10", out='positions')
integrated <- Footprint(
object =integrated,
key=clst,
assay='mac.peaks',
genome = BSgenome.Mmusculus.UCSC.mm10,
regions=c(instances.1, instances.2)
)
```
Running the above gives:
| | 0 % ~calculating Error in as.vector(x, mode) :
coercing an AtomicList object to an atomic vector is supported only for
objects with top-level elements of length <= 1
In addition: Warning message:
In if (any(strand(x = x) == "*")) { :
the condition has length > 1 and only the first element will be used
Running `instances.1` without `instances.2` gives the same error.
Would you be able to suggest a workaround? I can provide more code if needed.
Thanks.
|
1.0
|
Reopening Overlay TF footprints for different feature sets (#1080) - Hi,
I'm reopening this because I wasn't successful in trying to implement your suggestion.
On a single footprint plot, I am aiming to overlay:
footprint around motif X from cells Y computed from peakset.1,
footprint around motif X from cells Y computed from peakset.2,
```
instances.1 <- matchMotifs(pwm, StringToGRanges(peakset.1), genome = "mm10", out='positions')
instances.2 <- matchMotifs(pwm, StringToGRanges(peakset.2), genome = "mm10", out='positions')
integrated <- Footprint(
object =integrated,
key=clst,
assay='mac.peaks',
genome = BSgenome.Mmusculus.UCSC.mm10,
regions=c(instances.1, instances.2)
)
```
Running the above gives:
| | 0 % ~calculating Error in as.vector(x, mode) :
coercing an AtomicList object to an atomic vector is supported only for
objects with top-level elements of length <= 1
In addition: Warning message:
In if (any(strand(x = x) == "*")) { :
the condition has length > 1 and only the first element will be used
Running `instances.1` without `instances.2` gives the same error.
Would you be able to suggest a workaround? I can provide more code if needed.
Thanks.
|
non_process
|
reopening overlay tf footprints for different feature sets hi i m reopening this because i wasn t successful in trying to implement your suggestion on a single footprint plot i am aiming to overlay footprint around motif x from cells y computed from peakset footprint around motif x from cells y computed from peakset instances matchmotifs pwm stringtogranges peakset genome out positions instances matchmotifs pwm stringtogranges peakset genome out positions integrated footprint object integrated key clst assay mac peaks genome bsgenome mmusculus ucsc regions c instances instances running the above gives calculating error in as vector x mode coercing an atomiclist object to an atomic vector is supported only for objects with top level elements of length in addition warning message in if any strand x x the condition has length and only the first element will be used running instances without instances gives the same error would you be able to suggest a workaround i can provide more code if needed thanks
| 0
|
570,467
| 17,023,118,477
|
IssuesEvent
|
2021-07-03 00:27:28
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
OpenLayers compatibility - javascript API for embedding OSM in webpages
|
Component: api Priority: major Resolution: fixed Type: enhancement
|
**[Submitted to the original trac issue database at 3.42pm, Monday, 29th May 2006]**
Decoding what crschmidt said here:
http://lists.openstreetmap.org/pipermail/talk/2006-May/004348.html
so that we can use OpenLayers on the OSM website, and so that OSM's WMS server works with OpenLayers for everyone.
Current tile implementation is broken for adding markers and info windows to the map. OpenLayers will fix that.
|
1.0
|
OpenLayers compatibility - javascript API for embedding OSM in webpages - **[Submitted to the original trac issue database at 3.42pm, Monday, 29th May 2006]**
Decoding what crschmidt said here:
http://lists.openstreetmap.org/pipermail/talk/2006-May/004348.html
so that we can use OpenLayers on the OSM website, and so that OSM's WMS server works with OpenLayers for everyone.
Current tile implementation is broken for adding markers and info windows to the map. OpenLayers will fix that.
|
non_process
|
openlayers compatibility javascript api for embedding osm in webpages decoding what crschmidt said here so that we can use openlayers on the osm website and so that osm s wms server works with openlayers for everyone current tile implementation is broken for adding markers and info windows to the map openlayers will fix that
| 0
|
2,485
| 5,265,240,203
|
IssuesEvent
|
2017-02-04 00:16:48
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Running bash shell programmatically on Mac OS X
|
area-System.Diagnostics.Process question
|
I am trying to figure out how to run a bash shell using .NET Core. Specifically, I want to run a bash shell, run a dotnet [path to dll] command and pass commands to the dll. I tried looking through the test classes in `System.Diagnostics.Process` but I couldn't wrap my head around it. :(
What I hope to achieve is something like on the .NET Framework:
`Process.Start("bash dotnet a.dll start")`
Please let me know if I'm asking the question in the appropriate area. :)
I am running Mac OSX El Capitan 10.11.6
Thank you!
|
1.0
|
Running bash shell programmatically on Mac OS X - I am trying to figure out how to run a bash shell using .NET Core. Specifically, I want to run a bash shell, run a dotnet [path to dll] command and pass commands to the dll. I tried looking through the test classes in `System.Diagnostics.Process` but I couldn't wrap my head around it. :(
What I hope to achieve is something like on the .NET Framework:
`Process.Start("bash dotnet a.dll start")`
Please let me know if I'm asking the question in the appropriate area. :)
I am running Mac OSX El Capitan 10.11.6
Thank you!
|
process
|
running bash shell programmatically on mac os x i am trying to figure out how to run a bash shell using net core specifically i want to run a bash shell run a dotnet command and pass commands to the dll i tried looking through the test classes in system diagnostics process but i couldn t wrap my head around it what i hope to achieve is something like on the net framework process start bash dotnet a dll start please let me know if i m asking the question in the appropriate area i am running mac osx el capitan thank you
| 1
|
19,086
| 25,134,224,845
|
IssuesEvent
|
2022-11-09 17:16:07
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
How to insert new label from an existing one?
|
enhancement question processor/attributes processor/metricstransform
|
### Component(s)
processor/attributes, processor/metricstransform
### Is your feature request related to a problem? Please describe.
I can't find a way to create a label from an existing one as I need to.
I have the existing label `envoy_response_code`, e.g. `envoy_response_code=200` and I want to extract a label `status_class` from it with only the first digit and the rest being replaced with xx, e.g. `status_class=2xx`.
I could do the mapping with something like this
```yaml
- include: envoy_vhost_vcluster_upstream_rq
action: update
operations:
- action: update_label
label: envoy_response_code
value_actions:
- value: 200
new_value: 2xx
- value: 503
new_value: 5xx
```
But then I will lose the original response code which I want to retain. I was expecting to be able to duplicate the label and then do this transformation, but the following isn't working.
```yaml
- include: envoy_vhost_vcluster_upstream_rq
action: update
operations:
- action: add_label
new_label: status_class
new_value: {{envoy_response_code}} # This doesn't work
- action: update_label
label: status_class
value_actions:
- value: 200
new_value: 2xx
- value: 503
new_value: 5xx
```
### Describe the solution you'd like
I want to be able to derive new labels from existing ones.
### Describe alternatives you've considered
I didn't find an alternative using the attributes processor
### Additional context
_No response_
|
2.0
|
How to insert new label from an existing one? - ### Component(s)
processor/attributes, processor/metricstransform
### Is your feature request related to a problem? Please describe.
I can't find a way to create a label from an existing one as I need to.
I have the existing label `envoy_response_code`, e.g. `envoy_response_code=200` and I want to extract a label `status_class` from it with only the first digit and the rest being replaced with xx, e.g. `status_class=2xx`.
I could do the mapping with something like this
```yaml
- include: envoy_vhost_vcluster_upstream_rq
action: update
operations:
- action: update_label
label: envoy_response_code
value_actions:
- value: 200
new_value: 2xx
- value: 503
new_value: 5xx
```
But then I will lose the original response code which I want to retain. I was expecting to be able to duplicate the label and then do this transformation, but the following isn't working.
```yaml
- include: envoy_vhost_vcluster_upstream_rq
action: update
operations:
- action: add_label
new_label: status_class
new_value: {{envoy_response_code}} # This doesn't work
- action: update_label
label: status_class
value_actions:
- value: 200
new_value: 2xx
- value: 503
new_value: 5xx
```
### Describe the solution you'd like
I want to be able to derive new labels from existing ones.
### Describe alternatives you've considered
I didn't find an alternative using the attributes processor
### Additional context
_No response_
|
process
|
how to insert new label from an existing one component s processor attributes processor metricstransform is your feature request related to a problem please describe i can t find a way to create a label from an existing one as i need to i have the existing label envoy response code e g envoy response code and i want to extract a label status class from it with only the first digit and the rest being replaced with xx e g status class i could do the mapping with something like this yaml include envoy vhost vcluster upstream rq action update operations action update label label envoy response code value actions value new value value new value but then i will lose the original response code which i want to retain i was expecting to be able to duplicate the label and then do this transformation but the following isn t working yaml include envoy vhost vcluster upstream rq action update operations action add label new label status class new value envoy response code this doesn t work action update label label status class value actions value new value value new value describe the solution you d like i want to be able to derive new labels from existing ones describe alternatives you ve considered i didn t find an alternative using the attributes processor additional context no response
| 1
|
600,314
| 18,293,248,132
|
IssuesEvent
|
2021-10-05 17:32:16
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
opened
|
Collections Management Page not showing Collection website
|
Priority-Normal (Not urgent) Bug
|
**Describe the bug**
When I go to Manage Collections I see that I have entered a website into the web link data slot.

but when I got to the collection's portal page it says that there is no website provided

Am I entering the website info in the wrong area? I couldn't see another option, but I could have totally missed something
**To Reproduce**
To Manage Data -> Metadata ->Manage Collections -> selected UWYMV:Bird
**Expected behavior**
It would be great to have the website be accessible on the portals page. I'm just obviously not doing something correct.
**Desktop (please complete the following information):**
Mac OS, Chrome
|
1.0
|
Collections Management Page not showing Collection website -
**Describe the bug**
When I go to Manage Collections I see that I have entered a website into the web link data slot.

but when I got to the collection's portal page it says that there is no website provided

Am I entering the website info in the wrong area? I couldn't see another option, but I could have totally missed something
**To Reproduce**
To Manage Data -> Metadata ->Manage Collections -> selected UWYMV:Bird
**Expected behavior**
It would be great to have the website be accessible on the portals page. I'm just obviously not doing something correct.
**Desktop (please complete the following information):**
Mac OS, Chrome
|
non_process
|
collections management page not showing collection website describe the bug when i go to manage collections i see that i have entered a website into the web link data slot but when i got to the collection s portal page it says that there is no website provided am i entering the website info in the wrong area i couldn t see another option but i could have totally missed something to reproduce to manage data metadata manage collections selected uwymv bird expected behavior it would be great to have the website be accessible on the portals page i m just obviously not doing something correct desktop please complete the following information mac os chrome
| 0
|
12,572
| 14,987,321,312
|
IssuesEvent
|
2021-01-28 22:42:09
|
BootBlock/FileSieve
|
https://api.github.com/repos/BootBlock/FileSieve
|
closed
|
Add a CreateFolders copy mode
|
enhancement processing
|
In addition to the Simulate/Copy/Move/Delete copy modes, add a *CreateFolders* mode that processes the current profile and creates sub-folders as normal but doesn't actually copy/move/delete files into those folders.
|
1.0
|
Add a CreateFolders copy mode - In addition to the Simulate/Copy/Move/Delete copy modes, add a *CreateFolders* mode that processes the current profile and creates sub-folders as normal but doesn't actually copy/move/delete files into those folders.
|
process
|
add a createfolders copy mode in addition to the simulate copy move delete copy modes add a createfolders mode that processes the current profile and creates sub folders as normal but doesn t actually copy move delete files into those folders
| 1
|
319,631
| 27,389,927,330
|
IssuesEvent
|
2023-02-28 15:39:11
|
eclipse-openj9/openj9
|
https://api.github.com/repos/eclipse-openj9/openj9
|
closed
|
StringIndexOutOfBoundsException in addFailedTestsGrinderLink()
|
comp:test
|
I've been seeing some occurrences of the following. Not sure what's causing it.
https://openj9-jenkins.osuosl.org/job/Test_openjdk19_j9_sanity.openjdk_ppc64_aix_Nightly/116/console
```
03:49:41 Exception: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
```
Another https://openj9-jenkins.osuosl.org/job/Test_openjdk19_j9_sanity.openjdk_ppc64le_linux_Nightly/117/console
@llxia @AdamBrousseau
|
1.0
|
StringIndexOutOfBoundsException in addFailedTestsGrinderLink() - I've been seeing some occurrences of the following. Not sure what's causing it.
https://openj9-jenkins.osuosl.org/job/Test_openjdk19_j9_sanity.openjdk_ppc64_aix_Nightly/116/console
```
03:49:41 Exception: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
```
Another https://openj9-jenkins.osuosl.org/job/Test_openjdk19_j9_sanity.openjdk_ppc64le_linux_Nightly/117/console
@llxia @AdamBrousseau
|
non_process
|
stringindexoutofboundsexception in addfailedtestsgrinderlink i ve been seeing some occurrences of the following not sure what s causing it exception java lang stringindexoutofboundsexception string index out of range another llxia adambrousseau
| 0
|
10,675
| 13,461,588,567
|
IssuesEvent
|
2020-09-09 14:59:37
|
nlpie/mtap
|
https://api.github.com/repos/nlpie/mtap
|
closed
|
Support CSV output of timing information
|
area/framework/processing kind/enhancement lang/python
|
Add a method to the AggregateTimingInfo to output timing info as CSV lines.
|
1.0
|
Support CSV output of timing information - Add a method to the AggregateTimingInfo to output timing info as CSV lines.
|
process
|
support csv output of timing information add a method to the aggregatetiminginfo to output timing info as csv lines
| 1
|
52,786
| 6,277,275,795
|
IssuesEvent
|
2017-07-18 11:47:48
|
Princeton-CDH/derrida-django
|
https://api.github.com/repos/Princeton-CDH/derrida-django
|
closed
|
interventions: non-verbal interventions
|
awaiting testing
|
As an interventions data editor, I want to select non-verbal interventions (underlining, circling, etc.) on a page image so I can transcribe anchor text and document the intervention and where it occurs.
|
1.0
|
interventions: non-verbal interventions - As an interventions data editor, I want to select non-verbal interventions (underlining, circling, etc.) on a page image so I can transcribe anchor text and document the intervention and where it occurs.
|
non_process
|
interventions non verbal interventions as an interventions data editor i want to select non verbal interventions underlining circling etc on a page image so i can transcribe anchor text and document the intervention and where it occurs
| 0
|
1,296
| 3,836,410,754
|
IssuesEvent
|
2016-04-01 17:55:11
|
BayoAdejare/Leverage
|
https://api.github.com/repos/BayoAdejare/Leverage
|
opened
|
Set up organization
|
process/administration
|
On Wednesday we discussed setting up a GitHub organization and having the repo live there (instead of keeping it under @BayoAdejare's name).
|
1.0
|
Set up organization - On Wednesday we discussed setting up a GitHub organization and having the repo live there (instead of keeping it under @BayoAdejare's name).
|
process
|
set up organization on wednesday we discussed setting up a github organization and having the repo live there instead of keeping it under bayoadejare s name
| 1
|
10,091
| 13,044,162,057
|
IssuesEvent
|
2020-07-29 03:47:29
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `UnixTimestampInt` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `UnixTimestampInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `UnixTimestampInt` from TiDB -
## Description
Port the scalar function `UnixTimestampInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function unixtimestampint from tidb description port the scalar function unixtimestampint from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
7,110
| 10,264,440,136
|
IssuesEvent
|
2019-08-22 16:22:08
|
zooniverse/theia
|
https://api.github.com/repos/zooniverse/theia
|
opened
|
Get Landsat 7, Landsat 4/5 images working
|
image acquisition image processing
|
They should be pretty much done but there may be a little additional work needed in the adapter to get them looking right.
|
1.0
|
Get Landsat 7, Landsat 4/5 images working - They should be pretty much done but there may be a little additional work needed in the adapter to get them looking right.
|
process
|
get landsat landsat images working they should be pretty much done but there may be a little additional work needed in the adapter to get them looking right
| 1
|
682,378
| 23,342,761,004
|
IssuesEvent
|
2022-08-09 15:14:35
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.coindesk.com - site is not usable
|
priority-normal browser-focus-geckoview engine-gecko
|
<!-- @browser: Firefox Klar -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:103.0) Gecko/103.0 Firefox/103.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108709 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.coindesk.com
**Browser / Version**: Firefox Klar
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes Firefox
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The site loaded until the cookie banner appeared, then after the cookie choices were set, it stopped working (white screen)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220802163236</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/e09fd95b-423b-45da-895d-bc0134347a75)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.coindesk.com - site is not usable - <!-- @browser: Firefox Klar -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:103.0) Gecko/103.0 Firefox/103.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/108709 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.coindesk.com
**Browser / Version**: Firefox Klar
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes Firefox
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The site loaded until the cookie banner appeared, then after the cookie choices were set, it stopped working (white screen)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220802163236</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/8/e09fd95b-423b-45da-895d-bc0134347a75)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox klar operating system android tested another browser yes firefox problem type site is not usable description page not loading correctly steps to reproduce the site loaded until the cookie banner appeared then after the cookie choices were set it stopped working white screen browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
18,263
| 24,344,209,992
|
IssuesEvent
|
2022-10-02 04:35:59
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Refactor SmartyPants
|
process
|
There's currently a converter for "SmartyPants" (aka swap in smart quotes in text), but as far as I can tell it's not a real converter — aka it doesn't actually process any files automatically ever and only gets used when the `smartify` filter is invoked. So it doesn't actually make any sense for this to be a converter. There a small cost in performance for every converter in the system (since every template loops through all installed converters to see if there's a match), so this is definitely a worthwhile refactor.
|
1.0
|
Refactor SmartyPants - There's currently a converter for "SmartyPants" (aka swap in smart quotes in text), but as far as I can tell it's not a real converter — aka it doesn't actually process any files automatically ever and only gets used when the `smartify` filter is invoked. So it doesn't actually make any sense for this to be a converter. There a small cost in performance for every converter in the system (since every template loops through all installed converters to see if there's a match), so this is definitely a worthwhile refactor.
|
process
|
refactor smartypants there s currently a converter for smartypants aka swap in smart quotes in text but as far as i can tell it s not a real converter — aka it doesn t actually process any files automatically ever and only gets used when the smartify filter is invoked so it doesn t actually make any sense for this to be a converter there a small cost in performance for every converter in the system since every template loops through all installed converters to see if there s a match so this is definitely a worthwhile refactor
| 1
|
772,754
| 27,134,443,706
|
IssuesEvent
|
2023-02-16 12:12:13
|
shaka-project/shaka-player
|
https://api.github.com/repos/shaka-project/shaka-player
|
closed
|
Could Shaka inject MediaKeys polyfills as dependencies, instead of polyfilling the browser?
|
type: enhancement status: waiting on response priority: P4
|
The project I work on acts as middleware between client UI applications and two things: players, and DRM service providers. We support multiple players.
Additionally, one of the DRM services we integrate with requires a dummy license challenge in order to conduct "device activation". So I need to know what EME API's are available in the browser.
The sum of this is, I have my own version of this code:
```
if (navigator.requestMediaKeySystemAccess &&
MediaKeySystemAccess.prototype.getConfiguration) {
...
} else if (HTMLMediaElement.prototype.webkitGenerateKeyRequest) {
...
} else if (HTMLMediaElement.prototype.generateKeyRequest) {
...
} else if (window.MSMediaKeys) {
...
} else {
...
}
```
The trouble is, my detection is complicated by the fact that Shaka polyfills the browser. Under most circumstances the polyfills are effective and I don't have a problem. But I need to be careful about the order of my feature-detection to be sure I am checking for older API's first. This is the case specifically in Safari 10+. If I checked for `requestMediaKeySystemAccess` first, I would see that it exists, but it would not work because Shaka has installed the no-op polyfill. (We make no restrictions on how many different players the client application uses. If they want to use our Safari player for HLS-FP, then use Shaka to play clear DASH, that's their prerogative.)
Tizen 2017 TV's support both legacy 0.1b EME and the modern spec, which means that when I'm doing my detection, I need to check for Tizen 2017 and say no, don't use 0.1b, use `requestMediaKeySystemAccess`.
I understand this is a broader architectural concern, but I thought I would plant the idea. If Shaka could inject its dependencies locally into drm_engine, it could spare its' users some tricky feature detection.
|
1.0
|
Could Shaka inject MediaKeys polyfills as dependencies, instead of polyfilling the browser? - The project I work on acts as middleware between client UI applications and two things: players, and DRM service providers. We support multiple players.
Additionally, one of the DRM services we integrate with requires a dummy license challenge in order to conduct "device activation". So I need to know what EME API's are available in the browser.
The sum of this is, I have my own version of this code:
```
if (navigator.requestMediaKeySystemAccess &&
MediaKeySystemAccess.prototype.getConfiguration) {
...
} else if (HTMLMediaElement.prototype.webkitGenerateKeyRequest) {
...
} else if (HTMLMediaElement.prototype.generateKeyRequest) {
...
} else if (window.MSMediaKeys) {
...
} else {
...
}
```
The trouble is, my detection is complicated by the fact that Shaka polyfills the browser. Under most circumstances the polyfills are effective and I don't have a problem. But I need to be careful about the order of my feature-detection to be sure I am checking for older API's first. This is the case specifically in Safari 10+. If I checked for `requestMediaKeySystemAccess` first, I would see that it exists, but it would not work because Shaka has installed the no-op polyfill. (We make no restrictions on how many different players the client application uses. If they want to use our Safari player for HLS-FP, then use Shaka to play clear DASH, that's their prerogative.)
Tizen 2017 TV's support both legacy 0.1b EME and the modern spec, which means that when I'm doing my detection, I need to check for Tizen 2017 and say no, don't use 0.1b, use `requestMediaKeySystemAccess`.
I understand this is a broader architectural concern, but I thought I would plant the idea. If Shaka could inject its dependencies locally into drm_engine, it could spare its' users some tricky feature detection.
|
non_process
|
could shaka inject mediakeys polyfills as dependencies instead of polyfilling the browser the project i work on acts as middleware between client ui applications and two things players and drm service providers we support multiple players additionally one of the drm services we integrate with requires a dummy license challenge in order to conduct device activation so i need to know what eme api s are available in the browser the sum of this is i have my own version of this code if navigator requestmediakeysystemaccess mediakeysystemaccess prototype getconfiguration else if htmlmediaelement prototype webkitgeneratekeyrequest else if htmlmediaelement prototype generatekeyrequest else if window msmediakeys else the trouble is my detection is complicated by the fact that shaka polyfills the browser under most circumstances the polyfills are effective and i don t have a problem but i need to be careful about the order of my feature detection to be sure i am checking for older api s first this is the case specifically in safari if i checked for requestmediakeysystemaccess first i would see that it exists but it would not work because shaka has installed the no op polyfill we make no restrictions on how many different players the client application uses if they want to use our safari player for hls fp then use shaka to play clear dash that s their prerogative tizen tv s support both legacy eme and the modern spec which means that when i m doing my detection i need to check for tizen and say no don t use use requestmediakeysystemaccess i understand this is a broader architectural concern but i thought i would plant the idea if shaka could inject its dependencies locally into drm engine it could spare its users some tricky feature detection
| 0
|
208,533
| 7,155,519,834
|
IssuesEvent
|
2018-01-26 13:09:46
|
Sinapse-Energia/AP-Sinapse
|
https://api.github.com/repos/Sinapse-Energia/AP-Sinapse
|
opened
|
[Set GPS Coordinates - 128;] To develop the command 128; for AP (EN + CMC)
|
Priority: high Size: 2 Status: new Type: feature
|
To develop the command 128; to set the GPS Coordinates of the device
[Set GPS Coordinates](https://github.com/Sinapse-Energia/AP-Sinapse/wiki/MQTT-Sinapse-API#set-gps-coordinates)
This command is valid for the whole AP. It is to say, it should be available in CMC and EN.
Related with #69 through sunset / sunrise
|
1.0
|
[Set GPS Coordinates - 128;] To develop the command 128; for AP (EN + CMC) - To develop the command 128; to set the GPS Coordinates of the device
[Set GPS Coordinates](https://github.com/Sinapse-Energia/AP-Sinapse/wiki/MQTT-Sinapse-API#set-gps-coordinates)
This command is valid for the whole AP. It is to say, it should be available in CMC and EN.
Related with #69 through sunset / sunrise
|
non_process
|
to develop the command for ap en cmc to develop the command to set the gps coordinates of the device this command is valid for the whole ap it is to say it should be available in cmc and en related with through sunset sunrise
| 0
|
168,779
| 13,102,636,387
|
IssuesEvent
|
2020-08-04 07:07:04
|
Students-of-the-city-of-Kostroma/Student-timetable
|
https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable
|
closed
|
Fix errors in tests for the Academic Title entity
|
Controller Insert(Model model) Model Unit test Учёное звание
|
Fix the following errors:

The failing tests are in #688
|
1.0
|
Fix errors in tests for the Academic Title entity - Fix the following errors:

The failing tests are in #688
|
non_process
|
fix errors in tests for the academic title entity fix the following errors the failing tests are in
| 0
|
156,394
| 5,968,664,788
|
IssuesEvent
|
2017-05-30 18:35:18
|
chef-cookbooks/firewall
|
https://api.github.com/repos/chef-cookbooks/firewall
|
opened
|
Support for Amazon Linux
|
Priority: Low Status: On Hold Type: Enhancement
|
### Platform Details
Amazon Linux.
### Scenario:
We've had a number of requests for Amazon Linux support, which appears to have different default package names and some other quirks; we'd also need a way to regularly test on Amazon, since no one else seems to ship images of it.
See #154 and #166.
|
1.0
|
Support for Amazon Linux - ### Platform Details
Amazon Linux.
### Scenario:
We've had a number of requests for Amazon Linux support, which appears to have different default package names and some other quirks; we'd also need a way to regularly test on Amazon, since no one else seems to ship images of it.
See #154 and #166.
|
non_process
|
support for amazon linux platform details amazon linux scenario we ve had a number of requests for amazon linux support which appears to have different default package names and some other quirks we d also need a way to regularly test on amazon since no one else seems to ship images of it see and
| 0
|
13,011
| 15,368,307,926
|
IssuesEvent
|
2021-03-02 05:16:59
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Question mark in SQL query (postgres JSON operator) interpreted as prepared statement param
|
.Help Wanted .Limitation Database/Postgres Querying/Processor
|
I am trying to run this query:
select \* from table where json_column->'some_key' ? 'some_value'
Every time I run any query containing ? on our postgres server, I get a "No value specified for parameter 1" error. We have run the exact same query in our database and it works perfectly.
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
|
1.0
|
Question mark in SQL query (postgres JSON operator) interpreted as prepared statement param - I am trying to run this query:
select \* from table where json_column->'some_key' ? 'some_value'
Every time I run any query containing ? on our postgres server, I get a "No value specified for parameter 1" error. We have run the exact same query in our database and it works perfectly.
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
|
process
|
question mark in sql query postgres json operator interpreted as prepared statement param i am trying to run this query select from table where json column some key some value every time i run any query containing on our postgres server i get a no value specified for parameter error we have run the exact same query in our database and it works perfectly ⬇️ please click the 👍 reaction instead of leaving a or 👍 comment
| 1
|
178,527
| 13,782,777,145
|
IssuesEvent
|
2020-10-08 18:10:08
|
Azure/autorest.typescript
|
https://api.github.com/repos/Azure/autorest.typescript
|
closed
|
[Test] composite Swagger
|
priority-1 test-coverage v6
|
Support composite and composite-quirks swaggers
Depends on:
- [x] body-boolean
- [x] body-integer
- [x] body-boolean.quirks
Needs to read referenced swaggers
- [ ] https://github.com/Azure/autorest.testserver/blob/master/swagger/composite-swagger.quirks.json
- [ ] https://github.com/Azure/autorest.testserver/blob/master/swagger/composite-swagger.json
Estimate: [1 days]
|
1.0
|
[Test] composite Swagger - Support composite and composite-quirks swaggers
Depends on:
- [x] body-boolean
- [x] body-integer
- [x] body-boolean.quirks
Needs to read referenced swaggers
- [ ] https://github.com/Azure/autorest.testserver/blob/master/swagger/composite-swagger.quirks.json
- [ ] https://github.com/Azure/autorest.testserver/blob/master/swagger/composite-swagger.json
Estimate: [1 days]
|
non_process
|
composite swagger support composite and composite quirks swaggers depends on body boolean body integer body boolean quirks needs to read referenced swaggers estimate
| 0
|
6,079
| 8,923,558,817
|
IssuesEvent
|
2019-01-21 15:57:53
|
enKryptIO/ethvm
|
https://api.github.com/repos/enKryptIO/ethvm
|
closed
|
DAO Hard Fork is not processed correctly
|
bug milestone:1 priority:high project:processing
|
When the DAO hard fork block is reached we must generate certain balance movements.
|
1.0
|
DAO Hard Fork is not processed correctly - When the DAO hard fork block is reached we must generate certain balance movements.
|
process
|
dao hard fork is not processed correctly when the dao hard fork block is reached we must generate certain balance movements
| 1
|
8,217
| 11,406,024,041
|
IssuesEvent
|
2020-01-31 13:29:02
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Improve error when `id` is a String in the schema but an Integer in SQLite.
|
kind/improvement process/candidate topic: errors
|
Current error
<img width="781" alt="Screen Shot 2020-01-30 at 16 38 51" src="https://user-images.githubusercontent.com/1328733/73464345-0ca0e500-437f-11ea-9dd2-97720e978f41.png">
With DEBUG
```console
j42@Pluto ~/D/p/t/script> env DEBUG="*" ts-node script.ts
prisma-client {
prisma-client engineConfig: {
prisma-client cwd: '/Users/j42/Dev/prisma-examples/typescript/script/prisma',
prisma-client debug: false,
prisma-client datamodelPath: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/schema.prisma',
prisma-client prismaPath: undefined,
prisma-client datasources: [],
prisma-client generator: {
prisma-client name: 'client',
prisma-client provider: 'prisma-client-js',
prisma-client output: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client',
prisma-client binaryTargets: [],
prisma-client config: {}
prisma-client },
prisma-client showColors: true,
prisma-client logLevel: undefined,
prisma-client logQueries: undefined
prisma-client }
prisma-client } +0ms
getos { platform: 'darwin', libssl: undefined } +0ms
prisma-client Request: +12ms
prisma-client mutation {
prisma-client createOneUser(data: {
prisma-client email: "alice@prisma.io"
prisma-client name: "Alice"
prisma-client posts: {
prisma-client create: [
prisma-client {
prisma-client title: "Watch the talks from Prisma Day 2019"
prisma-client content: "https://www.prisma.io/blog/z11sg6ipb3i1/"
prisma-client published: true
prisma-client }
prisma-client ]
prisma-client }
prisma-client }) {
prisma-client id
prisma-client email
prisma-client name
prisma-client posts {
prisma-client id
prisma-client title
prisma-client content
prisma-client published
prisma-client }
prisma-client }
prisma-client } +1ms
prisma-client disconnection promise doesnt exist +6ms
engine {
engine PRISMA_DML_PATH: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/schema.prisma',
engine PORT: '63298',
engine RUST_BACKTRACE: '1',
engine RUST_LOG: 'info',
engine OVERWRITE_DATASOURCES: '[]',
engine CLICOLOR_FORCE: '1'
engine } +0ms
engine { cwd: '/Users/j42/Dev/prisma-examples/typescript/script/prisma' } +0ms
plusX Execution permissions of /Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/query-engine-darwin are fine +0ms
engine stderr Printing to stderr for debugging
Listening on 127.0.0.1:63298
+15ms
engine Ready after try number 0 +59ms
agentkeepalive sock[0#localhost:63298:] create, timeout 60000ms +0ms
agentkeepalive sock[0#localhost:63298:](requests: 1, finished: 0) close, isError: false +6ms
agentkeepalive sock[1#localhost:63298:] create, timeout 60000ms +1s
agentkeepalive sock[1#localhost:63298:](requests: 1, finished: 0) error: Error: connect ECONNREFUSED 127.0.0.1:63298
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 63298
}, listenerCount: 2 +1ms
agentkeepalive sock[1#localhost:63298:](requests: 1, finished: 0) close, isError: true +1ms
agentkeepalive sock[2#localhost:63298:] create, timeout 60000ms +2s
agentkeepalive sock[2#localhost:63298:](requests: 1, finished: 0) error: Error: connect ECONNREFUSED 127.0.0.1:63298
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 63298
}, listenerCount: 2 +2ms
engine {
engine error: GotError [RequestError]: connect ECONNREFUSED 127.0.0.1:63298
engine at ClientRequest.<anonymous> (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:14275:14)
engine at Object.onceWrapper (events.js:313:26)
engine at ClientRequest.emit (events.js:228:7)
engine at ClientRequest.EventEmitter.emit (domain.js:475:20)
engine at ClientRequest.origin.emit (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:63703:11)
engine at Socket.socketErrorListener (_http_client.js:406:9)
engine at Socket.emit (events.js:228:7)
engine at Socket.EventEmitter.emit (domain.js:475:20)
engine at emitErrorNT (internal/streams/destroy.js:92:8)
engine at emitErrorAndCloseNT (internal/streams/destroy.js:60:3) {
engine name: 'RequestError',
engine code: 'ECONNREFUSED',
engine host: 'localhost:63298',
engine hostname: 'localhost',
engine method: 'POST',
engine path: '/',
engine socketPath: undefined,
engine protocol: 'http:',
engine url: 'http://localhost:63298/',
engine gotOptions: {
engine path: '/',
engine protocol: 'http:',
engine slashes: true,
engine auth: null,
engine host: 'localhost:63298',
engine port: '63298',
engine hostname: 'localhost',
engine hash: null,
engine search: null,
engine query: null,
engine pathname: '/',
engine href: 'http://localhost:63298/',
engine retry: [Object],
engine headers: [Object],
engine hooks: [Object],
engine decompress: true,
engine throwHttpErrors: true,
engine followRedirect: true,
engine stream: false,
engine form: false,
engine json: true,
engine cache: false,
engine useElectronNet: false,
engine body: '{"query":"mutation {\\n createOneUser(data: {\\n email: \\"alice@prisma.io\\"\\n name: \\"Alice\\"\\n posts: {\\n create: [\\n {\\n title: \\"Watch the talks from Prisma Day 2019\\"\\n content: \\"https://www.prisma.io/blog/z11sg6ipb3i1/\\"\\n published: true\\n }\\n ]\\n }\\n }) {\\n id\\n email\\n name\\n posts {\\n id\\n title\\n content\\n published\\n }\\n }\\n}","variables":{}}',
engine agent: [Agent],
engine method: 'POST',
engine forceRefresh: true
engine }
engine }
engine } +3s
prisma-client Error: Engine exited {"target":"exit","timestamp":"2020-01-30T14:53:08.520Z","level":"error","fields":{"message":"255"}}
prisma-client at /Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:6370:27
prisma-client at processTicksAndRejections (internal/process/task_queues.js:94:5)
prisma-client at PrismaClientFetcher.request (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/index.js:41:28) +3s
Error:
Invalid `const user1 = await prisma.user.create()` invocation in
/Users/j42/Dev/prisma-examples/typescript/script/script.ts:8:35
4
5 // A `main` function so that we can use async/await
6 async function main() {
7 // Seed the database with users and posts
→ 8 const user1 = await prisma.user.create(
Engine exited {"target":"exit","timestamp":"2020-01-30T14:53:08.520Z","level":"error","fields":{"message":"255"}}
at PrismaClientFetcher.request (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/index.js:62:23)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
prisma-client disconnectionPromise: stopping engine +7ms
engine Stopping Prisma engine +27ms
getos { platform: 'darwin', libssl: undefined } +3s
agentkeepalive sock[2#localhost:63298:](requests: 1, finished: 0) close, isError: true +28ms
```
Directly from engine
```console
j42@Pluto ~/D/p/t/s/prisma> env PRISMA_DML_PATH=schema.prisma ./query-engine-darwin
Printing to stderr for debugging
Listening on 127.0.0.1:4466
{"timestamp":"Jan 30 15:55:44.231","level":"ERROR","target":"prisma","fields":{"message":"PANIC","reason":"Could not convert prisma value to graphqlid: ConversionFailure { from: \"PrismaValue\", to: \"GraphqlId\" }","file":"src/libcore/result.rs","line":1165,"column":5}}
```
|
1.0
|
Improve error when `id` is a String in the schema but an Integer in SQLite. - Current error
<img width="781" alt="Screen Shot 2020-01-30 at 16 38 51" src="https://user-images.githubusercontent.com/1328733/73464345-0ca0e500-437f-11ea-9dd2-97720e978f41.png">
With DEBUG
```console
j42@Pluto ~/D/p/t/script> env DEBUG="*" ts-node script.ts
prisma-client {
prisma-client engineConfig: {
prisma-client cwd: '/Users/j42/Dev/prisma-examples/typescript/script/prisma',
prisma-client debug: false,
prisma-client datamodelPath: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/schema.prisma',
prisma-client prismaPath: undefined,
prisma-client datasources: [],
prisma-client generator: {
prisma-client name: 'client',
prisma-client provider: 'prisma-client-js',
prisma-client output: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client',
prisma-client binaryTargets: [],
prisma-client config: {}
prisma-client },
prisma-client showColors: true,
prisma-client logLevel: undefined,
prisma-client logQueries: undefined
prisma-client }
prisma-client } +0ms
getos { platform: 'darwin', libssl: undefined } +0ms
prisma-client Request: +12ms
prisma-client mutation {
prisma-client createOneUser(data: {
prisma-client email: "alice@prisma.io"
prisma-client name: "Alice"
prisma-client posts: {
prisma-client create: [
prisma-client {
prisma-client title: "Watch the talks from Prisma Day 2019"
prisma-client content: "https://www.prisma.io/blog/z11sg6ipb3i1/"
prisma-client published: true
prisma-client }
prisma-client ]
prisma-client }
prisma-client }) {
prisma-client id
prisma-client email
prisma-client name
prisma-client posts {
prisma-client id
prisma-client title
prisma-client content
prisma-client published
prisma-client }
prisma-client }
prisma-client } +1ms
prisma-client disconnection promise doesnt exist +6ms
engine {
engine PRISMA_DML_PATH: '/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/schema.prisma',
engine PORT: '63298',
engine RUST_BACKTRACE: '1',
engine RUST_LOG: 'info',
engine OVERWRITE_DATASOURCES: '[]',
engine CLICOLOR_FORCE: '1'
engine } +0ms
engine { cwd: '/Users/j42/Dev/prisma-examples/typescript/script/prisma' } +0ms
plusX Execution permissions of /Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/query-engine-darwin are fine +0ms
engine stderr Printing to stderr for debugging
Listening on 127.0.0.1:63298
+15ms
engine Ready after try number 0 +59ms
agentkeepalive sock[0#localhost:63298:] create, timeout 60000ms +0ms
agentkeepalive sock[0#localhost:63298:](requests: 1, finished: 0) close, isError: false +6ms
agentkeepalive sock[1#localhost:63298:] create, timeout 60000ms +1s
agentkeepalive sock[1#localhost:63298:](requests: 1, finished: 0) error: Error: connect ECONNREFUSED 127.0.0.1:63298
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 63298
}, listenerCount: 2 +1ms
agentkeepalive sock[1#localhost:63298:](requests: 1, finished: 0) close, isError: true +1ms
agentkeepalive sock[2#localhost:63298:] create, timeout 60000ms +2s
agentkeepalive sock[2#localhost:63298:](requests: 1, finished: 0) error: Error: connect ECONNREFUSED 127.0.0.1:63298
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1134:16) {
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 63298
}, listenerCount: 2 +2ms
engine {
engine error: GotError [RequestError]: connect ECONNREFUSED 127.0.0.1:63298
engine at ClientRequest.<anonymous> (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:14275:14)
engine at Object.onceWrapper (events.js:313:26)
engine at ClientRequest.emit (events.js:228:7)
engine at ClientRequest.EventEmitter.emit (domain.js:475:20)
engine at ClientRequest.origin.emit (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:63703:11)
engine at Socket.socketErrorListener (_http_client.js:406:9)
engine at Socket.emit (events.js:228:7)
engine at Socket.EventEmitter.emit (domain.js:475:20)
engine at emitErrorNT (internal/streams/destroy.js:92:8)
engine at emitErrorAndCloseNT (internal/streams/destroy.js:60:3) {
engine name: 'RequestError',
engine code: 'ECONNREFUSED',
engine host: 'localhost:63298',
engine hostname: 'localhost',
engine method: 'POST',
engine path: '/',
engine socketPath: undefined,
engine protocol: 'http:',
engine url: 'http://localhost:63298/',
engine gotOptions: {
engine path: '/',
engine protocol: 'http:',
engine slashes: true,
engine auth: null,
engine host: 'localhost:63298',
engine port: '63298',
engine hostname: 'localhost',
engine hash: null,
engine search: null,
engine query: null,
engine pathname: '/',
engine href: 'http://localhost:63298/',
engine retry: [Object],
engine headers: [Object],
engine hooks: [Object],
engine decompress: true,
engine throwHttpErrors: true,
engine followRedirect: true,
engine stream: false,
engine form: false,
engine json: true,
engine cache: false,
engine useElectronNet: false,
engine body: '{"query":"mutation {\\n createOneUser(data: {\\n email: \\"alice@prisma.io\\"\\n name: \\"Alice\\"\\n posts: {\\n create: [\\n {\\n title: \\"Watch the talks from Prisma Day 2019\\"\\n content: \\"https://www.prisma.io/blog/z11sg6ipb3i1/\\"\\n published: true\\n }\\n ]\\n }\\n }) {\\n id\\n email\\n name\\n posts {\\n id\\n title\\n content\\n published\\n }\\n }\\n}","variables":{}}',
engine agent: [Agent],
engine method: 'POST',
engine forceRefresh: true
engine }
engine }
engine } +3s
prisma-client Error: Engine exited {"target":"exit","timestamp":"2020-01-30T14:53:08.520Z","level":"error","fields":{"message":"255"}}
prisma-client at /Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/runtime/index.js:6370:27
prisma-client at processTicksAndRejections (internal/process/task_queues.js:94:5)
prisma-client at PrismaClientFetcher.request (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/index.js:41:28) +3s
Error:
Invalid `const user1 = await prisma.user.create()` invocation in
/Users/j42/Dev/prisma-examples/typescript/script/script.ts:8:35
4
5 // A `main` function so that we can use async/await
6 async function main() {
7 // Seed the database with users and posts
→ 8 const user1 = await prisma.user.create(
Engine exited {"target":"exit","timestamp":"2020-01-30T14:53:08.520Z","level":"error","fields":{"message":"255"}}
at PrismaClientFetcher.request (/Users/j42/Dev/prisma-examples/typescript/script/node_modules/@prisma/client/index.js:62:23)
at processTicksAndRejections (internal/process/task_queues.js:94:5)
prisma-client disconnectionPromise: stopping engine +7ms
engine Stopping Prisma engine +27ms
getos { platform: 'darwin', libssl: undefined } +3s
agentkeepalive sock[2#localhost:63298:](requests: 1, finished: 0) close, isError: true +28ms
```
Directly from engine
```console
j42@Pluto ~/D/p/t/s/prisma> env PRISMA_DML_PATH=schema.prisma ./query-engine-darwin
Printing to stderr for debugging
Listening on 127.0.0.1:4466
{"timestamp":"Jan 30 15:55:44.231","level":"ERROR","target":"prisma","fields":{"message":"PANIC","reason":"Could not convert prisma value to graphqlid: ConversionFailure { from: \"PrismaValue\", to: \"GraphqlId\" }","file":"src/libcore/result.rs","line":1165,"column":5}}
```
|
process
|
improve error when id is a string in the schema but an integer in sqlite current error img width alt screen shot at src with debug console pluto d p t script env debug ts node script ts prisma client prisma client engineconfig prisma client cwd users dev prisma examples typescript script prisma prisma client debug false prisma client datamodelpath users dev prisma examples typescript script node modules prisma client schema prisma prisma client prismapath undefined prisma client datasources prisma client generator prisma client name client prisma client provider prisma client js prisma client output users dev prisma examples typescript script node modules prisma client prisma client binarytargets prisma client config prisma client prisma client showcolors true prisma client loglevel undefined prisma client logqueries undefined prisma client prisma client getos platform darwin libssl undefined prisma client request prisma client mutation prisma client createoneuser data prisma client email alice prisma io prisma client name alice prisma client posts prisma client create prisma client prisma client title watch the talks from prisma day prisma client content prisma client published true prisma client prisma client prisma client prisma client prisma client id prisma client email prisma client name prisma client posts prisma client id prisma client title prisma client content prisma client published prisma client prisma client prisma client prisma client disconnection promise doesnt exist engine engine prisma dml path users dev prisma examples typescript script node modules prisma client schema prisma engine port engine rust backtrace engine rust log info engine overwrite datasources engine clicolor force engine engine cwd users dev prisma examples typescript script prisma plusx execution permissions of users dev prisma examples typescript script node modules prisma client runtime query engine darwin are fine engine stderr printing to stderr for debugging listening on 
engine ready after try number agentkeepalive sock create timeout agentkeepalive sock requests finished close iserror false agentkeepalive sock create timeout agentkeepalive sock requests finished error error connect econnrefused at tcpconnectwrap afterconnect net js errno econnrefused code econnrefused syscall connect address port listenercount agentkeepalive sock requests finished close iserror true agentkeepalive sock create timeout agentkeepalive sock requests finished error error connect econnrefused at tcpconnectwrap afterconnect net js errno econnrefused code econnrefused syscall connect address port listenercount engine engine error goterror connect econnrefused engine at clientrequest users dev prisma examples typescript script node modules prisma client runtime index js engine at object oncewrapper events js engine at clientrequest emit events js engine at clientrequest eventemitter emit domain js engine at clientrequest origin emit users dev prisma examples typescript script node modules prisma client runtime index js engine at socket socketerrorlistener http client js engine at socket emit events js engine at socket eventemitter emit domain js engine at emiterrornt internal streams destroy js engine at emiterrorandclosent internal streams destroy js engine name requesterror engine code econnrefused engine host localhost engine hostname localhost engine method post engine path engine socketpath undefined engine protocol http engine url engine gotoptions engine path engine protocol http engine slashes true engine auth null engine host localhost engine port engine hostname localhost engine hash null engine search null engine query null engine pathname engine href engine retry engine headers engine hooks engine decompress true engine throwhttperrors true engine followredirect true engine stream false engine form false engine json true engine cache false engine useelectronnet false engine body query mutation n createoneuser data n email alice prisma io n name 
alice n posts n create n n n id n email n name n posts n id n title n content n published n n n variables engine agent engine method post engine forcerefresh true engine engine engine prisma client error engine exited target exit timestamp level error fields message prisma client at users dev prisma examples typescript script node modules prisma client runtime index js prisma client at processticksandrejections internal process task queues js prisma client at prismaclientfetcher request users dev prisma examples typescript script node modules prisma client index js error invalid const await prisma user create invocation in users dev prisma examples typescript script script ts a main function so that we can use async await async function main seed the database with users and posts → const await prisma user create engine exited target exit timestamp level error fields message at prismaclientfetcher request users dev prisma examples typescript script node modules prisma client index js at processticksandrejections internal process task queues js prisma client disconnectionpromise stopping engine engine stopping prisma engine getos platform darwin libssl undefined agentkeepalive sock requests finished close iserror true directly from engine console pluto d p t s prisma env prisma dml path schema prisma query engine darwin printing to stderr for debugging listening on timestamp jan level error target prisma fields message panic reason could not convert prisma value to graphqlid conversionfailure from prismavalue to graphqlid file src libcore result rs line column
| 1
|
18,870
| 24,799,441,887
|
IssuesEvent
|
2022-10-24 20:16:56
|
dials/dials
|
https://api.github.com/repos/dials/dials
|
closed
|
Add more output from dials.stills_process
|
stale dials.stills_process
|
dials.stills_process should output _refined_reflections.pickle and _integrated_experiments.json.
|
1.0
|
Add more output from dials.stills_process - dials.stills_process should output _refined_reflections.pickle and _integrated_experiments.json.
|
process
|
add more output from dials stills process dials stills process should output refined reflections pickle and integrated experiments json
| 1
|
414,319
| 27,984,149,431
|
IssuesEvent
|
2023-03-26 14:00:52
|
eslint/eslint
|
https://api.github.com/repos/eslint/eslint
|
closed
|
add a back to top button on docs page
|
enhancement documentation accepted
|
### ESLint version
v8.34.0
### What problem do you want to solve?
In all docs page the navigation links to the different topics of the page is on the top but some pages have too many material on it. so it is a tedious task to scroll to the top again in devices like tablet or phones. and for small screen size the main navigation of docs (navigation links for different pages of docs) is also on the top.
### What do you think is the correct solution?
A back to top button would be a good idea to make the scrolling easy.
something like this.

### Participation
- [X] I am willing to submit a pull request for this change.
### Additional comments
_No response_
|
1.0
|
add a back to top button on docs page - ### ESLint version
v8.34.0
### What problem do you want to solve?
In all docs page the navigation links to the different topics of the page is on the top but some pages have too many material on it. so it is a tedious task to scroll to the top again in devices like tablet or phones. and for small screen size the main navigation of docs (navigation links for different pages of docs) is also on the top.
### What do you think is the correct solution?
A back to top button would be a good idea to make the scrolling easy.
something like this.

### Participation
- [X] I am willing to submit a pull request for this change.
### Additional comments
_No response_
|
non_process
|
add a back to top button on docs page eslint version what problem do you want to solve in all docs page the navigation links to the different topics of the page is on the top but some pages have too many material on it so it is a tedious task to scroll to the top again in devices like tablet or phones and for small screen size the main navigation of docs navigation links for different pages of docs is also on the top what do you think is the correct solution a back to top button would be a good idea to make the scrolling easy something like this participation i am willing to submit a pull request for this change additional comments no response
| 0
|
4,167
| 7,107,918,970
|
IssuesEvent
|
2018-01-16 21:45:53
|
18F/product-guide
|
https://api.github.com/repos/18F/product-guide
|
closed
|
UPDATE SECTION (Staffing) - Update owner of staffing process
|
low hanging fruit process change question
|
Staffing is now owned by Team Ops. Need to address this change and any other process changes here around their ownership.
|
1.0
|
UPDATE SECTION (Staffing) - Update owner of staffing process - Staffing is now owned by Team Ops. Need to address this change and any other process changes here around their ownership.
|
process
|
update section staffing update owner of staffing process staffing is now owned by team ops need to address this change and any other process changes here around their ownership
| 1
|
753,087
| 26,339,936,261
|
IssuesEvent
|
2023-01-10 16:54:31
|
sugarlabs/musicblocks
|
https://api.github.com/repos/sugarlabs/musicblocks
|
closed
|
Blocks in macros briefly appear in upper left of the screen
|
Issue-Enhancement Issue-Wontfix Priority-Minor
|
https://www.youtube.com/watch?v=GBf4CIWtMg0&feature=youtu.be
look at the top left corner, right below the navbar. tested on firefox
|
1.0
|
Blocks in macros briefly appear in upper left of the screen - https://www.youtube.com/watch?v=GBf4CIWtMg0&feature=youtu.be
look at the top left corner, right below the navbar. tested on firefox
|
non_process
|
blocks in macros briefly appear in upper left of the screen look at the top left corner right below the navbar tested on firefox
| 0
|
456,120
| 13,145,717,076
|
IssuesEvent
|
2020-08-08 05:18:35
|
alanqchen/Bear-Blog-Engine
|
https://api.github.com/repos/alanqchen/Bear-Blog-Engine
|
closed
|
Update feature image style
|
Medium Priority enhancement frontend
|
Instead of a max-height wrapper, which the image is then zoomed to fit, the image should be adjusted according to the following:
- The image is scaled to fit the post card's width
- The image height has a max of 300px, otherwise, the height is 100%
- If the image height is capped, then the image should be centered
[This seems like a good first reference](https://stackoverflow.com/questions/3751565/css-100-width-or-height-while-keeping-aspect-ratio).
|
1.0
|
Update feature image style - Instead of a max-height wrapper, which the image is then zoomed to fit, the image should be adjusted according to the following:
- The image is scaled to fit the post card's width
- The image height has a max of 300px, otherwise, the height is 100%
- If the image height is capped, then the image should be centered
[This seems like a good first reference](https://stackoverflow.com/questions/3751565/css-100-width-or-height-while-keeping-aspect-ratio).
|
non_process
|
update feature image style instead of a max height wrapper which the image is then zoomed to fit the image should be adjusted according to the following the image is scaled to fit the post card s width the image height has a max of otherwise the height is if the image height is capped then the image should be centered
| 0
|
40,027
| 10,435,914,520
|
IssuesEvent
|
2019-09-17 18:22:11
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
closed
|
cryptsetup: UUID library not found
|
build-error
|
New cryptsetup package was merged into develop just yesterday (#12762).
I was quite sure it was working when merged. I tested a couple hours before, by rebasing my develop fork and merging. And it worked! But today, working from develop HEAD (currently at 3f06d5c12), it looks broken:
```console
$ spack install cryptsetup
[snip]
1 error found in build log:
138 checking for linux/keyctl.h... yes
139 checking whether __NR_add_key is declared... yes
140 checking whether __NR_keyctl is declared... yes
141 checking whether __NR_request_key is declared... yes
142 checking for key_serial_t... no
143 checking for uuid_clear in -luuid... no
>> 144 configure: error: You need the uuid library.
```
I thought I was going mad, but by trial and error, I found that if I do a `git revert b95b4bb9e`:
```
$ spack install cryptsetup
[snip]
==> Successfully installed cryptsetup
```
b95b4bb9e refers to #12794 -- "add compilation option to sqlite" and it merged just 10 minutes before #12762
I don't currently understand how #12794 could be breaking cryptsetup. It seems impossible, since it only added a variant that is off by default.
@Sinan81 @hartzell
|
1.0
|
cryptsetup: UUID library not found - New cryptsetup package was merged into develop just yesterday (#12762).
I was quite sure it was working when merged. I tested a couple hours before, by rebasing my develop fork and merging. And it worked! But today, working from develop HEAD (currently at 3f06d5c12), it looks broken:
```console
$ spack install cryptsetup
[snip]
1 error found in build log:
138 checking for linux/keyctl.h... yes
139 checking whether __NR_add_key is declared... yes
140 checking whether __NR_keyctl is declared... yes
141 checking whether __NR_request_key is declared... yes
142 checking for key_serial_t... no
143 checking for uuid_clear in -luuid... no
>> 144 configure: error: You need the uuid library.
```
I thought I was going mad, but by trial and error, I found that if I do a `git revert b95b4bb9e`:
```
$ spack install cryptsetup
[snip]
==> Successfully installed cryptsetup
```
b95b4bb9e refers to #12794 -- "add compilation option to sqlite" and it merged just 10 minutes before #12762
I don't currently understand how #12794 could be breaking cryptsetup. It seems impossible, since it only added a variant that is off by default.
@Sinan81 @hartzell
|
non_process
|
cryptsetup uuid library not found new cryptsetup package was merged into develop just yesterday i was quite sure it was working when merged i tested a couple hours before by rebasing my develop fork and merging and it worked but today working from develop head currently at it looks broken console spack install cryptsetup error found in build log checking for linux keyctl h yes checking whether nr add key is declared yes checking whether nr keyctl is declared yes checking whether nr request key is declared yes checking for key serial t no checking for uuid clear in luuid no configure error you need the uuid library i thought i was going mad but by trial and error i found that if i do a git revert spack install cryptsetup successfully installed cryptsetup refers to add compilation option to sqlite and it merged just minutes before i don t currently understand how could be breaking cryptsetup it seems impossible since it only added a variant that is off by default hartzell
| 0
|
19,444
| 25,717,090,422
|
IssuesEvent
|
2022-12-07 11:05:18
|
inmanta/inmanta-core
|
https://api.github.com/repos/inmanta/inmanta-core
|
closed
|
Configure pytest to display the logs by default
|
suggestion process tiny
|
When a developer runs the tests, the logs of the server are not shown by default. For this it's required to add the option: `--log-cli-level=DEBUG`. I think it would make sense to adjust the pytest configuration file to show the logs by default.
This can be done by setting the following in the pyproject.toml file:
```
[tool.pytest.ini_options]
addopts = "--log-cli-level=DEBUG"
```
|
1.0
|
Configure pytest to display the logs by default - When a developer runs the tests, the logs of the server are not shown by default. For this it's required to add the option: `--log-cli-level=DEBUG`. I think it would make sense to adjust the pytest configuration file to show the logs by default.
This can be done by setting the following in the pyproject.toml file:
```
[tool.pytest.ini_options]
addopts = "--log-cli-level=DEBUG"
```
|
process
|
configure pytest to display the logs by default when a developer runs the tests the logs of the server are not shown by default for this it s required to add the option log cli level debug i think it would make sense to adjust the pytest configuration file to show the logs by default this can be done by setting the following in the pyproject toml file addopts log cli level debug
| 1
|
11,486
| 14,357,971,189
|
IssuesEvent
|
2020-11-30 13:49:38
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
no such package '@remotejdk_linux//'
|
P4 area-EngProd team-XProduct type: support / not a bug (process)
|
### Description of the problem / feature request:
While compiling bazel-native I get an error as below:
`| ERROR: /home/user/y/yocto30/sp/bazel/bazel_78IhpVnz/out/external/bazel_tools/tools/jdk/BUILD:305:1: no such package '@remotejdk_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/openjdk/azul-zulu-9.0.7.1-jdk9.0.7/zulu9.0.7.1-jdk9.0.7-linux_x64-allmodules.tar.gz] to /home/user/y/yocto30/sp/bazel/bazel_78IhpVnz/out/external/remotejdk_linux/zulu9.0.7.1-jdk9.0.7-linux_x64-allmodules.tar.gz: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target and referenced by '@bazel_tools//tools/jdk:remote_jdk'
| Analyzing: target //src:bazel_nojdk (237 packages loaded, 10085 targets config\
| ured)
| Fetching @remotejdk_linux; fetching 14s
| ERROR: Analysis of target '//src:bazel_nojdk' failed; build aborted: Analysis failed
`
I have no clue what is the "real" problem. The file is accessable by web-browser and curl. So I guess it is not a proxy issue (although I am behind one).
I have no clue at all what it is about the java issue. A "bitbake openjdk-8-native" has be succesfull.
No idea what this means "no such package '@remotejdk_linux//' and who is complaining?
Or is the certification path a problem? Which certificate and where to put it in?
### Feature requests: what underlying problem are you trying to solve with this feature?
I am using the meta-tensorflow layer in Yocto. I have to cross compile it for an Cortex A9.
### What operating system are you running Bazel on?
I am using Ubuntu 18.04 as a host machine and have to cross compile it for Cortex A9, the toolchain and all the stuff is already created by Yocto - so I justed added this.
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
git://git.yoctoproject.org/meta-tensorflow
99a3d8319db51d92b9b3671816f3e023c764264f
99a3d8319db51d92b9b3671816f3e023c764264f
### Have you found anything relevant by searching the web?
Have found some possible hints here (below), but I am not able to use this information.
https://github.com/bazelbuild/bazel/issues/6656
https://stackoverflow.com/questions/21076179/pkix-path-building-failed-and-unable-to-find-valid-certification-path-to-requ
If answer is in here then I need a little help to understand it. Thank you.
|
1.0
|
no such package '@remotejdk_linux//' - ### Description of the problem / feature request:
While compiling bazel-native I get an error as below:
`| ERROR: /home/user/y/yocto30/sp/bazel/bazel_78IhpVnz/out/external/bazel_tools/tools/jdk/BUILD:305:1: no such package '@remotejdk_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/openjdk/azul-zulu-9.0.7.1-jdk9.0.7/zulu9.0.7.1-jdk9.0.7-linux_x64-allmodules.tar.gz] to /home/user/y/yocto30/sp/bazel/bazel_78IhpVnz/out/external/remotejdk_linux/zulu9.0.7.1-jdk9.0.7-linux_x64-allmodules.tar.gz: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target and referenced by '@bazel_tools//tools/jdk:remote_jdk'
| Analyzing: target //src:bazel_nojdk (237 packages loaded, 10085 targets config\
| ured)
| Fetching @remotejdk_linux; fetching 14s
| ERROR: Analysis of target '//src:bazel_nojdk' failed; build aborted: Analysis failed
`
I have no clue what is the "real" problem. The file is accessable by web-browser and curl. So I guess it is not a proxy issue (although I am behind one).
I have no clue at all what it is about the java issue. A "bitbake openjdk-8-native" has be succesfull.
No idea what this means "no such package '@remotejdk_linux//' and who is complaining?
Or is the certification path a problem? Which certificate and where to put it in?
### Feature requests: what underlying problem are you trying to solve with this feature?
I am using the meta-tensorflow layer in Yocto. I have to cross compile it for an Cortex A9.
### What operating system are you running Bazel on?
I am using Ubuntu 18.04 as a host machine and have to cross compile it for Cortex A9, the toolchain and all the stuff is already created by Yocto - so I justed added this.
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
git://git.yoctoproject.org/meta-tensorflow
99a3d8319db51d92b9b3671816f3e023c764264f
99a3d8319db51d92b9b3671816f3e023c764264f
### Have you found anything relevant by searching the web?
Have found some possible hints here (below), but I am not able to use this information.
https://github.com/bazelbuild/bazel/issues/6656
https://stackoverflow.com/questions/21076179/pkix-path-building-failed-and-unable-to-find-valid-certification-path-to-requ
If answer is in here then I need a little help to understand it. Thank you.
|
process
|
no such package remotejdk linux description of the problem feature request while compiling bazel native i get an error as below error home user y sp bazel bazel out external bazel tools tools jdk build no such package remotejdk linux java io ioexception error downloading to home user y sp bazel bazel out external remotejdk linux linux allmodules tar gz sun security validator validatorexception pkix path building failed sun security provider certpath suncertpathbuilderexception unable to find valid certification path to requested target and referenced by bazel tools tools jdk remote jdk analyzing target src bazel nojdk packages loaded targets config ured fetching remotejdk linux fetching error analysis of target src bazel nojdk failed build aborted analysis failed i have no clue what is the real problem the file is accessable by web browser and curl so i guess it is not a proxy issue although i am behind one i have no clue at all what it is about the java issue a bitbake openjdk native has be succesfull no idea what this means no such package remotejdk linux and who is complaining or is the certification path a problem which certificate and where to put it in feature requests what underlying problem are you trying to solve with this feature i am using the meta tensorflow layer in yocto i have to cross compile it for an cortex what operating system are you running bazel on i am using ubuntu as a host machine and have to cross compile it for cortex the toolchain and all the stuff is already created by yocto so i justed added this what s the output of git remote get url origin git rev parse master git rev parse head git git yoctoproject org meta tensorflow have you found anything relevant by searching the web have found some possible hints here below but i am not able to use this information if answer is in here then i need a little help to understand it thank you
| 1
|
7,682
| 10,765,404,284
|
IssuesEvent
|
2019-11-01 10:55:40
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
Collection name differences
|
data discovery interoperability process graphs processes work in progress
|
The products (GET /data) deliver often quite different names for the data sets. That leads to a problem with the scripts as you need to rename the products in the process graphs. This could also be a problem with the band ids. Could we have a shared subset?
|
2.0
|
Collection name differences - The products (GET /data) deliver often quite different names for the data sets. That leads to a problem with the scripts as you need to rename the products in the process graphs. This could also be a problem with the band ids. Could we have a shared subset?
|
process
|
collection name differences the products get data deliver often quite different names for the data sets that leads to a problem with the scripts as you need to rename the products in the process graphs this could also be a problem with the band ids could we have a shared subset
| 1
|
18,380
| 24,510,559,993
|
IssuesEvent
|
2022-10-10 20:57:18
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Release 5.3.2 - October 2022
|
P1 type: process release team-OSS
|
# Status of Bazel 5.3.2
- Expected release date: Next Week
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/43)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 5.3, simply send a PR against the `release-5.3.2` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 5.3.2 - October 2022 - # Status of Bazel 5.3.2
- Expected release date: Next Week
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/43)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 5.3, simply send a PR against the `release-5.3.2` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release october status of bazel expected release date next week to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
| 1
|
14,752
| 18,022,716,463
|
IssuesEvent
|
2021-09-16 21:48:28
|
GoogleCloudPlatform/spring-cloud-gcp
|
https://api.github.com/repos/GoogleCloudPlatform/spring-cloud-gcp
|
closed
|
Find why TraceSampleApplicationIntegrationTests flakes consistently
|
P3 process
|
After #548 was merged, `TraceSampleApplicationIntegrationTests` started flaking. The scheduled daily test run often fails on the weekend -- I usually re-run on Monday, and the same run succeeds.
Some diagnostic information is added in #587.
If the test succeeds, it succeeds quickly. Otherwise it keeps pinging the backend trying to find the 25th span. The span that is missing is one of the final "send" spans on publish. In related news, the successful runs have both subscriber threads outputting "Overriding default instance of MessageHandlerMethodFactory with provided one.", while the failed runs only have one of the threads printing this out.
Bad run:
```
13:00:52.234 [http-nio-auto-1-exec-1] INFO com.example.WorkService - starting busy work
13:00:52.901 [http-nio-auto-1-exec-2] INFO com.example.ExampleController - meeting took 658ms
13:00:53.614 [http-nio-auto-1-exec-3] INFO com.example.ExampleController - meeting took 650ms
13:00:54.224 [http-nio-auto-1-exec-4] INFO com.example.ExampleController - meeting took 589ms
13:00:54.229 [http-nio-auto-1-exec-1] INFO com.example.WorkService - finished busy work
13:00:55.522 [gcp-pubsub-subscriber2] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:00:55.527 [gcp-pubsub-subscriber1] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:00:55.570 [gcp-pubsub-subscriber1] INFO com.example.Application - Message arrived! Payload: All work is done via PubSubTemplate.
13:00:55.574 [gcp-pubsub-subscriber2] INFO com.example.Application - Message arrived! Payload: All work is done via SI.
```
Good run:
```
13:09:18.827 [http-nio-auto-1-exec-1] INFO com.example.WorkService - starting busy work
13:09:19.408 [http-nio-auto-1-exec-2] INFO com.example.ExampleController - meeting took 571ms
13:09:19.998 [http-nio-auto-1-exec-3] INFO com.example.ExampleController - meeting took 528ms
13:09:20.518 [http-nio-auto-1-exec-4] INFO com.example.ExampleController - meeting took 501ms
13:09:20.521 [http-nio-auto-1-exec-1] INFO com.example.WorkService - finished busy work
13:09:20.699 [gcp-pubsub-subscriber1] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:09:20.728 [gcp-pubsub-subscriber1] INFO com.example.Application - Message arrived! Payload: All work is done via SI.
13:09:20.731 [gcp-pubsub-subscriber2] INFO com.example.Application - Message arrived! Payload: All work is done via PubSubTemplate.
```
|
1.0
|
Find why TraceSampleApplicationIntegrationTests flakes consistently - After #548 was merged, `TraceSampleApplicationIntegrationTests` started flaking. The scheduled daily test run often fails on the weekend -- I usually re-run on Monday, and the same run succeeds.
Some diagnostic information is added in #587.
If the test succeeds, it succeeds quickly. Otherwise it keeps pinging the backend trying to find the 25th span. The span that is missing is one of the final "send" spans on publish. In related news, the successful runs have both subscriber threads outputting "Overriding default instance of MessageHandlerMethodFactory with provided one.", while the failed runs only have one of the threads printing this out.
Bad run:
```
13:00:52.234 [http-nio-auto-1-exec-1] INFO com.example.WorkService - starting busy work
13:00:52.901 [http-nio-auto-1-exec-2] INFO com.example.ExampleController - meeting took 658ms
13:00:53.614 [http-nio-auto-1-exec-3] INFO com.example.ExampleController - meeting took 650ms
13:00:54.224 [http-nio-auto-1-exec-4] INFO com.example.ExampleController - meeting took 589ms
13:00:54.229 [http-nio-auto-1-exec-1] INFO com.example.WorkService - finished busy work
13:00:55.522 [gcp-pubsub-subscriber2] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:00:55.527 [gcp-pubsub-subscriber1] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:00:55.570 [gcp-pubsub-subscriber1] INFO com.example.Application - Message arrived! Payload: All work is done via PubSubTemplate.
13:00:55.574 [gcp-pubsub-subscriber2] INFO com.example.Application - Message arrived! Payload: All work is done via SI.
```
Good run:
```
13:09:18.827 [http-nio-auto-1-exec-1] INFO com.example.WorkService - starting busy work
13:09:19.408 [http-nio-auto-1-exec-2] INFO com.example.ExampleController - meeting took 571ms
13:09:19.998 [http-nio-auto-1-exec-3] INFO com.example.ExampleController - meeting took 528ms
13:09:20.518 [http-nio-auto-1-exec-4] INFO com.example.ExampleController - meeting took 501ms
13:09:20.521 [http-nio-auto-1-exec-1] INFO com.example.WorkService - finished busy work
13:09:20.699 [gcp-pubsub-subscriber1] INFO o.s.i.h.s.MessagingMethodInvokerHelper - Overriding default instance of MessageHandlerMethodFactory with provided one.
13:09:20.728 [gcp-pubsub-subscriber1] INFO com.example.Application - Message arrived! Payload: All work is done via SI.
13:09:20.731 [gcp-pubsub-subscriber2] INFO com.example.Application - Message arrived! Payload: All work is done via PubSubTemplate.
```
|
process
|
find why tracesampleapplicationintegrationtests flakes consistently after was merged tracesampleapplicationintegrationtests started flaking the scheduled daily test run often fails on the weekend i usually re run on monday and the same run succeeds some diagnostic information is added in if the test succeeds it succeeds quickly otherwise it keeps pinging the backend trying to find the span the span that is missing is one of the final send spans on publish in related news the successful runs have both subscriber threads outputting overriding default instance of messagehandlermethodfactory with provided one while the failed runs only have one of the threads printing this out bad run info com example workservice starting busy work info com example examplecontroller meeting took info com example examplecontroller meeting took info com example examplecontroller meeting took info com example workservice finished busy work info o s i h s messagingmethodinvokerhelper overriding default instance of messagehandlermethodfactory with provided one info o s i h s messagingmethodinvokerhelper overriding default instance of messagehandlermethodfactory with provided one info com example application message arrived payload all work is done via pubsubtemplate info com example application message arrived payload all work is done via si good run info com example workservice starting busy work info com example examplecontroller meeting took info com example examplecontroller meeting took info com example examplecontroller meeting took info com example workservice finished busy work info o s i h s messagingmethodinvokerhelper overriding default instance of messagehandlermethodfactory with provided one info com example application message arrived payload all work is done via si info com example application message arrived payload all work is done via pubsubtemplate
| 1
|
404,557
| 11,859,220,454
|
IssuesEvent
|
2020-03-25 12:59:59
|
Swi005/Chessbot3
|
https://api.github.com/repos/Swi005/Chessbot3
|
closed
|
previousBoards-listen er ofte i feil rekkefølge.
|
Medium Priority bug
|
Det gjør at goBack ofte velger feil brett når den skal gå tilbake to brett.
|
1.0
|
previousBoards-listen er ofte i feil rekkefølge. - Det gjør at goBack ofte velger feil brett når den skal gå tilbake to brett.
|
non_process
|
previousboards listen er ofte i feil rekkefølge det gjør at goback ofte velger feil brett når den skal gå tilbake to brett
| 0
|
96,446
| 3,968,563,440
|
IssuesEvent
|
2016-05-03 20:07:05
|
SpongePowered/Ore
|
https://api.github.com/repos/SpongePowered/Ore
|
closed
|
Plugin review system
|
high priority input wanted
|
[There is a thread about this already](https://forums.spongepowered.org/t/plugin-hosting/1150), it asked people which kind of plugin reviewing system they preferred. Should ore also include a plugin review system before the file goes live? Or many suggestions said that it should warn users if they are downloaded an unverified file (which might fit in Sponges open nature) and have community reviewed option (may be a good idea, but I am not sure how that will be done).
Proposal 3 suggests decompiling bytecode and checking for any suspicious code or pattern. But I strongly dislike the "After the first file, all files are immediately available for download" idea. Should we have a combination all?
|
1.0
|
Plugin review system - [There is a thread about this already](https://forums.spongepowered.org/t/plugin-hosting/1150), it asked people which kind of plugin reviewing system they preferred. Should ore also include a plugin review system before the file goes live? Or many suggestions said that it should warn users if they are downloaded an unverified file (which might fit in Sponges open nature) and have community reviewed option (may be a good idea, but I am not sure how that will be done).
Proposal 3 suggests decompiling bytecode and checking for any suspicious code or pattern. But I strongly dislike the "After the first file, all files are immediately available for download" idea. Should we have a combination all?
|
non_process
|
plugin review system it asked people which kind of plugin reviewing system they preferred should ore also include a plugin review system before the file goes live or many suggestions said that it should warn users if they are downloaded an unverified file which might fit in sponges open nature and have community reviewed option may be a good idea but i am not sure how that will be done proposal suggests decompiling bytecode and checking for any suspicious code or pattern but i strongly dislike the after the first file all files are immediately available for download idea should we have a combination all
| 0
|
481,074
| 13,879,850,894
|
IssuesEvent
|
2020-10-17 16:12:39
|
AY2021S1-CS2113T-F11-1/tp
|
https://api.github.com/repos/AY2021S1-CS2113T-F11-1/tp
|
opened
|
As a user, I want to be able to be able to delete all my records
|
priority.High
|
So that I can remove the files off my computer when they are not needed anymore
|
1.0
|
As a user, I want to be able to be able to delete all my records - So that I can remove the files off my computer when they are not needed anymore
|
non_process
|
as a user i want to be able to be able to delete all my records so that i can remove the files off my computer when they are not needed anymore
| 0
|
18,946
| 24,907,990,513
|
IssuesEvent
|
2022-10-29 14:01:59
|
pycaret/pycaret
|
https://api.github.com/repos/pycaret/pycaret
|
closed
|
[BUG]: time_series.prediction() returns NaN
|
bug time_series preprocessing missing_info
|
### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [x] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
* Hi everyone and dear contributors,
* I wanted to write a time series code with PyCaret for the Tabular Playground Series - Sep 2022 competition on Kaggle. Because that seemed easy to me. I wanted to divide it into 48 different attributes and forecast the num_sold values of individual attributes, so I wrote a code like this.
Data Source doesn't have any NaN
Some unique df returns with NaN values but someone doesn't. Why I can't understand that?
Example of them
Each line is representing to iteration
I used .isna().sum() function for showing missing values .
[array([234]),
array([0]),
array([19]),
array([181]),
array([0]),
array([0]),
array([225]),
array([0]),
array([302]),
array([0]),
array([0]),
array([0]),
array([248]),
array([0]),
array([267]),
array([0]),
array([107]),
array([53]),
array([0]),
array([0]),
array([308]),
array([0]),
array([0]),
array([235]),
array([257]),
array([0]),
array([279]),
array([0]),
array([0]),
array([0]),
array([0]),
array([223]),
array([0]),
array([0]),
array([238]),
array([199]),
array([301]),
array([0]),
array([0]),
array([73]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0])]
Thank you.
### Reproducible Example
```python
data source = https://www.kaggle.com/competitions/tabular-playground-series-sep-2022
```
### Expected Behavior
#### * import libraries
```python
!pip install --pre pycaret
import pandas as pd
from pycaret.time_series import *
import warnings
warnings.filterwarnings("ignore")
pd.set_option('display.max_rows', 500)
```
#### * Data Prep.
```python
train = pd.read_csv("../input/tabular-playground-series-sep-2022/train.csv")
test = pd.read_csv("../input/tabular-playground-series-sep-2022/test.csv")
sub = pd.read_csv("../input/tabular-playground-series-sep-2022/sample_submission.csv")
# Creating the concatenated group to forecast
train['ID'] = train['store'] + '-' + train['product'] + '-' + train['country']
test['ID'] = test['store'] + '-' + test['product'] + '-' + test['country']
train.drop(columns=["country","store","product"],inplace=True)
test.drop(columns=["country","store","product"],inplace=True)
```
#### * Main Code
```python
# Create empty DataFrame for concatenate
result_df = pd.DataFrame()
# We created unique attributes, we will iterating with this list
object_list = list(train['ID'].unique())
for i in tqdm(range(len(object_list))):
# Create unique dataset by unique ID
unique_df = train[train.ID == train['ID'].unique()[i]]
# for validation dataset - before the submission I choose 2020 for trying this code.
val_df = unique_df[unique_df["date"] >= '2020-01-01']#.str.contains("2020")].reset_index(drop=True)
train_df = unique_df[unique_df["date"] < '2020-01-01']#.str.contains("2020")]
# Maybe here is false but we expecting to 366 prediction values for each unique_df
# Shortyly I assumed that for predicting the forecasting horizon for this case
result_lenght = val_df.shape[0]
# When setting 365, the setup module returns an error about size, I'm also trying 1,3,7,15,25
setup_fh = 7
# Setup for pycaret
setup(data = train_df,
target = "num_sold",
verbose=False, #--> Another BUG but it is okay
index="date", # Very clever think, I don't need transforming with pd.datetime, index etc.
ignore_features= ["row_id","ID"], # Also clever
transform_target= "box-cox", # Very nice
fold = 1, # For the time saving for me :D
fh = setup_fh, # Default: 1, But I want to modular function because of I will create def with this code
session_id = 42) # mean of universe
# Maybe best way, compare models and choose best 3 and blend with them all. But I want to use prophet
# Tuning of Hyperparamters
tuned_model = tune_model(create_model('prophet',
cross_validation=True,
fold=1,
verbose=False),
fold=1,
optimize="SMAPE", # For competition
search_algorithm="random",
choose_better=True, # I don't want overfitting but firstly my code running properly
verbose=False)
finalized_model = finalize_model(tuned_model)
result = predict_model(finalized_model, fh = result_lenght)
#result = result[setup_fh:]
# I need for indexing and verifying
result["index"] = val_df['row_id'].values
result["ID"] = val_df['ID'].values
result["date"] = val_df['date'].values
result.sort_values(by="index",inplace=True)
# After concatenating with iter line
result_df = pd.concat([result_df,result])
```
### Actual Results
```python-traceback
My problem is different
```
### Installed Versions
<details>
System:
python: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0]
executable: /opt/conda/bin/python
machine: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
PyCaret required dependencies:
pip: 22.1.2
setuptools: 59.8.0
pycaret: 3.0.0.rc3
IPython: 7.33.0
ipywidgets: 7.7.1
tqdm: 4.64.0
numpy: 1.21.6
pandas: 1.3.5
jinja2: 3.1.2
scipy: 1.7.3
joblib: 1.1.0
sklearn: 1.0.2
pyod: Installed but version unavailable
imblearn: 0.9.0
category_encoders: 2.5.0
lightgbm: 3.3.2
numba: 0.55.2
requests: 2.28.1
matplotlib: 3.5.3
scikitplot: 0.3.7
yellowbrick: 1.5
plotly: 5.10.0
kaleido: 0.2.1
statsmodels: 0.13.2
sktime: 0.11.4
tbats: Installed but version unavailable
pmdarima: 2.0.1
psutil: 5.9.1
</details>
|
1.0
|
[BUG]: time_series.prediction() returns NaN - ### pycaret version checks
- [X] I have checked that this issue has not already been reported [here](https://github.com/pycaret/pycaret/issues).
- [X] I have confirmed this bug exists on the [latest version](https://github.com/pycaret/pycaret/releases) of pycaret.
- [x] I have confirmed this bug exists on the master branch of pycaret (pip install -U git+https://github.com/pycaret/pycaret.git@master).
### Issue Description
* Hi everyone and dear contributors,
* I wanted to write a time series code with PyCaret for the Tabular Playground Series - Sep 2022 competition on Kaggle. Because that seemed easy to me. I wanted to divide it into 48 different attributes and forecast the num_sold values of individual attributes, so I wrote a code like this.
Data Source doesn't have any NaN
Some unique df returns with NaN values but someone doesn't. Why I can't understand that?
Example of them
Each line is representing to iteration
I used .isna().sum() function for showing missing values .
[array([234]),
array([0]),
array([19]),
array([181]),
array([0]),
array([0]),
array([225]),
array([0]),
array([302]),
array([0]),
array([0]),
array([0]),
array([248]),
array([0]),
array([267]),
array([0]),
array([107]),
array([53]),
array([0]),
array([0]),
array([308]),
array([0]),
array([0]),
array([235]),
array([257]),
array([0]),
array([279]),
array([0]),
array([0]),
array([0]),
array([0]),
array([223]),
array([0]),
array([0]),
array([238]),
array([199]),
array([301]),
array([0]),
array([0]),
array([73]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0]),
array([0])]
Thank you.
### Reproducible Example
```python
data source = https://www.kaggle.com/competitions/tabular-playground-series-sep-2022
```
### Expected Behavior
#### * import libraries
```python
!pip install --pre pycaret
import pandas as pd
from pycaret.time_series import *
import warnings
warnings.filterwarnings("ignore")
pd.set_option('display.max_rows', 500)
```
#### * Data Prep.
```python
train = pd.read_csv("../input/tabular-playground-series-sep-2022/train.csv")
test = pd.read_csv("../input/tabular-playground-series-sep-2022/test.csv")
sub = pd.read_csv("../input/tabular-playground-series-sep-2022/sample_submission.csv")
# Creating the concatenated group to forecast
train['ID'] = train['store'] + '-' + train['product'] + '-' + train['country']
test['ID'] = test['store'] + '-' + test['product'] + '-' + test['country']
train.drop(columns=["country","store","product"],inplace=True)
test.drop(columns=["country","store","product"],inplace=True)
```
#### * Main Code
```python
# Create empty DataFrame for concatenate
result_df = pd.DataFrame()
# We created unique attributes, we will iterating with this list
object_list = list(train['ID'].unique())
for i in tqdm(range(len(object_list))):
# Create unique dataset by unique ID
unique_df = train[train.ID == train['ID'].unique()[i]]
# for validation dataset - before the submission I choose 2020 for trying this code.
val_df = unique_df[unique_df["date"] >= '2020-01-01']#.str.contains("2020")].reset_index(drop=True)
train_df = unique_df[unique_df["date"] < '2020-01-01']#.str.contains("2020")]
# Maybe here is false but we expecting to 366 prediction values for each unique_df
# Shortyly I assumed that for predicting the forecasting horizon for this case
result_lenght = val_df.shape[0]
# When setting 365, the setup module returns an error about size, I'm also trying 1,3,7,15,25
setup_fh = 7
# Setup for pycaret
setup(data = train_df,
target = "num_sold",
verbose=False, #--> Another BUG but it is okay
index="date", # Very clever think, I don't need transforming with pd.datetime, index etc.
ignore_features= ["row_id","ID"], # Also clever
transform_target= "box-cox", # Very nice
fold = 1, # For the time saving for me :D
fh = setup_fh, # Default: 1, But I want to modular function because of I will create def with this code
session_id = 42) # mean of universe
# Maybe best way, compare models and choose best 3 and blend with them all. But I want to use prophet
# Tuning of Hyperparamters
tuned_model = tune_model(create_model('prophet',
cross_validation=True,
fold=1,
verbose=False),
fold=1,
optimize="SMAPE", # For competition
search_algorithm="random",
choose_better=True, # I don't want overfitting but firstly my code running properly
verbose=False)
finalized_model = finalize_model(tuned_model)
result = predict_model(finalized_model, fh = result_lenght)
#result = result[setup_fh:]
# I need for indexing and verifying
result["index"] = val_df['row_id'].values
result["ID"] = val_df['ID'].values
result["date"] = val_df['date'].values
result.sort_values(by="index",inplace=True)
# After concatenating with iter line
result_df = pd.concat([result_df,result])
```
### Actual Results
```python-traceback
My problem is different
```
### Installed Versions
<details>
System:
python: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0]
executable: /opt/conda/bin/python
machine: Linux-5.10.133+-x86_64-with-debian-bullseye-sid
PyCaret required dependencies:
pip: 22.1.2
setuptools: 59.8.0
pycaret: 3.0.0.rc3
IPython: 7.33.0
ipywidgets: 7.7.1
tqdm: 4.64.0
numpy: 1.21.6
pandas: 1.3.5
jinja2: 3.1.2
scipy: 1.7.3
joblib: 1.1.0
sklearn: 1.0.2
pyod: Installed but version unavailable
imblearn: 0.9.0
category_encoders: 2.5.0
lightgbm: 3.3.2
numba: 0.55.2
requests: 2.28.1
matplotlib: 3.5.3
scikitplot: 0.3.7
yellowbrick: 1.5
plotly: 5.10.0
kaleido: 0.2.1
statsmodels: 0.13.2
sktime: 0.11.4
tbats: Installed but version unavailable
pmdarima: 2.0.1
psutil: 5.9.1
</details>
|
process
|
time series prediction returns nan pycaret version checks i have checked that this issue has not already been reported i have confirmed this bug exists on the of pycaret i have confirmed this bug exists on the master branch of pycaret pip install u git issue description hi everyone and dear contributors i wanted to write a time series code with pycaret for the tabular playground series sep competition on kaggle because that seemed easy to me i wanted to divide it into different attributes and forecast the num sold values of individual attributes so i wrote a code like this data source doesn t have any nan some unique df returns with nan values but someone doesn t why i can t understand that example of them each line is representing to iteration i used isna sum function for showing missing values array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array array thank you reproducible example python data source expected behavior import libraries python pip install pre pycaret import pandas as pd from pycaret time series import import warnings warnings filterwarnings ignore pd set option display max rows data prep python train pd read csv input tabular playground series sep train csv test pd read csv input tabular playground series sep test csv sub pd read csv input tabular playground series sep sample submission csv creating the concatenated group to forecast train train train train test test test test train drop columns inplace true test drop columns inplace true main code python create empty dataframe for concatenate result df pd dataframe we created unique attributes we will iterating with this list object list list train unique for i in tqdm range len object list create unique dataset by unique id unique df train unique for validation dataset before 
the submission i choose for trying this code val df unique df str contains reset index drop true train df unique df str contains maybe here is false but we expecting to prediction values for each unique df shortyly i assumed that for predicting the forecasting horizon for this case result lenght val df shape when setting the setup module returns an error about size i m also trying setup fh setup for pycaret setup data train df target num sold verbose false another bug but it is okay index date very clever think i don t need transforming with pd datetime index etc ignore features also clever transform target box cox very nice fold for the time saving for me d fh setup fh default but i want to modular function because of i will create def with this code session id mean of universe maybe best way compare models and choose best and blend with them all but i want to use prophet tuning of hyperparamters tuned model tune model create model prophet cross validation true fold verbose false fold optimize smape for competition search algorithm random choose better true i don t want overfitting but firstly my code running properly verbose false finalized model finalize model tuned model result predict model finalized model fh result lenght result result i need for indexing and verifying result val df values result val df values result val df values result sort values by index inplace true after concatenating with iter line result df pd concat actual results python traceback my problem is different installed versions system python packaged by conda forge default oct executable opt conda bin python machine linux with debian bullseye sid pycaret required dependencies pip setuptools pycaret ipython ipywidgets tqdm numpy pandas scipy joblib sklearn pyod installed but version unavailable imblearn category encoders lightgbm numba requests matplotlib scikitplot yellowbrick plotly kaleido statsmodels sktime tbats installed but version unavailable pmdarima psutil
| 1
|
37,858
| 6,650,189,349
|
IssuesEvent
|
2017-09-28 15:31:10
|
acquia/blt
|
https://api.github.com/repos/acquia/blt
|
closed
|
Modifying BLT Configuration definition is not accurate
|
documentation in progress
|
My system information:
* Operating system type: Windows10 (host), DrupalVM (guest)
* Operating system version: Ubuntu 16.04.3 LTS
* BLT version: 8.9.2
BLT Configuration definition said "BLT configuration can be customized by overriding the value of default variable values... Values loaded from the later files will overwrite values in earlier files."
That definition is not accurate because when project.yml and project.local.yml are loaded an array_merge is executed (arrayMergeRecursiveDistinct) instead of a real array override.
**project.yml (partial)**
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: [acsf, acquia_connector, shield]
```
**project.local.yml (partial)**
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: { }
```
I get the following output:
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: [acsf, acquia_connector, shield]
```
And I expected this to happen:
**modules.local.uninstall** should be empty if we are expecting an override,
|
1.0
|
Modifying BLT Configuration definition is not accurate - My system information:
* Operating system type: Windows10 (host), DrupalVM (guest)
* Operating system version: Ubuntu 16.04.3 LTS
* BLT version: 8.9.2
BLT Configuration definition said "BLT configuration can be customized by overriding the value of default variable values... Values loaded from the later files will overwrite values in earlier files."
That definition is not accurate because when project.yml and project.local.yml are loaded an array_merge is executed (arrayMergeRecursiveDistinct) instead of a real array override.
**project.yml (partial)**
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: [acsf, acquia_connector, shield]
```
**project.local.yml (partial)**
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: { }
```
I get the following output:
```
modules:
local:
enable: [dblog, devel, seckit, views_ui]
uninstall: [acsf, acquia_connector, shield]
```
And I expected this to happen:
**modules.local.uninstall** should be empty if we are expecting an override,
|
non_process
|
modifying blt configuration definition is not accurate my system information operating system type host drupalvm guest operating system version ubuntu lts blt version blt configuration definition said blt configuration can be customized by overriding the value of default variable values values loaded from the later files will overwrite values in earlier files that definition is not accurate because when project yml and project local yml are loaded an array merge is executed arraymergerecursivedistinct instead of a real array override project yml partial modules local enable uninstall project local yml partial modules local enable uninstall i get the following output modules local enable uninstall and i expected this to happen modules local uninstall should be empty if we are expecting an override
| 0
|
21,536
| 29,833,829,476
|
IssuesEvent
|
2023-06-18 15:41:29
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@dral/replit-notebook-extension-engine 1.0.8 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" let fork = spawn(process.execPath, [\n ...process.execArgv,\n node_engine_path\n ], {\n detached: true,\n stdio: 'ignore'\n });","location":"package/entry.js:103","message":"This package is silently executing another executable"}]}```
|
1.0
|
@dral/replit-notebook-extension-engine 1.0.8 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" let fork = spawn(process.execPath, [\n ...process.execArgv,\n node_engine_path\n ], {\n detached: true,\n stdio: 'ignore'\n });","location":"package/entry.js:103","message":"This package is silently executing another executable"}]}```
|
process
|
dral replit notebook extension engine has guarddog issues npm silent process execution n detached true n stdio ignore n location package entry js message this package is silently executing another executable
| 1
|
58,378
| 3,088,981,604
|
IssuesEvent
|
2015-08-25 19:17:10
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
opened
|
Setting limits for a favorite user does not work if they are not online
|
bug imported Priority-Medium
|
_From [mike.kor...@gmail.com](https://code.google.com/u/101495626515388303633/) on October 13, 2014 20:19:44_
1. Add users to favorites
2. When one of them leaves the hub, we try to set a download limit for them or edit their record in some other way - nothing works.
**Attachment:** [Fly_x64_r17690_friends_bug.png Fly_x64_r17690_friends_bug2.png](http://code.google.com/p/flylinkdc/issues/detail?id=1501)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1501_
|
1.0
|
Setting limits for a favorite user does not work if they are not online - _From [mike.kor...@gmail.com](https://code.google.com/u/101495626515388303633/) on October 13, 2014 20:19:44_
1. Add users to favorites
2. When one of them leaves the hub, we try to set a download limit for them or edit their record in some other way - nothing works.
**Attachment:** [Fly_x64_r17690_friends_bug.png Fly_x64_r17690_friends_bug2.png](http://code.google.com/p/flylinkdc/issues/detail?id=1501)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1501_
|
non_process
|
setting limits for a favorite user does not work if they are not online from on october add users to favorites when one of them leaves the hub we try to set a download limit for them or edit their record in some other way nothing works attachment original issue
| 0
|
1,856
| 4,676,338,449
|
IssuesEvent
|
2016-10-07 11:26:31
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Issue with drop_fields short-circuiting on unknown field
|
:Processors bug libbeat Pioneer Program
|
There appears to be a bug in the latest winlogbeat. I am trying to skip some fields, so using the drop_fields feature. If there is an event that doesn't contain a field, the beat appears to ignore the remaining fields in the drop_fields list.
For confirmed bugs, please report:
- Version: 5.0 beta
- Operating System: Windows 10
- Steps to Reproduce: using the below snippet in the winlogbeat.yml config. If field "user_made_up" doesn't exist, the remaining fields are ignored
processors:
- drop_fields:
fields: [computer_name, user.domain, user_made_up, beat.hostname, ...]
As suggested by @andrewkroh, the following work - each drop_fields is processed separately.
processors:
- drop_fields:
fields:
- computer_name
- drop_fields:
fields:
- beat.hostname
- drop_fields:
fields:
- user.domain
- drop_fields:
fields:
- user.name
|
1.0
|
Issue with drop_fields short-circuiting on unknown field - There appears to be a bug in the latest winlogbeat. I am trying to skip some fields, so using the drop_fields feature. If there is an event that doesn't contain a field, the beat appears to ignore the remaining fields in the drop_fields list.
For confirmed bugs, please report:
- Version: 5.0 beta
- Operating System: Windows 10
- Steps to Reproduce: using the below snippet in the winlogbeat.yml config. If field "user_made_up" doesn't exist, the remaining fields are ignored
processors:
- drop_fields:
fields: [computer_name, user.domain, user_made_up, beat.hostname, ...]
As suggested by @andrewkroh, the following work - each drop_fields is processed separately.
processors:
- drop_fields:
fields:
- computer_name
- drop_fields:
fields:
- beat.hostname
- drop_fields:
fields:
- user.domain
- drop_fields:
fields:
- user.name
|
process
|
issue with drop fields short circuiting on unknown field there appears to be a bug in the latest winlogbeat i am trying to skip some fields so using the drop fields feature if there is an event that doesn t contain a field the beat appears to ignore the remaining fields in the drop fields list for confirmed bugs please report version beta operating system windows steps to reproduce using the below snippet in the winlogbeat yml config if field user made up doesn t exist the remaining fields are ignored processors drop fields fields as suggested by andrewkroh the following work each drop fields is processed separately processors drop fields fields computer name drop fields fields beat hostname drop fields fields user domain drop fields fields user name
| 1
|
829,186
| 31,857,866,630
|
IssuesEvent
|
2023-09-15 08:47:25
|
bryntum/support
|
https://api.github.com/repos/bryntum/support
|
closed
|
TimeZone and syncDataOnLoad
|
bug resolved high-priority forum large-account OEM
|
[Forum post](https://forum.bryntum.com/viewtopic.php?p=126029&sid=5875d59f87aef620b752609d445b7409#p126029)
At the moment, syncDataOnLoad will break time zone conversion. We need to fix that.
Also, we need to come up with a solution to add and update records that is not time zone converted to a store that is time zone converted.
|
1.0
|
TimeZone and syncDataOnLoad - [Forum post](https://forum.bryntum.com/viewtopic.php?p=126029&sid=5875d59f87aef620b752609d445b7409#p126029)
At the moment, syncDataOnLoad will break time zone conversion. We need to fix that.
Also, we need to come up with a solution to add and update records that is not time zone converted to a store that is time zone converted.
|
non_process
|
timezone and syncdataonload at the moment syncdataonload will break time zone conversion we need to fix that also we need to come up with a solution to add and update records that is not time zone converted to a store that is time zone converted
| 0
|
9,411
| 12,406,915,075
|
IssuesEvent
|
2020-05-21 20:03:11
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Note on scalar parameters not correct according to eq expression docs
|
Pri1 devops-cicd-process/tech devops/prod doc-bug
|
There's a note on this page that reads like this:
> Scalar parameters without a specified type are treated as strings. For example, eq(parameters['myparam'], true) will return true, even if the myparam parameter is the word false, if myparam is not explicitly made boolean. Non-empty strings are cast to true in a Boolean context. That expression could be rewritten to explicitly compare strings: eq(parameters['myparam'], 'true').
So according to this note, if `parameters['myparam']` is the string "false" (or any other non-empty string), that will convert to the boolean value `true` and then do `eq(true, true)`.
But unless I'm missing something here, this doesn't make much sense if we take into account what is said for the `eq` expression [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#eq):
> Converts right parameter to match type of left parameter. Returns `False` if conversion fails.
Given this, we should be doing `eq('false', 'true')`... so which one is correct?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Note on scalar parameters not correct according to eq expression docs - There's a note on this page that reads like this:
> Scalar parameters without a specified type are treated as strings. For example, eq(parameters['myparam'], true) will return true, even if the myparam parameter is the word false, if myparam is not explicitly made boolean. Non-empty strings are cast to true in a Boolean context. That expression could be rewritten to explicitly compare strings: eq(parameters['myparam'], 'true').
So according to this note, if `parameters['myparam']` is the string "false" (or any other non-empty string), that will convert to the boolean value `true` and then do `eq(true, true)`.
But unless I'm missing something here, this doesn't make much sense if we take into account what is said for the `eq` expression [here](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#eq):
> Converts right parameter to match type of left parameter. Returns `False` if conversion fails.
Given this, we should be doing `eq('false', 'true')`... so which one is correct?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
label: process
text:
note on scalar parameters not correct according to eq expression docs there s a note on this page that reads like this scalar parameters without a specified type are treated as strings for example eq parameters true will return true even if the myparam parameter is the word false if myparam is not explicitly made boolean non empty strings are cast to true in a boolean context that expression could be rewritten to explicitly compare strings eq parameters true so according to this note if parameters is the string false or any other non empty string that will convert to the boolean value true and then do eq true true but unless i m missing something here this doesn t make much sense if we take into account what is said for the eq expression converts right parameter to match type of left parameter returns false if conversion fails given this we should be doing eq false true so which one is correct document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id bbdc version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
binary_label: 1
row 400976 | id 11783698871 | IssuesEvent | closed | 2020-03-17 06:26:47
repo: AY1920S2-CS2103T-W16-1/main (https://api.github.com/repos/AY1920S2-CS2103T-W16-1/main)
title: I want to delete a task
labels: priority.High type.Story
body: As a user, I want to delete a task, so that I can remove tasks that I no longer care to track
index: 1.0
text_combine: I want to delete a task - As a user, I want to delete a task, so that I can remove tasks that I no longer care to track
label: non_process
text: i want to delete a task as a user i want to delete a task so that i can remove tasks that i no longer care to track
binary_label: 0
row 10972 | id 13776545326 | IssuesEvent | opened | 2020-10-08 09:36:52
repo: prisma/e2e-tests (https://api.github.com/repos/prisma/e2e-tests)
title: Print start time in the slack logs
labels: kind/improvement process/candidate
body:
Sometimes, the message from GH actions was delayed and that caused some confusion pointing to something being broken while the logs were stale.
Adding a timestamp to the log will address that.
Internal discussion: https://prisma-company.slack.com/archives/CV0A8N0FL/p1602058276005800?thread_ts=1602000886.003200&cid=CV0A8N0FL
index: 1.0
text_combine:
Print start time in the slack logs - Sometimes, the message from GH actions was delayed and that caused some confusion pointing to something being broken while the logs were stale.
Adding a timestamp to the log will address that.
Internal discussion: https://prisma-company.slack.com/archives/CV0A8N0FL/p1602058276005800?thread_ts=1602000886.003200&cid=CV0A8N0FL
label: process
text: print start time in the slack logs sometimes the message from gh actions was delayed and that caused some confusion pointing to something being broken while the logs were stale adding a timestamp to the log will address that internal discussion
binary_label: 1
row 7215 | id 10346996823 | IssuesEvent | closed | 2019-09-04 16:22:05
repo: qri-io/desktop (https://api.github.com/repos/qri-io/desktop)
title: Add app icons to the electron app
labels: chore main process
body: - Confirm that the appropriate icon also shows up under "About Qri Desktop" in the File menu
index: 1.0
text_combine: Add app icons to the electron app - - Confirm that the appropriate icon also shows up under "About Qri Desktop" in the File menu
label: process
text: add app icons to the electron app confirm that the appropriate icon also shows up under about qri desktop in the file menu
binary_label: 1
row 4668 | id 2562585707 | IssuesEvent | opened | 2015-02-06 03:26:19
repo: cs2103jan2015-t09-3j/main (https://api.github.com/repos/cs2103jan2015-t09-3j/main)
title: As a user, I want the app to scold me if I never do my task on time and vice versa
labels: priority.low type.enhancement type.story
body: so that I will be more motivated to do tasks on time
index: 1.0
text_combine: As a user, I want the app to scold me if I never do my task on time and vice versa - so that I will be more motivated to do tasks on time
label: non_process
text: as a user i want the app to scold me if i never do my task on time and vice versa so that i will be more motivated to do tasks on time
binary_label: 0
row 461 | id 2902638070 | IssuesEvent | closed | 2015-06-18 08:28:38
repo: mitchellh/packer (https://api.github.com/repos/mitchellh/packer)
title: Missing metadata.json (Packer (push) -> ATLAS (build) -> Vagrant (download))
labels: bug post-processor/atlas
body:
Atlas builds the vagrant.box, but when I try to use it, vagrant tell me that there is no metadata.json.
My tpl:
```json
{
"min_packer_version": "0.7.5",
"variables": {
"debian_version": "8.0.0",
"box_version": "1.0.0"
},
"builders": [
{
"type": "virtualbox-iso",
"iso_checksum": "d9209f355449fe13db3963571b1f52d4",
"iso_checksum_type": "md5",
"iso_url": "http://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-{{user `debian_version`}}-amd64-netinst.iso",
"boot_command": [
"<esc> ",
"install ",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/jessie.preseed.cfg ",
"debian-installer=de_DE ",
"auto ",
"locale=de_DE ",
"kbd-chooser/method=de ",
"netcfg/get_hostname=dockerhost.dev ",
"netcfg/get_domain=dockerhost.dev ",
"fb=false ",
"debconf/frontend=noninteractive ",
"console-setup/ask_detect=false ",
"console-keymaps-at/keymap=de ",
"keyboard-configuration/xkb-keymap=de ",
"<enter>"
],
"boot_wait": "4s",
"disk_size": "100000",
"guest_additions_mode": "disable",
"guest_os_type": "Debian_64",
"headless": false,
"http_directory": "http",
"shutdown_command": "echo 'vagrant' | sudo -S shutdown -h now",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"vm_name": "debian-{{user `debian_version`}}"
}
],
"push": {
"name": "tobiasb/dctest",
"base_dir": ".",
"vcs": true
},
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/base.sh",
"scripts/vagrant.sh",
"scripts/install_software.sh",
"scripts/cleanup.sh"
],
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant' | sudo -S sh '{{ .Path }}'"
}
}
}
],
"post-processors": [
{
"type": "vagrant",
"keep_input_artifact": false
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "tobiasb/dctest",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox"
}
}
]
}
```
Terminal output:
```
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'tobiasb/dctest' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Loading metadata for box 'tobiasb/dctest'
default: URL: https://atlas.hashicorp.com/tobiasb/dctest
==> default: Adding box 'tobiasb/dctest' (v0.8) for provider: virtualbox
default: Downloading: https://atlas.hashicorp.com/tobiasb/boxes/dctest/versions/0.8/providers/virtualbox.box
The "metadata.json" file for the box 'tobiasb/dctest' was not found.
Boxes require this file in order for Vagrant to determine the
provider it was made for. If you made the box, please add a
"metadata.json" file to it. If someone else made the box, please
notify the box creator that the box is corrupt. Documentation for
box file format can be found at the URL below:
http://docs.vagrantup.com/v2/boxes/format.html
```
And btw I do not find the options for artifact_type. And the "Delete" button on Atlas (```https://atlas.hashicorp.com/VENDOR/boxes/BOXNAME/settings```) do not delete my boxes or packer tpl configs it just hide it from the boxes/configs list ```https://atlas.hashicorp.com/vagrant```. I can unarchive it again to the frontend list. And I believe this not what we want. ;-)
index: 1.0
text_combine:
Missing metadata.json (Packer (push) -> ATLAS (build) -> Vagrant (download)) - Atlas builds the vagrant.box, but when I try to use it, vagrant tell me that there is no metadata.json.
My tpl:
```json
{
"min_packer_version": "0.7.5",
"variables": {
"debian_version": "8.0.0",
"box_version": "1.0.0"
},
"builders": [
{
"type": "virtualbox-iso",
"iso_checksum": "d9209f355449fe13db3963571b1f52d4",
"iso_checksum_type": "md5",
"iso_url": "http://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-{{user `debian_version`}}-amd64-netinst.iso",
"boot_command": [
"<esc> ",
"install ",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/jessie.preseed.cfg ",
"debian-installer=de_DE ",
"auto ",
"locale=de_DE ",
"kbd-chooser/method=de ",
"netcfg/get_hostname=dockerhost.dev ",
"netcfg/get_domain=dockerhost.dev ",
"fb=false ",
"debconf/frontend=noninteractive ",
"console-setup/ask_detect=false ",
"console-keymaps-at/keymap=de ",
"keyboard-configuration/xkb-keymap=de ",
"<enter>"
],
"boot_wait": "4s",
"disk_size": "100000",
"guest_additions_mode": "disable",
"guest_os_type": "Debian_64",
"headless": false,
"http_directory": "http",
"shutdown_command": "echo 'vagrant' | sudo -S shutdown -h now",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"vm_name": "debian-{{user `debian_version`}}"
}
],
"push": {
"name": "tobiasb/dctest",
"base_dir": ".",
"vcs": true
},
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/base.sh",
"scripts/vagrant.sh",
"scripts/install_software.sh",
"scripts/cleanup.sh"
],
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant' | sudo -S sh '{{ .Path }}'"
}
}
}
],
"post-processors": [
{
"type": "vagrant",
"keep_input_artifact": false
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "tobiasb/dctest",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox"
}
}
]
}
```
Terminal output:
```
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'tobiasb/dctest' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Loading metadata for box 'tobiasb/dctest'
default: URL: https://atlas.hashicorp.com/tobiasb/dctest
==> default: Adding box 'tobiasb/dctest' (v0.8) for provider: virtualbox
default: Downloading: https://atlas.hashicorp.com/tobiasb/boxes/dctest/versions/0.8/providers/virtualbox.box
The "metadata.json" file for the box 'tobiasb/dctest' was not found.
Boxes require this file in order for Vagrant to determine the
provider it was made for. If you made the box, please add a
"metadata.json" file to it. If someone else made the box, please
notify the box creator that the box is corrupt. Documentation for
box file format can be found at the URL below:
http://docs.vagrantup.com/v2/boxes/format.html
```
And btw I do not find the options for artifact_type. And the "Delete" button on Atlas (```https://atlas.hashicorp.com/VENDOR/boxes/BOXNAME/settings```) do not delete my boxes or packer tpl configs it just hide it from the boxes/configs list ```https://atlas.hashicorp.com/vagrant```. I can unarchive it again to the frontend list. And I believe this not what we want. ;-)
label: process
text:
missing metadata json packer push atlas build vagrant download atlas builds the vagrant box but when i try to use it vagrant tell me that there is no metadata json my tpl json min packer version variables debian version box version builders type virtualbox iso iso checksum iso checksum type iso url debian version netinst iso boot command install preseed url httpip httpport jessie preseed cfg debian installer de de auto locale de de kbd chooser method de netcfg get hostname dockerhost dev netcfg get domain dockerhost dev fb false debconf frontend noninteractive console setup ask detect false console keymaps at keymap de keyboard configuration xkb keymap de boot wait disk size guest additions mode disable guest os type debian headless false http directory http shutdown command echo vagrant sudo s shutdown h now ssh username vagrant ssh password vagrant ssh port vm name debian user debian version push name tobiasb dctest base dir vcs true provisioners type shell scripts scripts base sh scripts vagrant sh scripts install software sh scripts cleanup sh override virtualbox iso execute command echo vagrant sudo s sh path post processors type vagrant keep input artifact false type atlas only artifact tobiasb dctest artifact type vagrant box metadata provider virtualbox terminal output bringing machine default up with virtualbox provider default box tobiasb dctest could not be found attempting to find and install default box provider virtualbox default box version default loading metadata for box tobiasb dctest default url default adding box tobiasb dctest for provider virtualbox default downloading the metadata json file for the box tobiasb dctest was not found boxes require this file in order for vagrant to determine the provider it was made for if you made the box please add a metadata json file to it if someone else made the box please notify the box creator that the box is corrupt documentation for box file format can be found at the url below and btw i do not find the options for artifact type and the delete button on atlas do not delete my boxes or packer tpl configs it just hide it from the boxes configs list i can unarchive it again to the frontend list and i believe this not what we want
binary_label: 1
row 7889 | id 11054524496 | IssuesEvent | closed | 2019-12-10 13:39:37
repo: googleapis/google-cloud-dotnet (https://api.github.com/repos/googleapis/google-cloud-dotnet)
title: Releases without googleapis.dev documentation
labels: type: process
body:
The following releases failed to push their documentation to googleapis.dev:
- [x] Google.Cloud.Language.V1 version 1.4.0
- [x] Google.Cloud.VideoIntelligence.V1 version 1.3.0
- [x] Google.Cloud.TextToSpeech.V1 version 1.1.0
- [x] Google.Cloud.Redis.V1 version 1.1.0
- [x] Google.Cloud.ErrorReporting.V1Beta1 version 1.0.0-beta10
- [x] Google.Cloud.Scheduler.V1 version 1.1.0
The NuGet packages were successfully pushed, and the docs were uploaded to the gh-pages branch, but they need to be pushed to googleapis.dev by a Kokoro job. We cuirrently don't have any tooling for this.
index: 1.0
text_combine:
Releases without googleapis.dev documentation - The following releases failed to push their documentation to googleapis.dev:
- [x] Google.Cloud.Language.V1 version 1.4.0
- [x] Google.Cloud.VideoIntelligence.V1 version 1.3.0
- [x] Google.Cloud.TextToSpeech.V1 version 1.1.0
- [x] Google.Cloud.Redis.V1 version 1.1.0
- [x] Google.Cloud.ErrorReporting.V1Beta1 version 1.0.0-beta10
- [x] Google.Cloud.Scheduler.V1 version 1.1.0
The NuGet packages were successfully pushed, and the docs were uploaded to the gh-pages branch, but they need to be pushed to googleapis.dev by a Kokoro job. We cuirrently don't have any tooling for this.
label: process
text: releases without googleapis dev documentation the following releases failed to push their documentation to googleapis dev google cloud language version google cloud videointelligence version google cloud texttospeech version google cloud redis version google cloud errorreporting version google cloud scheduler version the nuget packages were successfully pushed and the docs were uploaded to the gh pages branch but they need to be pushed to googleapis dev by a kokoro job we cuirrently don t have any tooling for this
binary_label: 1
row 17862 | id 23808735546 | IssuesEvent | closed | 2022-09-04 12:46:06
repo: streamnative/flink (https://api.github.com/repos/streamnative/flink)
title: [Test][FLINK-28352] [Umbrella] Make Pulsar connector stable on Flink CI
labels: compute/data-processing type/bug
body:
- [x] [[FLINK-26721]PulsarSourceITCase.testSavepoint failed on azure pipeline](https://issues.apache.org/jira/browse/FLINK-26721)
- [x] [[FLINK-23944]PulsarSourceITCase.testTaskManagerFailure is instable](https://issues.apache.org/jira/browse/FLINK-23944)
- [x] [[FLINK-26177]PulsarSourceITCase.testScaleDown fails with timeout](https://issues.apache.org/jira/browse/FLINK-26177)
- [x] [[FLINK-25815]PulsarSourceITCase.testTaskManagerFailure failed on azure due to timeout](https://issues.apache.org/jira/browse/FLINK-25815)
- [x] [[FLINK-27833]PulsarSourceITCase.testTaskManagerFailure failed with AssertionError](https://issues.apache.org/jira/browse/FLINK-27833)
- [x] [[FLINK-24872]PulsarSourceITCase.testMultipleSplits failed on AZP](https://issues.apache.org/jira/browse/FLINK-24872)
- [x] [[FLINK-25884]PulsarSourceITCase.testTaskManagerFailure failed on azure](https://issues.apache.org/jira/browse/FLINK-25884)
- [x] [[FLINK-27917]PulsarUnorderedPartitionSplitReaderTest.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition failed with AssertionError](https://issues.apache.org/jira/browse/FLINK-27917)
- [x] [[FLINK-25740]PulsarSourceOrderedE2ECase fails on azure](https://issues.apache.org/jira/browse/FLINK-25740)
- [x] [[FLINK-26237]PulsarSourceOrderedE2ECase failed on azure due to timeout](https://issues.apache.org/jira/browse/FLINK-26237)
- [x] [[FLINK-26238]PulsarSinkITCase.writeRecordsToPulsar failed on azure](https://issues.apache.org/jira/browse/FLINK-26238)
- [x] [[FLINK-26240]PulsarSinkITCase.tearDown failed in azure](https://issues.apache.org/jira/browse/FLINK-26240)
- [x] [[FLINK-27388]PulsarSourceEnumeratorTest#discoverPartitionsTriggersAssignments seems flaky](https://issues.apache.org/jira/browse/FLINK-27388)
- [ ] [[FLINK-26980]PulsarUnorderedSourceReaderTest hang on azure](https://issues.apache.org/jira/browse/FLINK-26980)
- [ ] [[FLINK-24302]Direct buffer memory leak on Pulsar connector with Java 11](https://issues.apache.org/jira/browse/FLINK-24302)
index: 1.0
text_combine:
[Test][FLINK-28352] [Umbrella] Make Pulsar connector stable on Flink CI - - [x] [[FLINK-26721]PulsarSourceITCase.testSavepoint failed on azure pipeline](https://issues.apache.org/jira/browse/FLINK-26721)
- [x] [[FLINK-23944]PulsarSourceITCase.testTaskManagerFailure is instable](https://issues.apache.org/jira/browse/FLINK-23944)
- [x] [[FLINK-26177]PulsarSourceITCase.testScaleDown fails with timeout](https://issues.apache.org/jira/browse/FLINK-26177)
- [x] [[FLINK-25815]PulsarSourceITCase.testTaskManagerFailure failed on azure due to timeout](https://issues.apache.org/jira/browse/FLINK-25815)
- [x] [[FLINK-27833]PulsarSourceITCase.testTaskManagerFailure failed with AssertionError](https://issues.apache.org/jira/browse/FLINK-27833)
- [x] [[FLINK-24872]PulsarSourceITCase.testMultipleSplits failed on AZP](https://issues.apache.org/jira/browse/FLINK-24872)
- [x] [[FLINK-25884]PulsarSourceITCase.testTaskManagerFailure failed on azure](https://issues.apache.org/jira/browse/FLINK-25884)
- [x] [[FLINK-27917]PulsarUnorderedPartitionSplitReaderTest.consumeMessageCreatedBeforeHandleSplitsChangesAndResetToEarliestPosition failed with AssertionError](https://issues.apache.org/jira/browse/FLINK-27917)
- [x] [[FLINK-25740]PulsarSourceOrderedE2ECase fails on azure](https://issues.apache.org/jira/browse/FLINK-25740)
- [x] [[FLINK-26237]PulsarSourceOrderedE2ECase failed on azure due to timeout](https://issues.apache.org/jira/browse/FLINK-26237)
- [x] [[FLINK-26238]PulsarSinkITCase.writeRecordsToPulsar failed on azure](https://issues.apache.org/jira/browse/FLINK-26238)
- [x] [[FLINK-26240]PulsarSinkITCase.tearDown failed in azure](https://issues.apache.org/jira/browse/FLINK-26240)
- [x] [[FLINK-27388]PulsarSourceEnumeratorTest#discoverPartitionsTriggersAssignments seems flaky](https://issues.apache.org/jira/browse/FLINK-27388)
- [ ] [[FLINK-26980]PulsarUnorderedSourceReaderTest hang on azure](https://issues.apache.org/jira/browse/FLINK-26980)
- [ ] [[FLINK-24302]Direct buffer memory leak on Pulsar connector with Java 11](https://issues.apache.org/jira/browse/FLINK-24302)
label: process
text:
make pulsar connector stable on flink ci pulsarsourceitcase testsavepoint failed on azure pipeline pulsarsourceitcase testtaskmanagerfailure is instable pulsarsourceitcase testscaledown fails with timeout pulsarsourceitcase testtaskmanagerfailure failed on azure due to timeout pulsarsourceitcase testtaskmanagerfailure failed with assertionerror pulsarsourceitcase testmultiplesplits failed on azp pulsarsourceitcase testtaskmanagerfailure failed on azure pulsarunorderedpartitionsplitreadertest consumemessagecreatedbeforehandlesplitschangesandresettoearliestposition failed with assertionerror fails on azure failed on azure due to timeout pulsarsinkitcase writerecordstopulsar failed on azure pulsarsinkitcase teardown failed in azure pulsarsourceenumeratortest discoverpartitionstriggersassignments seems flaky pulsarunorderedsourcereadertest hang on azure direct buffer memory leak on pulsar connector with java
binary_label: 1
row 12096 | id 14740096081 | IssuesEvent | closed | 2021-01-07 08:30:34
repo: kdjstudios/SABillingGitlab (https://api.github.com/repos/kdjstudios/SABillingGitlab)
title: Allentown - SA Billing - Late Fee Account List
labels: anc-process anp-important ant-bug has attachment
body:
In GitLab by @kdjstudios on Oct 3, 2018, 11:13
[Allentown.xlsx](/uploads/1c863539af5033bf32255f9bb33bd168/Allentown.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-43435
index: 1.0
text_combine:
Allentown - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 3, 2018, 11:13
[Allentown.xlsx](/uploads/1c863539af5033bf32255f9bb33bd168/Allentown.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-43435
label: process
text: allentown sa billing late fee account list in gitlab by kdjstudios on oct uploads allentown xlsx hd
binary_label: 1