Column schema (string columns report character-length ranges; `stringclasses` columns report the number of distinct values):

| Column       | Dtype         | Values          |
|--------------|---------------|-----------------|
| Unnamed: 0   | int64         | 0 to 832k       |
| id           | float64       | 2.49B to 32.1B  |
| type         | stringclasses | 1 value         |
| created_at   | stringlengths | 19 to 19        |
| repo         | stringlengths | 5 to 112        |
| repo_url     | stringlengths | 34 to 141       |
| action       | stringclasses | 3 values        |
| title        | stringlengths | 1 to 757        |
| labels       | stringlengths | 4 to 664        |
| body         | stringlengths | 3 to 261k       |
| index        | stringclasses | 10 values       |
| text_combine | stringlengths | 96 to 261k      |
| label        | stringclasses | 2 values        |
| text         | stringlengths | 96 to 232k      |
| binary_label | int64         | 0 to 1          |
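Per-column summaries of the kind shown above can be reproduced with pandas. A minimal sketch, assuming a toy two-row frame in place of the real dataset (whose filename is not given here); the column values are taken from the sample records below:

```python
import pandas as pd

# Hypothetical stand-in for the real issues dataset (filename not given in the source).
df = pd.DataFrame({
    "title": ["Revisit Random Number Generation (Trac #2013)",
              "generic HazelcastException thrown, wrapping NativeOutOfMemoryError"],
    "label": ["defect", "defect"],
    "binary_label": [1, 1],
})

# Emit a summary in the same spirit as the schema table:
# string columns -> length range and distinct-value count, numeric columns -> min/max.
for col in df.columns:
    s = df[col]
    if s.dtype == object:
        lengths = s.str.len()
        print(f"{col}: stringlengths {lengths.min()} to {lengths.max()}, "
              f"{s.nunique()} value(s)")
    else:
        print(f"{col}: {s.dtype} {s.min()} to {s.max()}")
```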
Sample record 1:

- Unnamed: 0: 48,226
- id: 13,067,548,018
- type: IssuesEvent
- created_at: 2020-07-31 00:48:48
- repo: icecube-trac/tix2
- repo_url: https://api.github.com/repos/icecube-trac/tix2
- action: closed
- title: Revisit Random Number Generation (Trac #2013)
- labels: Migrated from Trac combo core defect
- body: (full text below)
- index: 1.0
- text_combine: the title and body joined with " - " (verbatim duplicate of the fields above, omitted here)
- label: defect
- text: lowercased, punctuation-stripped rendering of text_combine (derived duplicate, omitted here)
- binary_label: 1

body:

Currently icetray has its own random number generator interface. With 3 instances:
* I3TRandom: ROOT's implementation of mt19937
* I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable
* I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation.

Random number generation has not been revisited in the past 13 years. Things to consider:
* boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`)
* we only use version 2.0a of SPRNG, newer versions are available
* SPRNG has a failure mode where it uses the exact same stream for every job in a batch
* SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers
* It is unclear if combining SPRNG and mt19937 is a statistically valid RNG

Requirements:
* Determine whether to continue to use our custom RNG interface or switch to the c++11 one
* Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system.
* should be able to set the seed without causing all streams to be identical

Migrated from https://code.icecube.wisc.edu/ticket/2013

```json
{
  "status": "closed",
  "changetime": "2019-09-18T05:54:49",
  "description": "Currently icetray has its own random number generator interface. With 3 instances:\n * I3TRandom: ROOT's implementation of mt19937\n * I3GSLRandom: uses GSL's implementation of mt19937 but can be changed with and environment variable\n * I3SPRNGRandomService: combines the output of SPRNG and GSL for reasons which are not adequately explained in the documentation. \n\nRandom number generation has not been revisited in the past 13 years. Things to consider:\n * boost/c++11 has a random number generator interface which has all the functions we need (except for the unused `PoissonD`)\n * we only use version 2.0a of SPRNG, newer versions are available\n * SPRNG has a failure mode where it uses the exact same stream for every job in a batch \n * SPRNG has a nonstandard install script and who knows how long it will continue to work with current compilers\n * It is unclear if combining SPRNG and mt19937 is a statistically valid RNG\n\nRequirements: \n * Determine whether to continue to use our custom RNG interface or switch to the c++11 one\n * Determine if there is a better RNG for batch processing (ie it can derive multiple streams from both the dataset number and the job number which are all independent). Bonus points for having a normal build system.\n * should be able to set the seed without causing all streams to be identical \n\n\n\n",
  "reporter": "kjmeagher",
  "cc": "cweaver, olivas",
  "resolution": "duplicate",
  "_ts": "1568786089339356",
  "component": "combo core",
  "summary": "Revisit Random Number Generation",
  "priority": "normal",
  "keywords": "random, rng, gsl, sprng",
  "time": "2017-05-09T19:53:04",
  "milestone": "Long-Term Future",
  "owner": "juancarlos",
  "type": "defect"
}
```
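The batch-processing requirement in the ticket above (deriving multiple independent streams from both the dataset number and the job number) is exactly what seed-sequence designs provide. A minimal sketch using numpy's `SeedSequence`, with hypothetical dataset and job numbers; this illustrates the technique only, not what icetray ultimately adopted:

```python
import numpy as np

def make_rng(dataset_number: int, job_number: int) -> np.random.Generator:
    """Derive a per-job RNG stream from a (dataset, job) pair.

    Illustration only: names and numbers here are hypothetical.
    SeedSequence hashes its entropy, so streams for adjacent job
    numbers are statistically independent, and re-seeding with the
    same pair reproduces the same stream.
    """
    seed = np.random.SeedSequence([dataset_number, job_number])
    return np.random.default_rng(seed)

# Different jobs in the same dataset get different, independent streams.
rng_a = make_rng(dataset_number=21002, job_number=0)
rng_b = make_rng(dataset_number=21002, job_number=1)
print(rng_a.random(3))
print(rng_b.random(3))
```

This addresses the ticket's third requirement as well: setting a global seed (the dataset number) does not collapse all job streams into one.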
Sample record 2:

- Unnamed: 0: 33,934
- id: 7,302,940,518
- type: IssuesEvent
- created_at: 2018-02-27 11:21:05
- repo: hazelcast/hazelcast
- repo_url: https://api.github.com/repos/hazelcast/hazelcast
- action: closed
- title: generic HazelcastException thrown, wrapping NativeOutOfMemoryError
- labels: Team: Core Type: Defect
- body: (full text below)
- index: 1.0
- text_combine: the title and body joined with " - " (verbatim duplicate of the fields above, omitted here)
- label: defect
- text: lowercased, punctuation-stripped rendering of text_combine (derived duplicate; the source extract is cut off partway through this field, omitted here)
- binary_label: (missing; the source extract is truncated)

body:

``` com.hazelcast.core.HazelcastException: com.hazelcast.memory.NativeOutOfMemoryError: Not enough contiguous memory available! Cannot allocate 7814 KB! Max Native Memory: 1640 MB, Committed Native Memory: 1636 MB, Used Native Memory: 946 MB at com.hazelcast.util.ExceptionUtil$1.create(ExceptionUtil.java:40) at com.hazelcast.util.ExceptionUtil.peel(ExceptionUtil.java:124) at com.hazelcast.util.ExceptionUtil.peel(ExceptionUtil.java:69) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.sendClientMessage(AbstractMessageTask.java:221) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.handleProcessingFailure(AbstractMessageTask.java:162) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.onFailure(AbstractPartitionMessageTask.java:96) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:246) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.execute(AbstractPartitionMessageTask.java:78) at com.hazelcast.spi.impl.AbstractInvocationFuture.unblock(AbstractInvocationFuture.java:239) at com.hazelcast.spi.impl.AbstractInvocationFuture.andThen(AbstractInvocationFuture.java:215) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:69) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:123) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:103) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100) at ------ submitted from ------.(Unknown Source) at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:96) at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:33) at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:155) at com.hazelcast.client.spi.ClientProxy.invokeOnPartition(ClientProxy.java:204) at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:198) at com.hazelcast.client.proxy.ClientMapProxy.putInternal(ClientMapProxy.java:517) at com.hazelcast.client.proxy.ClientMapProxy.put(ClientMapProxy.java:509) at com.hazelcast.client.proxy.ClientMapProxy.put(ClientMapProxy.java:312) at hzcmd.map.multi.Put.timeStep(Put.java:14) at remote.bench.marker.MetricsMarker.flatOut(MetricsMarker.java:53) at remote.bench.marker.MetricsMarker.bench(MetricsMarker.java:40) at remote.bench.BenchThread.call(BenchThread.java:38) at remote.bench.BenchThread.call(BenchThread.java:12) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: com.hazelcast.memory.NativeOutOfMemoryError: Not enough contiguous memory available! Cannot allocate 7814 KB! Max Native Memory: 1640 MB, Committed Native Memory: 1636 MB, Used Native Memory: 946 MB at com.hazelcast.memory.NativeMemoryStats.checkAndAddCommittedNative(NativeMemoryStats.java:50) at com.hazelcast.memory.StandardMemoryManager.allocate(StandardMemoryManager.java:109) at com.hazelcast.memory.ThreadLocalPoolingMemoryManager.allocateExternalBlock(ThreadLocalPoolingMemoryManager.java:248) at com.hazelcast.memory.AbstractPoolingMemoryManager.allocate(AbstractPoolingMemoryManager.java:69) at com.hazelcast.memory.PoolingMemoryManager.allocate(PoolingMemoryManager.java:140) at com.hazelcast.internal.serialization.impl.EnterpriseSerializationServiceV1.convertDataInternal(EnterpriseSerializationServiceV1.java:168) at com.hazelcast.internal.serialization.impl.EnterpriseSerializationServiceV1.convertData(EnterpriseSerializationServiceV1.java:146) at com.hazelcast.internal.serialization.impl.EnterpriseSerializationServiceV1.toDataInternal(EnterpriseSerializationServiceV1.java:97) at com.hazelcast.internal.serialization.impl.EnterpriseSerializationServiceV1.toNativeData(EnterpriseSerializationServiceV1.java:84) at com.hazelcast.internal.hidensity.impl.DefaultHiDensityRecordProcessor.toData(DefaultHiDensityRecordProcessor.java:110) at com.hazelcast.map.impl.record.HDRecordFactory.newRecord(HDRecordFactory.java:44) at com.hazelcast.map.impl.recordstore.AbstractRecordStore.createRecord(AbstractRecordStore.java:95) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.createRecord(DefaultRecordStore.java:78) at com.hazelcast.map.impl.recordstore.EnterpriseRecordStore.createRecordInternal(EnterpriseRecordStore.java:133) at com.hazelcast.map.impl.recordstore.EnterpriseRecordStore.createRecord(EnterpriseRecordStore.java:125) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.putInternal(DefaultRecordStore.java:713) at com.hazelcast.map.impl.recordstore.DefaultRecordStore.put(DefaultRecordStore.java:697) at com.hazelcast.map.impl.operation.HDPutOperation.runInternal(HDPutOperation.java:18) at com.hazelcast.map.impl.operation.HDMapOperation.evictAllAndRetry(HDMapOperation.java:227) at com.hazelcast.map.impl.operation.HDMapOperation.runInternalWithForcedEviction(HDMapOperation.java:151) at com.hazelcast.map.impl.operation.HDMapOperation.run(HDMapOperation.java:80) at com.hazelcast.spi.Operation.call(Operation.java:141) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:202) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:191) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:406) at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:433) at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:569) at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:554) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:513) at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:207) at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:60) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:64) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:123) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:103) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100) at ------ submitted from ------.(Unknown Source) at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:127) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:243) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.execute(AbstractPartitionMessageTask.java:78) at com.hazelcast.spi.impl.AbstractInvocationFuture.unblock(AbstractInvocationFuture.java:239) at com.hazelcast.spi.impl.AbstractInvocationFuture.andThen(AbstractInvocationFuture.java:215) at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:69) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:123) at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:103) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:155) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100) ```

/disk1/jenkins/workspace/kill-All/3.10-SNAPSHOT/2018_02_21-15_05_39/stable/hd/pool/big-val Failed

``` fail HzClient2HZ put hzcmd.map.multi.Put threadId=1 com.hazelcast.core.HazelcastException: com.hazelcast.memory.NativeOutOfMemoryError: Not enough contiguous memory available! Cannot allocate 7814 KB! Max Native Memory: 1640 MB, Committed Native Memory: 1636 MB, Used Native Memory: 946 MB ```

http://54.82.84.143/~jenkins/workspace/kill-All/3.10-SNAPSHOT/2018_02_21-15_05_39/stable/hd/pool/big-val
https://hazelcast-l337.ci.cloudbees.com/view/kill/job/kill-All/33/console
memory available cannot allocate kb max native memory mb committed native memory mb used native memory mb
1
158,828
24,902,258,612
IssuesEvent
2022-10-28 22:35:29
Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022
https://api.github.com/repos/Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022
closed
Admin Design - Profile (Desktop View)
documentation enhancement hacktoberfest-accepted hacktoberfest design
Design the desktop view for the admin profile page. The product upload screen will comprise of; - A section containing admin's name, display picture and stars to indicate that the profile belongs to the admin. - A section for admin's personal information, comprising of; - Input field for admin's name, email, phone number and location. - A section for password - Input field for new password and confirm password - A "Cancel and Save Changes button" Ensure you use colors and typography on the style guide to ensure consistency.
1.0
Admin Design - Profile (Desktop View) - Design the desktop view for the admin profile page. The product upload screen will comprise of; - A section containing admin's name, display picture and stars to indicate that the profile belongs to the admin. - A section for admin's personal information, comprising of; - Input field for admin's name, email, phone number and location. - A section for password - Input field for new password and confirm password - A "Cancel and Save Changes button" Ensure you use colors and typography on the style guide to ensure consistency.
non_defect
admin design profile desktop view design the desktop view for the admin profile page the product upload screen will comprise of a section containing admin s name display picture and stars to indicate that the profile belongs to the admin a section for admin s personal information comprising of input field for admin s name email phone number and location a section for password input field for new password and confirm password a cancel and save changes button ensure you use colors and typography on the style guide to ensure consistency
0
20,793
10,551,001,509
IssuesEvent
2019-10-03 12:26:35
adrijshikhar/hidden-stone
https://api.github.com/repos/adrijshikhar/hidden-stone
opened
CVE-2018-14042 (Medium) detected in bootstrap-3.4.0-3.4.1.min.js
security vulnerability
## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.4.0-3.4.1.min.js</b></p></summary> <p>Google-styled theme for Bootstrap.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.4.0-3.4.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.4.0-3.4.1/js/bootstrap.min.js</a></p> <p>Path to dependency file: /tmp/ws-scm/hidden-stone/index.html</p> <p>Path to vulnerable library: /hidden-stone/index.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.4.0-3.4.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/adrijshikhar/hidden-stone/commit/c05d10ea2ffc8054929c6ab113eb056c0b697602">c05d10ea2ffc8054929c6ab113eb056c0b697602</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14042">https://nvd.nist.gov/vuln/detail/CVE-2018-14042</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: 4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-14042 (Medium) detected in bootstrap-3.4.0-3.4.1.min.js - ## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.4.0-3.4.1.min.js</b></p></summary> <p>Google-styled theme for Bootstrap.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.4.0-3.4.1/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/todc-bootstrap/3.4.0-3.4.1/js/bootstrap.min.js</a></p> <p>Path to dependency file: /tmp/ws-scm/hidden-stone/index.html</p> <p>Path to vulnerable library: /hidden-stone/index.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.4.0-3.4.1.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/adrijshikhar/hidden-stone/commit/c05d10ea2ffc8054929c6ab113eb056c0b697602">c05d10ea2ffc8054929c6ab113eb056c0b697602</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14042">https://nvd.nist.gov/vuln/detail/CVE-2018-14042</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: 4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js google styled theme for bootstrap library home page a href path to dependency file tmp ws scm hidden stone index html path to vulnerable library hidden stone index html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
16,148
9,286,032,220
IssuesEvent
2019-03-21 09:16:02
scalableminds/webknossos
https://api.github.com/repos/scalableminds/webknossos
closed
Perform bucket picking in webworker
frontend performance rendering
With the newest refactoring of the bucket pickers, it should be relatively easy to move the code into webworkers. Bucket picking is one of the top candidates for blocking the main thread afaik. So, the perf win should be quite lucrative.
True
Perform bucket picking in webworker - With the newest refactoring of the bucket pickers, it should be relatively easy to move the code into webworkers. Bucket picking is one of the top candidates for blocking the main thread afaik. So, the perf win should be quite lucrative.
non_defect
perform bucket picking in webworker with the newest refactoring of the bucket pickers it should be relatively easy to move the code into webworkers bucket picking is one of the top candidates for blocking the main thread afaik so the perf win should be quite lucrative
0
71,955
23,867,832,389
IssuesEvent
2022-09-07 12:36:23
naev/naev
https://api.github.com/repos/naev/naev
closed
flip minigame doesn't solve with mouse
Type-Defect Priority-Low
Pyroman — 2022/09/02 it seems taiomi03's minigame disagrees that they're lit up ![mg_failure](https://user-images.githubusercontent.com/54677/188834278-93f2dc8e-7f44-4f03-88d0-0b92bb1c118a.png) was i supposed to solve it in a very specific way or something? LJ_Dude — 2022/09/02 Wait there's more taiomi content? :O Pyroman — 2022/09/02 yes there were no errors or warnings produced, but the game clearly didn't think i had succeeded or failed LJ_Dude — 2022/09/02 curious Pyroman — 2022/09/02 i tried again and i had to click before it continued, plus the success noise should probably be added to that I don't think im supposed to have the black cat mission play again Pyroman — 2022/09/02 also might be intended but our player seems to have memory loss after speaking with Scavenger as we act like it's the first time asking the question bobbens — 2022/09/02 Did you click with the mouse? I had no issue with the keyboard Pyroman — 2022/09/02 I did after, but going from minigames that continue on their own to one that i need to continue that still doesn't play anything is confusing
1.0
flip minigame doesn't solve with mouse - Pyroman — 2022/09/02 it seems taiomi03's minigame disagrees that they're lit up ![mg_failure](https://user-images.githubusercontent.com/54677/188834278-93f2dc8e-7f44-4f03-88d0-0b92bb1c118a.png) was i supposed to solve it in a very specific way or something? LJ_Dude — 2022/09/02 Wait there's more taiomi content? :O Pyroman — 2022/09/02 yes there were no errors or warnings produced, but the game clearly didn't think i had succeeded or failed LJ_Dude — 2022/09/02 curious Pyroman — 2022/09/02 i tried again and i had to click before it continued, plus the success noise should probably be added to that I don't think im supposed to have the black cat mission play again Pyroman — 2022/09/02 also might be intended but our player seems to have memory loss after speaking with Scavenger as we act like it's the first time asking the question bobbens — 2022/09/02 Did you click with the mouse? I had no issue with the keyboard Pyroman — 2022/09/02 I did after, but going from minigames that continue on their own to one that i need to continue that still doesn't play anything is confusing
defect
flip minigame doesn t solve with mouse pyroman — it seems s minigame disagrees that they re lit up was i supposed to solve it in a very specific way or something lj dude — wait there s more taiomi content o pyroman — yes there were no errors or warnings produced but the game clearly didn t think i had succeeded or failed lj dude — curious pyroman — i tried again and i had to click before it continued plus the success noise should probably be added to that i don t think im supposed to have the black cat mission play again pyroman — also might be intended but our player seems to have memory loss after speaking with scavenger as we act like it s the first time asking the question bobbens — did you click with the mouse i had no issue with the keyboard pyroman — i did after but going from minigames that continue on their own to one that i need to continue that still doesn t play anything is confusing
1
90,643
3,828,518,641
IssuesEvent
2016-03-31 06:19:27
AtlasOfLivingAustralia/fieldcapture
https://api.github.com/repos/AtlasOfLivingAustralia/fieldcapture
closed
Export project dashboard and all-project data as CSV download
priority-critical status-new type-enhancement
*migrated from:* https://code.google.com/p/ala/issues/detail?id=434 *date:* Tue Dec 10 04:57:39 2013 *author:* CoolDa...@gmail.com --- Will need the ability to export both the project output data and all project data in a downloadable CSV format. Not required until Feb-Mar 2014.
1.0
Export project dashboard and all-project data as CSV download - *migrated from:* https://code.google.com/p/ala/issues/detail?id=434 *date:* Tue Dec 10 04:57:39 2013 *author:* CoolDa...@gmail.com --- Will need the ability to export both the project output data and all project data in a downloadable CSV format. Not required until Feb-Mar 2014.
non_defect
export project dashboard and all project data as csv download migrated from date tue dec author coolda gmail com will need the ability to export both the project output data and all project data in a downloadable csv format not required until feb mar
0
74,132
24,962,185,928
IssuesEvent
2022-11-01 16:22:12
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
Adaptive MFU/MRU seems broken
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Rocky Linux Distribution Version | 8.6 Kernel Version | 4.18.0-372.26.1.el8_6.x86_64 Architecture | x86_64 OpenZFS Version | 2.1.99-1501_g07de86923 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing MRU become very large when reading a big file for the first time, leading to extreme and unexpected MFU shrink. Moreover, re-reading a previously MFU-cached file shows degraded performance. Please note that the same effect can be obtained by writing new data, with MRU again completely displacing MFU. This means MFU cache is effectively over-shadowed by less valuable MRU cache, breaking ARC adaptive nature. 
### Describe how to reproduce the problem ``` # disabling prefetch for easing debug echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable # test.img is 3.8G, test2.img is 3.6G # first read [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 2768240640 bytes (2.8 GB, 2.6 GiB) copied, 2 s, 1.4 GB/s 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 2.88245 s, 1.4 GB/s # second read to move it in MFU [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 0.78748 s, 5.1 GB/s # MFU is 3.8G ARC size (current): 98.4 % 3.8 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 100.0 % 3.8 GiB Most Recently Used (MRU) cache size: < 0.1 % 1.9 MiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.1 % 3.8 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 93.2 KiB # reading test2.img for the first time [root@localhost ~]# dd if=/tank/test/test2.img of=/dev/null bs=1M status=progress 3348103168 bytes (3.3 GB, 3.1 GiB) copied, 4 s, 837 MB/s 3639+0 records in 3639+0 records out 3815768064 bytes (3.8 GB, 3.6 GiB) copied, 4.63754 s, 823 MB/s # MFU was almost completely evicted by MRU ARC size (current): 99.7 % 3.9 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 5.3 % 210.7 MiB Most Recently Used (MRU) cache size: 94.7 % 3.7 GiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.2 % 5.1 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 96.1 KiB # relative arcstat, note how ghost lists hit was 0 [root@localhost parameters]# arcstat -f 
time,read,miss,miss%,dmis,dm%,pmis,pm%,mmis,mm%,size,c,avail,mru,mfug,mfu,mfug 1 time read miss miss% dmis dm% pmis pm% mmis mm% size c avail mru mfug mfu mfug 12:04:21 1.2K 590 50 590 50 0 0 0 0 3.9G 3.9G 3.3G 4 0 586 0 12:04:22 1.9K 937 50 937 50 0 0 1 1 3.9G 3.9G 3.3G 58 0 881 0 12:04:23 1.6K 815 51 815 51 0 0 1 1 3.9G 3.9G 3.3G 68 0 746 0 12:04:24 1.8K 892 51 892 51 0 0 1 1 3.9G 3.9G 3.3G 66 0 825 0 12:04:25 754 377 50 377 50 0 0 0 0 3.9G 3.9G 3.3G 0 0 377 0 # try re-reading the first file (test.img) to re-populate MFU, notice the degraded performance... [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 2836398080 bytes (2.8 GB, 2.6 GiB) copied, 2 s, 1.4 GB/s 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 2.73548 s, 1.5 GB/s #...and how MFU does _not_ repopulate completely, with some data "locked" by MRU ARC size (current): 100.0 % 3.9 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 93.9 % 3.6 GiB Most Recently Used (MRU) cache size: 6.1 % 242.9 MiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.2 % 5.1 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 96.1 KiB ``` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> None
1.0
Adaptive MFU/MRU seems broken - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Rocky Linux Distribution Version | 8.6 Kernel Version | 4.18.0-372.26.1.el8_6.x86_64 Architecture | x86_64 OpenZFS Version | 2.1.99-1501_g07de86923 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing MRU become very large when reading a big file for the first time, leading to extreme and unexpected MFU shrink. Moreover, re-reading a previously MFU-cached file shows degraded performance. Please note that the same effect can be obtained by writing new data, with MRU again completely displacing MFU. This means MFU cache is effectively over-shadowed by less valuable MRU cache, breaking ARC adaptive nature. 
### Describe how to reproduce the problem ``` # disabling prefetch for easing debug echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable # test.img is 3.8G, test2.img is 3.6G # first read [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 2768240640 bytes (2.8 GB, 2.6 GiB) copied, 2 s, 1.4 GB/s 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 2.88245 s, 1.4 GB/s # second read to move it in MFU [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 0.78748 s, 5.1 GB/s # MFU is 3.8G ARC size (current): 98.4 % 3.8 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 100.0 % 3.8 GiB Most Recently Used (MRU) cache size: < 0.1 % 1.9 MiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.1 % 3.8 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 93.2 KiB # reading test2.img for the first time [root@localhost ~]# dd if=/tank/test/test2.img of=/dev/null bs=1M status=progress 3348103168 bytes (3.3 GB, 3.1 GiB) copied, 4 s, 837 MB/s 3639+0 records in 3639+0 records out 3815768064 bytes (3.8 GB, 3.6 GiB) copied, 4.63754 s, 823 MB/s # MFU was almost completely evicted by MRU ARC size (current): 99.7 % 3.9 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 5.3 % 210.7 MiB Most Recently Used (MRU) cache size: 94.7 % 3.7 GiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.2 % 5.1 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 96.1 KiB # relative arcstat, note how ghost lists hit was 0 [root@localhost parameters]# arcstat -f 
time,read,miss,miss%,dmis,dm%,pmis,pm%,mmis,mm%,size,c,avail,mru,mfug,mfu,mfug 1 time read miss miss% dmis dm% pmis pm% mmis mm% size c avail mru mfug mfu mfug 12:04:21 1.2K 590 50 590 50 0 0 0 0 3.9G 3.9G 3.3G 4 0 586 0 12:04:22 1.9K 937 50 937 50 0 0 1 1 3.9G 3.9G 3.3G 58 0 881 0 12:04:23 1.6K 815 51 815 51 0 0 1 1 3.9G 3.9G 3.3G 68 0 746 0 12:04:24 1.8K 892 51 892 51 0 0 1 1 3.9G 3.9G 3.3G 66 0 825 0 12:04:25 754 377 50 377 50 0 0 0 0 3.9G 3.9G 3.3G 0 0 377 0 # try re-reading the first file (test.img) to re-populate MFU, notice the degraded performance... [root@localhost ~]# dd if=/tank/test/test.img of=/dev/null bs=1M status=progress 2836398080 bytes (2.8 GB, 2.6 GiB) copied, 2 s, 1.4 GB/s 3800+0 records in 3800+0 records out 3984588800 bytes (4.0 GB, 3.7 GiB) copied, 2.73548 s, 1.5 GB/s #...and how MFU does _not_ repopulate completely, with some data "locked" by MRU ARC size (current): 100.0 % 3.9 GiB Target size (adaptive): 100.0 % 3.9 GiB Min size (hard limit): 6.2 % 248.6 MiB Max size (high water): 16:1 3.9 GiB Most Frequently Used (MFU) cache size: 93.9 % 3.6 GiB Most Recently Used (MRU) cache size: 6.1 % 242.9 MiB Metadata cache size (hard limit): 75.0 % 2.9 GiB Metadata cache size (current): 0.2 % 5.1 MiB Dnode cache size (hard limit): 10.0 % 298.4 MiB Dnode cache size (current): < 0.1 % 96.1 KiB ``` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> None
defect
adaptive mfu mru seems broken thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name rocky linux distribution version kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing mru become very large when reading a big file for the first time leading to extreme and unexpected mfu shrink moreover re reading a previously mfu cached file shows degraded performance please note that the same effect can be obtained by writing new data with mru again completely displacing mfu this means mfu cache is effectively over shadowed by less valuable mru cache breaking arc adaptive nature describe how to reproduce the problem disabling prefetch for easing debug echo sys module zfs parameters zfs prefetch disable test img is img is first read dd if tank test test img of dev null bs status progress bytes gb gib copied s gb s records in records out bytes gb gib copied s gb s second read to move it in mfu dd if tank test test img of dev null bs status progress records in records out bytes gb gib copied s gb s mfu is arc size current gib target size adaptive gib min size hard limit mib max size high water gib most frequently used mfu cache size gib most recently used mru cache size mib metadata cache size hard limit gib metadata cache size current mib dnode cache size hard limit mib dnode cache size current kib reading img for the first time dd if tank test img of dev null bs status progress bytes gb gib copied s mb s records in records out bytes gb gib copied s mb s mfu was almost completely evicted by mru arc size current gib target size adaptive gib min size hard limit mib max size high water gib 
most frequently used mfu cache size mib most recently used mru cache size gib metadata cache size hard limit gib metadata cache size current mib dnode cache size hard limit mib dnode cache size current kib relative arcstat note how ghost lists hit was arcstat f time read miss miss dmis dm pmis pm mmis mm size c avail mru mfug mfu mfug time read miss miss dmis dm pmis pm mmis mm size c avail mru mfug mfu mfug try re reading the first file test img to re populate mfu notice the degraded performance dd if tank test test img of dev null bs status progress bytes gb gib copied s gb s records in records out bytes gb gib copied s gb s and how mfu does not repopulate completely with some data locked by mru arc size current gib target size adaptive gib min size hard limit mib max size high water gib most frequently used mfu cache size gib most recently used mru cache size mib metadata cache size hard limit gib metadata cache size current mib dnode cache size hard limit mib dnode cache size current kib include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with none
1
7,745
2,610,631,129
IssuesEvent
2015-02-26 21:31:50
alistairreilly/open-ig
https://api.github.com/repos/alistairreilly/open-ig
closed
doesn't start
auto-migrated Priority-Medium Type-Defect
``` the game won't start on my 64-Linux machine i tried several vm's sun-java(32 and 64 bit) and openjdk see attachment for for filelist, output and system specs ``` Original issue reported on code.google.com by `punx...@gmail.com` on 8 Jan 2011 at 11:51 Attachments: * [infos](https://storage.googleapis.com/google-code-attachments/open-ig/issue-5/comment-0/infos)
1.0
doesn't start - ``` the game won't start on my 64-Linux machine i tried several vm's sun-java(32 and 64 bit) and openjdk see attachment for for filelist, output and system specs ``` Original issue reported on code.google.com by `punx...@gmail.com` on 8 Jan 2011 at 11:51 Attachments: * [infos](https://storage.googleapis.com/google-code-attachments/open-ig/issue-5/comment-0/infos)
defect
doesn t start the game won t start on my linux machine i tried several vm s sun java and bit and openjdk see attachment for for filelist output and system specs original issue reported on code google com by punx gmail com on jan at attachments
1
223,711
7,460,025,679
IssuesEvent
2018-03-30 17:50:38
MRN-Code/coinstac
https://api.github.com/repos/MRN-Code/coinstac
closed
Pipeline: remote cannot handle mixed pipeline specs
bug high priority pipeline
# Problem If the pipeline spec contains both a local and decentralized comp the pipeline explooooods # Tasks Fix it
1.0
Pipeline: remote cannot handle mixed pipeline specs - # Problem If the pipeline spec contains both a local and decentralized comp the pipeline explooooods # Tasks Fix it
non_defect
pipeline remote cannot handle mixed pipeline specs problem if the pipeline spec contains both a local and decentralized comp the pipeline explooooods tasks fix it
0
576,500
17,088,339,307
IssuesEvent
2021-07-08 14:27:11
cloudskiff/driftctl
https://api.github.com/repos/cloudskiff/driftctl
opened
Check if supportedType() is used when possible
good first issue priority/2
**Description** for example, here we must use supportedType instead of hard-writing the resource type: ``` func (e *VPCSecurityGroupEnumerator) SupportedType() resource.ResourceType { return resourceaws.AwsSecurityGroupResourceType } func (e *VPCSecurityGroupEnumerator) Enumerate() ([]resource.Resource, error) { securityGroups, _, err := e.repository.ListAllSecurityGroups() if err != nil { return nil, remoteerror.NewResourceEnumerationError(err, resourceaws.AwsSecurityGroupResourceType) } } ```
1.0
Check if supportedType() is used when possible - **Description** for example, here we must use supportedType instead of hard-writing the resource type: ``` func (e *VPCSecurityGroupEnumerator) SupportedType() resource.ResourceType { return resourceaws.AwsSecurityGroupResourceType } func (e *VPCSecurityGroupEnumerator) Enumerate() ([]resource.Resource, error) { securityGroups, _, err := e.repository.ListAllSecurityGroups() if err != nil { return nil, remoteerror.NewResourceEnumerationError(err, resourceaws.AwsSecurityGroupResourceType) } ```
non_defect
check if supportedtype is used when possible description for example here we must use supportedtype instead of hard writing the resource type func e vpcsecuritygroupenumerator supportedtype resource resourcetype return resourceaws awssecuritygroupresourcetype func e vpcsecuritygroupenumerator enumerate resource resource error securitygroups err e repository listallsecuritygroups if err nil return nil remoteerror newresourceenumerationerror err resourceaws awssecuritygroupresourcetype
0
9,772
2,615,174,470
IssuesEvent
2015-03-01 06:57:28
chrsmith/reaver-wps
https://api.github.com/repos/chrsmith/reaver-wps
opened
WPA2-PSK Cracking by BackTrack5
auto-migrated Priority-Triage Type-Defect
``` Dear Friends! Aircrack-ng is failed to find the Password from Dictionary, while capturing large amount of Data. (2). Does Reaver failed to Capture the Password of WPA2-PSk router. appling repeating attempts on WPA2-PSK whose radio signals are 802.11g and SegamCom router , Reaver doesnot go further from Trying Pin 12345670 and it stop there. Viewing many articles , still i saw its impossible to crack WPA2-PSK. While i am using Tenda W311u+ 3.5 DBi antena , doesnot slove issue. from this Adapter i attacked Successfully on WPA-PSK (802.11gn radio Signals). Still wash show me That WPS Locked "No" WPA2-PSk router . but issue is same. Any One have Idea. Note:(I have triend all methods that were written in googleCodes) ``` Original issue reported on code.google.com by `farrukhb...@gmail.com` on 30 Dec 2013 at 3:51
1.0
WPA2-PSK Cracking by BackTrack5 - ``` Dear Friends! Aircrack-ng is failed to find the Password from Dictionary, while capturing large amount of Data. (2). Does Reaver failed to Capture the Password of WPA2-PSk router. appling repeating attempts on WPA2-PSK whose radio signals are 802.11g and SegamCom router , Reaver doesnot go further from Trying Pin 12345670 and it stop there. Viewing many articles , still i saw its impossible to crack WPA2-PSK. While i am using Tenda W311u+ 3.5 DBi antena , doesnot slove issue. from this Adapter i attacked Successfully on WPA-PSK (802.11gn radio Signals). Still wash show me That WPS Locked "No" WPA2-PSk router . but issue is same. Any One have Idea. Note:(I have triend all methods that were written in googleCodes) ``` Original issue reported on code.google.com by `farrukhb...@gmail.com` on 30 Dec 2013 at 3:51
defect
psk cracking by dear friends aircrack ng is failed to find the password from dictionary while capturing large amount of data does reaver failed to capture the password of psk router appling repeating attempts on psk whose radio signals are and segamcom router reaver doesnot go further from trying pin and it stop there viewing many articles still i saw its impossible to crack psk while i am using tenda dbi antena doesnot slove issue from this adapter i attacked successfully on wpa psk radio signals still wash show me that wps locked no psk router but issue is same any one have idea note i have triend all methods that were written in googlecodes original issue reported on code google com by farrukhb gmail com on dec at
1
67,450
9,048,547,355
IssuesEvent
2019-02-12 00:31:59
kyma-project/website
https://api.github.com/repos/kyma-project/website
opened
Cropped bad thumbnails of Kyma on social websites
area/documentation
<!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** <!-- Provide a clear and concise description of the problem. Describe where it appears, when it occurred, and what it affects. --> When you paste https://kyma-project.io link into Facebook and social websites, the thumbnail is a cropped bad image of Kyma logo. Looks very unprofessional. <!-- Provide relevant technical details such as the browser name and version, or the operating system. --> **Expected result** <!-- Describe what you expect to happen. --> A scaled, proper thumbnail of Kyma logo. **Actual result** ![screen shot 2019-02-12 at 01 07 24](https://user-images.githubusercontent.com/1930204/52603123-bb45d000-2e65-11e9-8ad7-7b0ccfa18ef4.png) <!-- Describe what happens instead. --> **Steps to reproduce** - Paste a link of https://kyma-project.io in Facebook post. - Checkout the result thumbnail <!-- List the steps to follow to reproduce the bug. Attach any files, links, code samples, or screenshots that could help in investigating the problem. -->
1.0
Cropped bad thumbnails of Kyma on social websites - <!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** <!-- Provide a clear and concise description of the problem. Describe where it appears, when it occurred, and what it affects. --> When you paste https://kyma-project.io link into Facebook and social websites, the thumbnail is a cropped bad image of Kyma logo. Looks very unprofessional. <!-- Provide relevant technical details such as the browser name and version, or the operating system. --> **Expected result** <!-- Describe what you expect to happen. --> A scaled, proper thumbnail of Kyma logo. **Actual result** ![screen shot 2019-02-12 at 01 07 24](https://user-images.githubusercontent.com/1930204/52603123-bb45d000-2e65-11e9-8ad7-7b0ccfa18ef4.png) <!-- Describe what happens instead. --> **Steps to reproduce** - Paste a link of https://kyma-project.io in Facebook post. - Checkout the result thumbnail <!-- List the steps to follow to reproduce the bug. Attach any files, links, code samples, or screenshots that could help in investigating the problem. -->
non_defect
cropped bad thumbnails of kyma on social websites thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description provide a clear and concise description of the problem describe where it appears when it occurred and what it affects when you paste link into facebook and social websites the thumbnail is a cropped bad image of kyma logo looks very unprofessional expected result a scaled proper thumbnail of kyma logo actual result steps to reproduce paste a link of in facebook post checkout the result thumbnail
0
61,715
17,023,762,781
IssuesEvent
2021-07-03 03:42:56
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
find the good library of mapnik
Component: mod_tile Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 12.36pm, Friday, 9th December 2011]** Hello when we compile mod_tile on debian stable, that work fine. On sid it's an other story, ld failed with message : cannot find -lmapnik. It's because mapnik has evolved to mapnik2 ... If we change Makefile : - RENDER_LDFLAGS += -lmapnik -Liniparser3.0b -liniparser + RENDER_LDFLAGS += -lmapnik2 -Liniparser3.0b -liniparser that work fine ... but it's not very easy for all people. I propose to change the Makefile with :
- RENDER_LDFLAGS += -lmapnik -Liniparser3.0b -liniparser
+ RENDER_LDFLAGS += -Liniparser3.0b -liniparser
+ MAPNIK2 = $(shell ldconfig -p | grep -c mapnik2)
+ ifeq ($(MAPNIK2), 0)
+ RENDER_LDFLAGS += -lmapnik
+ else
+ RENDER_LDFLAGS += -lmapnik2
+ endif
1.0
find the good library of mapnik - **[Submitted to the original trac issue database at 12.36pm, Friday, 9th December 2011]** Hello when we compile mod_tile on debian stable, that work fine. On sid it's an other story, ld failed with message : cannot find -lmapnik. It's because mapnik has evolved to mapnik2 ... If we change Makefile : - RENDER_LDFLAGS += -lmapnik -Liniparser3.0b -liniparser + RENDER_LDFLAGS += -lmapnik2 -Liniparser3.0b -liniparser that work fine ... but it's not very easy for all people. I propose to change the Makefile with : - RENDER_LDFLAGS += -lmapnik -Liniparser3.0b -liniparser + RENDER_LDFLAGS += -Liniparser3.0b -liniparser + MAPNIK2 = $(shell ldconfig -p | grep -c mapnik2) + ifeq ($(MAPNIK2), 0) + RENDER_LDFLAGS += -lmapnik + else + RENDER_LDFLAGS += -lmapnik2 + endif
defect
find the good library of mapnik hello when we compile mod tile on debian stable that work fine on sid it s an other story ld failed with message cannot find lmapnik it s because mapnik has evolved to if we change makefile render ldflags lmapnik liniparser render ldflags liniparser that work fine but it s not very easy for all people i propose to change the makefile with render ldflags lmapnik liniparser render ldflags liniparser shell ldconfig p grep c ifeq render ldflags lmapnik else render ldflags endif
1
131,861
5,166,436,911
IssuesEvent
2017-01-17 16:13:46
snaiperskaya96/test-import-repo
https://api.github.com/repos/snaiperskaya96/test-import-repo
opened
New "Tasks" page for viewing scheduled tasks
Accepted Enhancement High Priority
https://trello.com/c/OadTHfNu/39-new-tasks-page-for-viewing-scheduled-tasks This page should display a list of the currently scheduled tasks.
1.0
New "Tasks" page for viewing scheduled tasks - https://trello.com/c/OadTHfNu/39-new-tasks-page-for-viewing-scheduled-tasks This page should display a list of the currently scheduled tasks.
non_defect
new tasks page for viewing scheduled tasks this page should display a list of the currently scheduled tasks
0
275,965
30,312,519,323
IssuesEvent
2023-07-10 13:40:50
cedriking/carbon-api
https://api.github.com/repos/cedriking/carbon-api
opened
puppeteer-20.8.0.tgz: 1 vulnerabilities (highest severity is: 5.3)
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>puppeteer-20.8.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/word-wrap/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/cedriking/carbon-api/commit/e352d4c0d4a298c84cc6efc979e79d4cf91e5f15">e352d4c0d4a298c84cc6efc979e79d4cf91e5f15</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (puppeteer version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2023-26115](https://www.mend.io/vulnerability-database/CVE-2023-26115) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | word-wrap-1.2.3.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' 
width=19 height=20> CVE-2023-26115</summary> ### Vulnerable Library - <b>word-wrap-1.2.3.tgz</b></p> <p>Wrap words to a specified length.</p> <p>Library home page: <a href="https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz">https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/word-wrap/package.json</p> <p> Dependency Hierarchy: - puppeteer-20.8.0.tgz (Root Library) - browsers-1.4.3.tgz - proxy-agent-6.2.1.tgz - pac-proxy-agent-6.0.3.tgz - pac-resolver-6.0.2.tgz - degenerator-4.0.4.tgz - escodegen-1.14.3.tgz - optionator-0.8.3.tgz - :x: **word-wrap-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/cedriking/carbon-api/commit/e352d4c0d4a298c84cc6efc979e79d4cf91e5f15">e352d4c0d4a298c84cc6efc979e79d4cf91e5f15</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> All versions of the package word-wrap are vulnerable to Regular Expression Denial of Service (ReDoS) due to the usage of an insecure regular expression within the result variable. <p>Publish Date: 2023-06-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26115>CVE-2023-26115</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
puppeteer-20.8.0.tgz: 1 vulnerabilities (highest severity is: 5.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>puppeteer-20.8.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/word-wrap/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/cedriking/carbon-api/commit/e352d4c0d4a298c84cc6efc979e79d4cf91e5f15">e352d4c0d4a298c84cc6efc979e79d4cf91e5f15</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (puppeteer version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2023-26115](https://www.mend.io/vulnerability-database/CVE-2023-26115) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | word-wrap-1.2.3.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' 
width=19 height=20> CVE-2023-26115</summary> ### Vulnerable Library - <b>word-wrap-1.2.3.tgz</b></p> <p>Wrap words to a specified length.</p> <p>Library home page: <a href="https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz">https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/word-wrap/package.json</p> <p> Dependency Hierarchy: - puppeteer-20.8.0.tgz (Root Library) - browsers-1.4.3.tgz - proxy-agent-6.2.1.tgz - pac-proxy-agent-6.0.3.tgz - pac-resolver-6.0.2.tgz - degenerator-4.0.4.tgz - escodegen-1.14.3.tgz - optionator-0.8.3.tgz - :x: **word-wrap-1.2.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/cedriking/carbon-api/commit/e352d4c0d4a298c84cc6efc979e79d4cf91e5f15">e352d4c0d4a298c84cc6efc979e79d4cf91e5f15</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> All versions of the package word-wrap are vulnerable to Regular Expression Denial of Service (ReDoS) due to the usage of an insecure regular expression within the result variable. <p>Publish Date: 2023-06-22 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26115>CVE-2023-26115</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_defect
puppeteer tgz vulnerabilities highest severity is vulnerable library puppeteer tgz path to dependency file package json path to vulnerable library node modules word wrap package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in puppeteer version remediation available medium word wrap tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library word wrap tgz wrap words to a specified length library home page a href path to dependency file package json path to vulnerable library node modules word wrap package json dependency hierarchy puppeteer tgz root library browsers tgz proxy agent tgz pac proxy agent tgz pac resolver tgz degenerator tgz escodegen tgz optionator tgz x word wrap tgz vulnerable library found in head commit a href found in base branch main vulnerability details all versions of the package word wrap are vulnerable to regular expression denial of service redos due to the usage of an insecure regular expression within the result variable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href step up your open source security game with mend
0
734,325
25,344,674,746
IssuesEvent
2022-11-19 04:01:26
grpc/grpc
https://api.github.com/repos/grpc/grpc
opened
tcpdump determines that a message is received, but stream read is not responding
kind/bug lang/c++ priority/P2 untriaged
gdb prints this stream object : ``` stream_ = { <grpc::ServerAsyncReaderWriterInterface<rpc::CServerFrame, rpc::CServerFrame>> = { <grpc::internal::ServerAsyncStreamingInterface> = { _vptr.ServerAsyncStreamingInterface = 0x1f9c1d0 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+16> }, <grpc::internal::AsyncWriterInterface<rpc::CServerFrame>> = { _vptr.AsyncWriterInterface = 0x1f9c228 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+104> }, <grpc::internal::AsyncReaderInterface<rpc::CServerFrame>> = { _vptr.AsyncReaderInterface = 0x1f9c258 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+152> }, <No data fields>}, members of grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>: call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, --Type <RET> for more, q to quit, c to continue without paging-- max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, ctx_ = 0x9894c48, meta_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c008 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> }, <No data fields>}, <grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 159993968, metadata_map_ = 0xb, initial_metadata_ = 0x6156746375727473, maybe_compression_level_ = { is_set = false, level = 570425344 } }, <grpc::internal::CallNoOp<2>> = {<No data fields>}, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of 
grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895050, return_tag_ = 0x9895050, call_ = { call_hook_ = 0x0, cq_ = 0x0, call_ = 0x0, max_receive_message_size_ = -1, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 times>} }, current_interceptor_index_ = 0, reverse_ = false, ran_hijacking_interceptor_ = false, call_ = 0x0, ops_ = 0x0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { --Type <RET> for more, q to quit, c to continue without paging-- static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x9895110, _M_const_object = 0x9895110, _M_function_pointer = 0x9895110, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x9895110, this adjustment 9 }, _M_pod_data = "\020Q\211\t\000\000\000\000\t\000\000\000\000\000\000" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x28d0065 <strerror_pool+1189> }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x98950e0, _M_const_object = 0x98950e0, _M_function_pointer = 0x98950e0, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x98950e0 }, 
_M_pod_data = "\340P\211\t", '\000' <repeats 11 times> }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x9895160 }, send_initial_metadata_ = 0xa, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = 4 }, read_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c180 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpRecvMessage<rpc::CServerFrame>, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> --Type <RET> for more, q to quit, c to continue without paging-- }, <No data fields>}, <grpc::internal::CallOpRecvMessage<rpc::CServerFrame>> = { got_message = false, message_ = 0x255566d0, recv_buf_ = { buffer_ = 0x0 }, allow_not_getting_message_ = false, hijacked_ = false, hijacked_recv_message_failed_ = false }, <grpc::internal::CallNoOp<2>> = {<No data fields>}, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpRecvMessage<rpc::CServerFrame>, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x98951b0, return_tag_ = 0x8b1ff60, call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for 
grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false, false, false, false, false, false, false, false, false, true, false, false, false} }, current_interceptor_index_ = 0, reverse_ = true, ran_hijacking_interceptor_ = false, call_ = 0x98951e8, ops_ = 0x98951b0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x22000000028dcf40, _M_const_object = 0x22000000028dcf40, _M_function_pointer = 0x22000000028dcf40, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x22000000028dcf40, this adjustment 2 }, _M_pod_data = "@ύ\002\000\000\000\"\002\000\000\000\000\000\000" }, _M_manager = 0x0 --Type <RET> for more, q to quit, c to continue without paging-- }, members of std::function<void()>: _M_invoker = 0x98952c0 }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x98952a0, _M_const_object = 0x98952a0, _M_function_pointer = 0x98952a0, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x98952a0, this adjustment 1 }, _M_pod_data = "\240R\211\t\000\000\000\000\001\000\000\000\000\000\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x22000000028dcf40 }, send_initial_metadata_ = 0x1, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = 
false }, write_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c058 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> }, <No data fields>}, <grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 0, metadata_map_ = 0x9894d38, initial_metadata_ = 0x0, maybe_compression_level_ = { is_set = false, level = GRPC_COMPRESS_LEVEL_NONE } --Type <RET> for more, q to quit, c to continue without paging-- }, <grpc::internal::CallOpSendMessage> = { msg_ = 0x0, hijacked_ = false, failed_send_ = false, send_buf_ = { buffer_ = 0x0 }, write_options_ = { flags_ = 0, last_message_ = false }, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x988e930, _M_const_object = 0x988e930, _M_function_pointer = 0x988e930, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x988e930, this adjustment 159994160 }, _M_pod_data = "0\351\210\t\000\000\000\000\060Q\211\t\000\000\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x0 } }, <grpc::internal::CallOpServerSendStatus> = { hijacked_ = false, send_status_available_ = false, send_status_code_ = GRPC_STATUS_OK, send_error_details_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 
0x9895390 "" }, _M_string_length = 0, { _M_local_buf = "\000omponents\000\002\000\000\000", _M_allocated_capacity = 7954885741726494464 } }, send_error_message_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: --Type <RET> for more, q to quit, c to continue without paging-- _M_p = 0x98953b0 "" }, _M_string_length = 0, { _M_local_buf = "\000U\211\t\000\000\000\000\310\351\210\t\000\000\000", _M_allocated_capacity = 159995136 } }, trailing_metadata_count_ = 159994880, metadata_map_ = 0x9895450, trailing_metadata_ = 0x98953e0, error_message_slice_ = { refcount = 0x9, data = { refcounted = { length = 7881667067788620391, bytes = 0x7ffc03090065 "" }, inlined = { length = 103 'g', bytes = "roupName\000\t\003\374\177\000\000\003\000\000\000\000\000\000" } } } }, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895308, return_tag_ = 0x8b1fda0, call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 times>} }, current_interceptor_index_ = 0, reverse_ = true, ran_hijacking_interceptor_ = false, call_ = 
0x9895408, ops_ = 0x9895308, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { --Type <RET> for more, q to quit, c to continue without paging-- static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x4, _M_const_object = 0x4, _M_function_pointer = 0x4, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x4, this adjustment 8030559152816742772 }, _M_pod_data = "\004\000\000\000\000\000\000\000tags\000Gro" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x2 }, send_message_ = 0x0, fail_send_message_ = 0x9895341, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x0, _M_const_object = 0x0, _M_function_pointer = 0x0, _M_member_pointer = NULL }, _M_pod_data = "\000\000\000\000\000\000\000\000\340\373\n\003\374\177\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x11c6b9d50 }, send_initial_metadata_ = 0x9894d38, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = true }, finish_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c0a8 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> --Type <RET> for more, q to quit, c to continue without paging-- }, <No data fields>}, 
<grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 0, metadata_map_ = 0x9895630, initial_metadata_ = 0x98955e0, maybe_compression_level_ = { is_set = false, level = GRPC_COMPRESS_LEVEL_NONE } }, <grpc::internal::CallOpServerSendStatus> = { hijacked_ = false, send_status_available_ = false, send_status_code_ = GRPC_STATUS_OK, send_error_details_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 0x9895570 "" }, _M_string_length = 0, { _M_local_buf = "\000asePos\000@ύ\002\000\000\000\"", _M_allocated_capacity = 32492013411852544 } }, send_error_message_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 0x9895590 "" }, _M_string_length = 0, { _M_local_buf = "\000Y\211\t\000\000\000\000\240Y\211\t\000\000\000", _M_allocated_capacity = 159996160 } }, trailing_metadata_count_ = 0, metadata_map_ = 0x0, trailing_metadata_ = 0x98955c0, error_message_slice_ = { refcount = 0xa, data = { refcounted = { length = 7598819715831919477, bytes = 0x2200000002007974 <error: Cannot access memory at address 0x2200000002007974> }, inlined = { length = 117 'u', --Type <RET> for more, q to quit, c to continue without paging-- bytes = "serEntity\000\002\000\000\000\"\006\000\000\000\000\000\000" } } } }, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, 
grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895528, return_tag_ = 0x9895528, call_ = { call_hook_ = 0x0, cq_ = 0x0, call_ = 0x0, max_receive_message_size_ = -1, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 times>} }, current_interceptor_index_ = 0, reverse_ = false, ran_hijacking_interceptor_ = false, call_ = 0x0, ops_ = 0x0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0xf, _M_const_object = 0xf, _M_function_pointer = 0xf, _M_member_pointer = &virtual table offset 14, this adjustment 8315446253158941026 }, _M_pod_data = "\017\000\000\000\000\000\000\000bAddOffs" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x4 }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, --Type <RET> for more, q to quit, c to continue without paging-- serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x9895720, _M_const_object = 0x9895720, _M_function_pointer = 0x9895720, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x9895720, this adjustment 159995600 }, _M_pod_data = " W\211\t\000\000\000\000\320V\211\t\000\000\000" }, _M_manager = 0x0 }, members of 
std::function<grpc::Status(void const*)>: _M_invoker = 0x3 }, send_initial_metadata_ = 0x726964 <fmt::v7::detail::write<char, fmt::v7::detail::buffer_appender<char>, float, 0>(fmt::v7::detail::buffer_appender<char>, float, fmt::v7::basic_format_specs<char>, fmt::v7::detail::locale_ref)+644>, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = 117 } }, ```
1.0
tcpdump determines that a message is received, but stream read is not responding - gdb prints this stream object : ``` stream_ = { <grpc::ServerAsyncReaderWriterInterface<rpc::CServerFrame, rpc::CServerFrame>> = { <grpc::internal::ServerAsyncStreamingInterface> = { _vptr.ServerAsyncStreamingInterface = 0x1f9c1d0 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+16> }, <grpc::internal::AsyncWriterInterface<rpc::CServerFrame>> = { _vptr.AsyncWriterInterface = 0x1f9c228 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+104> }, <grpc::internal::AsyncReaderInterface<rpc::CServerFrame>> = { _vptr.AsyncReaderInterface = 0x1f9c258 <vtable for grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>+152> }, <No data fields>}, members of grpc::ServerAsyncReaderWriter<rpc::CServerFrame, rpc::CServerFrame>: call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, --Type <RET> for more, q to quit, c to continue without paging-- max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, ctx_ = 0x9894c48, meta_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c008 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> }, <No data fields>}, <grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 159993968, metadata_map_ = 0xb, initial_metadata_ = 0x6156746375727473, maybe_compression_level_ = { is_set = false, level = 570425344 } }, <grpc::internal::CallNoOp<2>> = {<No data fields>}, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, 
<grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895050, return_tag_ = 0x9895050, call_ = { call_hook_ = 0x0, cq_ = 0x0, call_ = 0x0, max_receive_message_size_ = -1, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 times>} }, current_interceptor_index_ = 0, reverse_ = false, ran_hijacking_interceptor_ = false, call_ = 0x0, ops_ = 0x0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { --Type <RET> for more, q to quit, c to continue without paging-- static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x9895110, _M_const_object = 0x9895110, _M_function_pointer = 0x9895110, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x9895110, this adjustment 9 }, _M_pod_data = "\020Q\211\t\000\000\000\000\t\000\000\000\000\000\000" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x28d0065 <strerror_pool+1189> }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x98950e0, _M_const_object = 0x98950e0, _M_function_pointer = 0x98950e0, _M_member_pointer = (void 
(std::_Undefined_class::*)(std::_Undefined_class * const)) 0x98950e0 }, _M_pod_data = "\340P\211\t", '\000' <repeats 11 times> }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x9895160 }, send_initial_metadata_ = 0xa, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = 4 }, read_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c180 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpRecvMessage<rpc::CServerFrame>, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> --Type <RET> for more, q to quit, c to continue without paging-- }, <No data fields>}, <grpc::internal::CallOpRecvMessage<rpc::CServerFrame>> = { got_message = false, message_ = 0x255566d0, recv_buf_ = { buffer_ = 0x0 }, allow_not_getting_message_ = false, hijacked_ = false, hijacked_recv_message_failed_ = false }, <grpc::internal::CallNoOp<2>> = {<No data fields>}, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpRecvMessage<rpc::CServerFrame>, grpc::internal::CallNoOp<2>, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x98951b0, return_tag_ = 0x8b1ff60, call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { 
<grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false, false, false, false, false, false, false, false, false, true, false, false, false} }, current_interceptor_index_ = 0, reverse_ = true, ran_hijacking_interceptor_ = false, call_ = 0x98951e8, ops_ = 0x98951b0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x22000000028dcf40, _M_const_object = 0x22000000028dcf40, _M_function_pointer = 0x22000000028dcf40, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x22000000028dcf40, this adjustment 2 }, _M_pod_data = "@ύ\002\000\000\000\"\002\000\000\000\000\000\000" }, _M_manager = 0x0 --Type <RET> for more, q to quit, c to continue without paging-- }, members of std::function<void()>: _M_invoker = 0x98952c0 }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x98952a0, _M_const_object = 0x98952a0, _M_function_pointer = 0x98952a0, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x98952a0, this adjustment 1 }, _M_pod_data = "\240R\211\t\000\000\000\000\001\000\000\000\000\000\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x22000000028dcf40 }, send_initial_metadata_ = 0x1, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 
0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = false }, write_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c058 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >+16> }, <No data fields>}, <grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 0, metadata_map_ = 0x9894d38, initial_metadata_ = 0x0, maybe_compression_level_ = { is_set = false, level = GRPC_COMPRESS_LEVEL_NONE } --Type <RET> for more, q to quit, c to continue without paging-- }, <grpc::internal::CallOpSendMessage> = { msg_ = 0x0, hijacked_ = false, failed_send_ = false, send_buf_ = { buffer_ = 0x0 }, write_options_ = { flags_ = 0, last_message_ = false }, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x988e930, _M_const_object = 0x988e930, _M_function_pointer = 0x988e930, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x988e930, this adjustment 159994160 }, _M_pod_data = "0\351\210\t\000\000\000\000\060Q\211\t\000\000\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x0 } }, <grpc::internal::CallOpServerSendStatus> = { hijacked_ = false, send_status_available_ = false, send_status_code_ = GRPC_STATUS_OK, send_error_details_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of 
std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 0x9895390 "" }, _M_string_length = 0, { _M_local_buf = "\000omponents\000\002\000\000\000", _M_allocated_capacity = 7954885741726494464 } }, send_error_message_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: --Type <RET> for more, q to quit, c to continue without paging-- _M_p = 0x98953b0 "" }, _M_string_length = 0, { _M_local_buf = "\000U\211\t\000\000\000\000\310\351\210\t\000\000\000", _M_allocated_capacity = 159995136 } }, trailing_metadata_count_ = 159994880, metadata_map_ = 0x9895450, trailing_metadata_ = 0x98953e0, error_message_slice_ = { refcount = 0x9, data = { refcounted = { length = 7881667067788620391, bytes = 0x7ffc03090065 "" }, inlined = { length = 103 'g', bytes = "roupName\000\t\003\374\177\000\000\003\000\000\000\000\000\000" } } } }, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, <grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpSendMessage, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895308, return_tag_ = 0x8b1fda0, call_ = { call_hook_ = 0x9297b80, cq_ = 0x8a82690, call_ = 0x923c060, max_receive_message_size_ = 134217728, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 
times>} }, current_interceptor_index_ = 0, reverse_ = true, ran_hijacking_interceptor_ = false, call_ = 0x9895408, ops_ = 0x9895308, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { --Type <RET> for more, q to quit, c to continue without paging-- static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x4, _M_const_object = 0x4, _M_function_pointer = 0x4, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x4, this adjustment 8030559152816742772 }, _M_pod_data = "\004\000\000\000\000\000\000\000tags\000Gro" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x2 }, send_message_ = 0x0, fail_send_message_ = 0x9895341, orig_send_message_ = 0x0, serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x0, _M_const_object = 0x0, _M_function_pointer = 0x0, _M_member_pointer = NULL }, _M_pod_data = "\000\000\000\000\000\000\000\000\340\373\n\003\374\177\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x11c6b9d50 }, send_initial_metadata_ = 0x9894d38, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = true }, finish_ops_ = { <grpc::internal::CallOpSetInterface> = { <grpc::internal::CompletionQueueTag> = { _vptr.CompletionQueueTag = 0x1f9c0a8 <vtable for grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, 
grpc::internal::CallNoOp<6> >+16> --Type <RET> for more, q to quit, c to continue without paging-- }, <No data fields>}, <grpc::internal::CallOpSendInitialMetadata> = { hijacked_ = false, send_ = false, flags_ = 0, initial_metadata_count_ = 0, metadata_map_ = 0x9895630, initial_metadata_ = 0x98955e0, maybe_compression_level_ = { is_set = false, level = GRPC_COMPRESS_LEVEL_NONE } }, <grpc::internal::CallOpServerSendStatus> = { hijacked_ = false, send_status_available_ = false, send_status_code_ = GRPC_STATUS_OK, send_error_details_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 0x9895570 "" }, _M_string_length = 0, { _M_local_buf = "\000asePos\000@ύ\002\000\000\000\"", _M_allocated_capacity = 32492013411852544 } }, send_error_message_ = { static npos = 18446744073709551615, _M_dataplus = { <std::allocator<char>> = { <__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, members of std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider: _M_p = 0x9895590 "" }, _M_string_length = 0, { _M_local_buf = "\000Y\211\t\000\000\000\000\240Y\211\t\000\000\000", _M_allocated_capacity = 159996160 } }, trailing_metadata_count_ = 0, metadata_map_ = 0x0, trailing_metadata_ = 0x98955c0, error_message_slice_ = { refcount = 0xa, data = { refcounted = { length = 7598819715831919477, bytes = 0x2200000002007974 <error: Cannot access memory at address 0x2200000002007974> }, inlined = { length = 117 'u', --Type <RET> for more, q to quit, c to continue without paging-- bytes = "serEntity\000\002\000\000\000\"\006\000\000\000\000\000\000" } } } }, <grpc::internal::CallNoOp<3>> = {<No data fields>}, <grpc::internal::CallNoOp<4>> = {<No data fields>}, <grpc::internal::CallNoOp<5>> = {<No data fields>}, 
<grpc::internal::CallNoOp<6>> = {<No data fields>}, members of grpc::internal::CallOpSet<grpc::internal::CallOpSendInitialMetadata, grpc::internal::CallOpServerSendStatus, grpc::internal::CallNoOp<3>, grpc::internal::CallNoOp<4>, grpc::internal::CallNoOp<5>, grpc::internal::CallNoOp<6> >: core_cq_tag_ = 0x9895528, return_tag_ = 0x9895528, call_ = { call_hook_ = 0x0, cq_ = 0x0, call_ = 0x0, max_receive_message_size_ = -1, client_rpc_info_ = 0x0, server_rpc_info_ = 0x0 }, done_intercepting_ = false, interceptor_methods_ = { <grpc::experimental::InterceptorBatchMethods> = { _vptr.InterceptorBatchMethods = 0x1f9beb0 <vtable for grpc::internal::InterceptorBatchMethodsImpl+16> }, members of grpc::internal::InterceptorBatchMethodsImpl: hooks_ = { _M_elems = {false <repeats 13 times>} }, current_interceptor_index_ = 0, reverse_ = false, ran_hijacking_interceptor_ = false, call_ = 0x0, ops_ = 0x0, callback_ = { <std::_Maybe_unary_or_binary_function<void>> = {<No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0xf, _M_const_object = 0xf, _M_function_pointer = 0xf, _M_member_pointer = &virtual table offset 14, this adjustment 8315446253158941026 }, _M_pod_data = "\017\000\000\000\000\000\000\000bAddOffs" }, _M_manager = 0x0 }, members of std::function<void()>: _M_invoker = 0x4 }, send_message_ = 0x0, fail_send_message_ = 0x0, orig_send_message_ = 0x0, --Type <RET> for more, q to quit, c to continue without paging-- serializer_ = { <std::_Maybe_unary_or_binary_function<grpc::Status, void const*>> = { <std::unary_function<void const*, grpc::Status>> = {<No data fields>}, <No data fields>}, <std::_Function_base> = { static _M_max_size = 16, static _M_max_align = 8, _M_functor = { _M_unused = { _M_object = 0x9895720, _M_const_object = 0x9895720, _M_function_pointer = 0x9895720, _M_member_pointer = (void (std::_Undefined_class::*)(std::_Undefined_class * const)) 0x9895720, this adjustment 159995600 
}, _M_pod_data = " W\211\t\000\000\000\000\320V\211\t\000\000\000" }, _M_manager = 0x0 }, members of std::function<grpc::Status(void const*)>: _M_invoker = 0x3 }, send_initial_metadata_ = 0x726964 <fmt::v7::detail::write<char, fmt::v7::detail::buffer_appender<char>, float, 0>(fmt::v7::detail::buffer_appender<char>, float, fmt::v7::basic_format_specs<char>, fmt::v7::detail::locale_ref)+644>, code_ = 0x0, error_details_ = 0x0, error_message_ = 0x0, send_trailing_metadata_ = 0x0, recv_message_ = 0x0, hijacked_recv_message_failed_ = 0x0, recv_initial_metadata_ = 0x0, recv_status_ = 0x0, recv_trailing_metadata_ = 0x0 }, saved_status_ = 117 } }, ```
non_defect
tcpdump determines that a message is received but stream read is not responding gdb prints this stream object : stream vptr serverasyncstreaminginterface vptr asyncwriterinterface vptr asyncreaderinterface members of grpc serverasyncreaderwriter call call hook cq call type for more q to quit c to continue without paging max receive message size client rpc info server rpc info ctx meta ops vptr completionqueuetag grpc internal callnoop grpc internal callnoop grpc internal callnoop grpc internal callnoop hijacked false send false flags initial metadata count metadata map initial metadata maybe compression level is set false level members of grpc internal callopset grpc internal callnoop grpc internal callnoop grpc internal callnoop grpc internal callnoop core cq tag return tag call call hook cq call max receive message size client rpc info server rpc info done intercepting false interceptor methods vptr interceptorbatchmethods members of grpc internal interceptorbatchmethodsimpl hooks m elems false current interceptor index reverse false ran hijacking interceptor false call ops callback type for more q to quit c to continue without paging static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data t t m manager members of std function m invoker send message fail send message orig send message serializer static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const m pod data t m manager members of std function m invoker send initial metadata code error details error message send trailing metadata recv message hijacked recv message failed recv initial metadata recv status recv trailing metadata saved status read ops vptr completionqueuetag grpc internal callnoop grpc internal callnoop grpc internal callnoop grpc internal callnoop 
grpc internal callnoop type for more q to quit c to continue without paging got message false message recv buf buffer allow not getting message false hijacked false hijacked recv message failed false members of grpc internal callopset grpc internal callnoop grpc internal callnoop grpc internal callnoop grpc internal callnoop grpc internal callnoop core cq tag return tag call call hook cq call max receive message size client rpc info server rpc info done intercepting false interceptor methods vptr interceptorbatchmethods members of grpc internal interceptorbatchmethodsimpl hooks m elems false false false false false false false false false true false false false current interceptor index reverse true ran hijacking interceptor false call ops callback static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data ύ m manager type for more q to quit c to continue without paging members of std function m invoker send message fail send message orig send message serializer static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data t m manager members of std function m invoker send initial metadata code error details error message send trailing metadata recv message hijacked recv message failed recv initial metadata recv status recv trailing metadata saved status false write ops vptr completionqueuetag grpc internal callnoop grpc internal callnoop hijacked false send false flags initial metadata count metadata map initial metadata maybe compression level is set false level grpc compress level none type for more q to quit c to continue without paging msg hijacked false failed send false send buf buffer write options flags last message false serializer static m max size static m max align m functor m unused m 
object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data t t m manager members of std function m invoker hijacked false send status available false send status code grpc status ok send error details static npos m dataplus members of std basic string std allocator alloc hider m p m string length m local buf m allocated capacity send error message static npos m dataplus members of std basic string std allocator alloc hider type for more q to quit c to continue without paging m p m string length m local buf t t m allocated capacity trailing metadata count metadata map trailing metadata error message slice refcount data refcounted length bytes inlined length g bytes roupname t members of grpc internal callopset grpc internal callnoop grpc internal callnoop core cq tag return tag call call hook cq call max receive message size client rpc info server rpc info done intercepting false interceptor methods vptr interceptorbatchmethods members of grpc internal interceptorbatchmethodsimpl hooks m elems false current interceptor index reverse true ran hijacking interceptor false call ops callback type for more q to quit c to continue without paging static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data m manager members of std function m invoker send message fail send message orig send message serializer static m max size static m max align m functor m unused m object m const object m function pointer m member pointer null m pod data n m manager members of std function m invoker send initial metadata code error details error message send trailing metadata recv message hijacked recv message failed recv initial metadata recv status recv trailing metadata saved status true finish ops vptr completionqueuetag grpc internal callnoop grpc internal callnoop grpc internal 
callnoop type for more q to quit c to continue without paging hijacked false send false flags initial metadata count metadata map initial metadata maybe compression level is set false level grpc compress level none hijacked false send status available false send status code grpc status ok send error details static npos m dataplus members of std basic string std allocator alloc hider m p m string length m local buf ύ m allocated capacity send error message static npos m dataplus members of std basic string std allocator alloc hider m p m string length m local buf t t m allocated capacity trailing metadata count metadata map trailing metadata error message slice refcount data refcounted length bytes inlined length u type for more q to quit c to continue without paging bytes serentity members of grpc internal callopset grpc internal callnoop grpc internal callnoop grpc internal callnoop core cq tag return tag call call hook cq call max receive message size client rpc info server rpc info done intercepting false interceptor methods vptr interceptorbatchmethods members of grpc internal interceptorbatchmethodsimpl hooks m elems false current interceptor index reverse false ran hijacking interceptor false call ops callback static m max size static m max align m functor m unused m object m const object m function pointer m member pointer virtual table offset this adjustment m pod data m manager members of std function m invoker send message fail send message orig send message type for more q to quit c to continue without paging serializer static m max size static m max align m functor m unused m object m const object m function pointer m member pointer void std undefined class std undefined class const this adjustment m pod data w t t m manager members of std function m invoker send initial metadata float fmt detail buffer appender float fmt basic format specs fmt detail locale ref code error details error message send trailing metadata recv message hijacked recv message 
failed recv initial metadata recv status recv trailing metadata saved status
0
270,384
8,459,544,527
IssuesEvent
2018-10-22 16:13:22
tandemcode/tandem
https://api.github.com/repos/tandemcode/tandem
opened
Generate [FRAMEWORK HERE] Project
Feature High priority UX
High priority for quick on-boarding. Should contain framework & language target. - `Generate React project (TypeScript)` - `Generate Static HTML project` - `Generate Laravel project (PHP)`
1.0
Generate [FRAMEWORK HERE] Project - High priority for quick on-boarding. Should contain framework & language target. - `Generate React project (TypeScript)` - `Generate Static HTML project` - `Generate Laravel project (PHP)`
non_defect
generate project high priority for quick on boarding should contain framework language target generate react project typescript generate static html project generate laravel project php
0
32,391
7,531,106,144
IssuesEvent
2018-04-15 00:42:02
dahall/TaskScheduler
https://api.github.com/repos/dahall/TaskScheduler
closed
Problem when compiling as DotNET 4.0 client profile
codeplex-disc
Hello! Thanks for the great work. I'm trying to compile under 4.0 client (which is the latest included by default in Windows 7 SP1) and I'm facing this error: > .NETFramework,Version=v2.0,Profile=Client" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend. It is compiling, I see the 4.0 under bin updating. The problem is that after being sure it compiles correctly I'd like to include it directly into my project. How can I disable compiling for all architectures but 4.0 client? I can't seem to find that configuration anywhere, I'm sorry if this is very basic or not directly related to your project. Originally posted: 2016-07-08T09:51:26
1.0
Problem when compiling as DotNET 4.0 client profile - Hello! Thanks for the great work. I'm trying to compile under 4.0 client (which is the latest included by default in Windows 7 SP1) and I'm facing this error: > .NETFramework,Version=v2.0,Profile=Client" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend. It is compiling, I see the 4.0 under bin updating. The problem is that after being sure it compiles correctly I'd like to include it directly into my project. How can I disable compiling for all architectures but 4.0 client? I can't seem to find that configuration anywhere, I'm sorry if this is very basic or not directly related to your project. Originally posted: 2016-07-08T09:51:26
non_defect
problem when compiling as dotnet client profile hello thanks for the great work i m trying to compile under client which is the latest included by default in windows and i m facing this error netframework version profile client were not found to resolve this install the sdk or targeting pack for this framework version or retarget your application to a version of the framework for which you have the sdk or targeting pack installed note that assemblies will be resolved from the global assembly cache gac and will be used in place of reference assemblies therefore your assembly may not be correctly targeted for the framework you intend it is compiling i see the under bin updating the problem is that after being sure it compiles correctly i d like to include it directly into my project how can i disable compiling for all architectures but client i can t seem to find that configuration anywhere i m sorry if this is very basic or not directly related to your project originally posted
0
156,559
19,901,351,215
IssuesEvent
2022-01-25 08:17:56
kedacore/sample-dotnet-worker-servicebus-queue
https://api.github.com/repos/kedacore/sample-dotnet-worker-servicebus-queue
closed
CVE-2017-0256 (Medium) detected in system.net.http.4.3.0.nupkg
security vulnerability
## CVE-2017-0256 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/Keda.Samples.DotNet.Web/Keda.Samples.DotNet.Web.csproj</p> <p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - microsoft.azure.management.servicebus.2.1.0.nupkg (Root Library) - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kedacore/sample-dotnet-worker-servicebus-queue/commit/abcaa6e51b50b94f21d398225dc8963e81053704">abcaa6e51b50b94f21d398225dc8963e81053704</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A spoofing vulnerability exists when the ASP.NET Core fails to properly sanitize web requests. 
<p>Publish Date: 2017-05-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0256>CVE-2017-0256</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-0256">https://nvd.nist.gov/vuln/detail/CVE-2017-0256</a></p> <p>Release Date: 2017-05-12</p> <p>Fix Resolution: Microsoft.AspNetCore.Mvc.ApiExplorer - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Abstractions - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.1.3,1.0.4;System.Net.Http - 4.1.2,4.3.2;Microsoft.AspNetCore.Mvc.Razor - 1.1.3,1.0.4;System.Net.Http.WinHttpHandler - 4.0.2,4.3.0-preview1-24530-04;System.Net.Security - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;System.Text.Encodings.Web - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.Razor.Host - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3;System.Net.WebSockets.Client - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3</p> 
</p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-0256 (Medium) detected in system.net.http.4.3.0.nupkg - ## CVE-2017-0256 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary> <p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p> <p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p> <p>Path to dependency file: /src/Keda.Samples.DotNet.Web/Keda.Samples.DotNet.Web.csproj</p> <p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p> <p> Dependency Hierarchy: - microsoft.azure.management.servicebus.2.1.0.nupkg (Root Library) - :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kedacore/sample-dotnet-worker-servicebus-queue/commit/abcaa6e51b50b94f21d398225dc8963e81053704">abcaa6e51b50b94f21d398225dc8963e81053704</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A spoofing vulnerability exists when the ASP.NET Core fails to properly sanitize web requests. 
<p>Publish Date: 2017-05-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-0256>CVE-2017-0256</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-0256">https://nvd.nist.gov/vuln/detail/CVE-2017-0256</a></p> <p>Release Date: 2017-05-12</p> <p>Fix Resolution: Microsoft.AspNetCore.Mvc.ApiExplorer - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Abstractions - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Core - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Cors - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.Localization - 1.1.3,1.0.4;System.Net.Http - 4.1.2,4.3.2;Microsoft.AspNetCore.Mvc.Razor - 1.1.3,1.0.4;System.Net.Http.WinHttpHandler - 4.0.2,4.3.0-preview1-24530-04;System.Net.Security - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.ViewFeatures - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.TagHelpers - 1.0.4,1.1.3;System.Text.Encodings.Web - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.Razor.Host - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.Formatters.Json - 1.0.4,1.1.3;Microsoft.AspNetCore.Mvc.WebApiCompatShim - 1.0.4,1.1.3;System.Net.WebSockets.Client - 4.3.0-preview1-24530-04,4.0.1;Microsoft.AspNetCore.Mvc.Formatters.Xml - 1.1.3,1.0.4;Microsoft.AspNetCore.Mvc.DataAnnotations - 1.0.4,1.1.3</p> 
</p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in system net http nupkg cve medium severity vulnerability vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file src keda samples dotnet web keda samples dotnet web csproj path to vulnerable library usr share dotnet sdk nugetfallbackfolder system net http system net http nupkg dependency hierarchy microsoft azure management servicebus nupkg root library x system net http nupkg vulnerable library found in head commit a href found in base branch main vulnerability details a spoofing vulnerability exists when the asp net core fails to properly sanitize web requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution microsoft aspnetcore mvc apiexplorer microsoft aspnetcore mvc abstractions microsoft aspnetcore mvc core microsoft aspnetcore mvc cors microsoft aspnetcore mvc localization system net http microsoft aspnetcore mvc razor system net http winhttphandler system net security microsoft aspnetcore mvc viewfeatures microsoft aspnetcore mvc taghelpers system text encodings web microsoft aspnetcore mvc razor host microsoft aspnetcore mvc formatters json microsoft aspnetcore mvc webapicompatshim system net websockets client microsoft aspnetcore mvc formatters xml microsoft aspnetcore mvc dataannotations step up your open source security game with whitesource
0
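The CVE record above resolves by raising `System.Net.Http` past the vulnerable 4.3.0 to a fixed floor (4.1.2 / 4.3.2). The core check a dependency scanner performs is a dotted-version comparison against that floor; the sketch below is a minimal illustration, not WhiteSource's actual logic (real scanners handle full NuGet/semver rules, pre-release tags, and version ranges).

```java
// Minimal dotted-version comparison (illustrative sketch only).
class Versions {
    // Compare "a" and "b" component-by-component, padding the shorter with zeros.
    static int compare(String a, String b) {
        String[] xs = a.split("\\.");
        String[] ys = b.split("\\.");
        int n = Math.max(xs.length, ys.length);
        for (int i = 0; i < n; i++) {
            int x = i < xs.length ? Integer.parseInt(xs[i]) : 0;
            int y = i < ys.length ? Integer.parseInt(ys[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    // A library is flagged when the installed version sits below the fixed floor,
    // e.g. 4.3.0 < 4.3.2 for the record above.
    static boolean vulnerable(String installed, String fixedFloor) {
        return compare(installed, fixedFloor) < 0;
    }
}
```

Under this model the record's advice is exactly `vulnerable("4.3.0", "4.3.2")` being true: any upgrade to 4.3.2 or later clears the flag.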
30,748
11,846,381,634
IssuesEvent
2020-03-24 10:05:48
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Don't rely just on presence of the cookie to detect active user session
Feature:NP Migration Feature:Security/Authentication Team:Security blocker
Currently we have a code in the [login view route](https://github.com/elastic/kibana/blob/master/x-pack/legacy/plugins/security/server/routes/views/login.js#L29) that decides whether it should render login view or not (and redirect to the Kibana root) solely based on the presence of the session cookie. That worked well in the legacy world where we had only `basic` authentication provider, but now it has the following issues: * when both `saml\oidc` and `basic` providers are used presence of the session cookie may have 3 different meanings: 1. user is authenticated with the `basic` 2. user is authenticated with the `saml\oidc` 3. user is not authenticated yet, but in the process of SAML\OpenID Connect handshake with an intermediate cookie And if user wishes to open login view in cases **2** or **3** it'd make sense to allow them to do that (and effectively fix https://github.com/elastic/kibana/issues/25257) * NP `KibanaRequest` won't give us direct access to the cookie anymore (justifies `blocker` status for 8.0) It seems currently we have all the necessary pieces to finally fix that issue: `Authenticator` is able to extract session from `KibanaRequest` using internal `sessionStorageFactory` even if the route that handles this request explicitly opted out from mandatory authentication. So essentially we can have a new method on `Authenticator` that would answer whether or not request is associated with the active session (not intermediate) and which provider owns this session using the code similar to one we have in the `authenticate` method. There are two questions we should answer first though: * do we want to treat requests with valid `Authorization` header (or any other header we may support in the future) the same as requests with valid cookie-based session (like we do for normal authentication)? If the answer is `yes`, then we'll have to iterate through all providers trying to figure out which provider can authenticate the request and return user information. 
My current preference is to answer `no` to precisely replicate current behavior and keep code simpler. We should stop supporting this scenario in the authentication providers eventually anyway and use a dedicated `proxy-headers` authentication provider instead (once Reporting starts using API keys instead of request headers "snapshots" or we can add this special authentication provider automatically as the first one). * are we okay with any side-effects that may happen when we call provider's `authenticate` just to know if the session is active (e.g. access token will be automatically refreshed if expired, maybe there are more side-effects)? cc @restrry **Supersedes: https://github.com/elastic/kibana/issues/41959**
True
Don't rely just on presence of the cookie to detect active user session - Currently we have a code in the [login view route](https://github.com/elastic/kibana/blob/master/x-pack/legacy/plugins/security/server/routes/views/login.js#L29) that decides whether it should render login view or not (and redirect to the Kibana root) solely based on the presence of the session cookie. That worked well in the legacy world where we had only `basic` authentication provider, but now it has the following issues: * when both `saml\oidc` and `basic` providers are used presence of the session cookie may have 3 different meanings: 1. user is authenticated with the `basic` 2. user is authenticated with the `saml\oidc` 3. user is not authenticated yet, but in the process of SAML\OpenID Connect handshake with an intermediate cookie And if user wishes to open login view in cases **2** or **3** it'd make sense to allow them to do that (and effectively fix https://github.com/elastic/kibana/issues/25257) * NP `KibanaRequest` won't give us direct access to the cookie anymore (justifies `blocker` status for 8.0) It seems currently we have all the necessary pieces to finally fix that issue: `Authenticator` is able to extract session from `KibanaRequest` using internal `sessionStorageFactory` even if the route that handles this request explicitly opted out from mandatory authentication. So essentially we can have a new method on `Authenticator` that would answer whether or not request is associated with the active session (not intermediate) and which provider owns this session using the code similar to one we have in the `authenticate` method. There are two questions we should answer first though: * do we want to treat requests with valid `Authorization` header (or any other header we may support in the future) the same as requests with valid cookie-based session (like we do for normal authentication)? 
If the answer is `yes`, then we'll have to iterate through all providers trying to figure out which provider can authenticate the request and return user information. My current preference is to answer `no` to precisely replicate current behavior and keep code simpler. We should stop supporting this scenario in the authentication providers eventually anyway and use a dedicated `proxy-headers` authentication provider instead (once Reporting starts using API keys instead of request headers "snapshots" or we can add this special authentication provider automatically as the first one). * are we okay with any side-effects that may happen when we call provider's `authenticate` just to know if the session is active (e.g. access token will be automatically refreshed if expired, maybe there are more side-effects)? cc @restrry **Supersedes: https://github.com/elastic/kibana/issues/41959**
non_defect
don t rely just on presence of the cookie to detect active user session currently we have a code in the that decides whether it should render login view or not and redirect to the kibana root solely based on the presence of the session cookie that worked well in the legacy world where we had only basic authentication provider but now it has the following issues when both saml oidc and basic providers are used presence of the session cookie may have different meanings user is authenticated with the basic user is authenticated with the saml oidc user is not authenticated yet but in the process of saml openid connect handshake with an intermediate cookie and if user wishes to open login view in cases or it d make sense to allow them to do that and effectively fix np kibanarequest won t give us direct access to the cookie anymore justifies blocker status for it seems currently we have all the necessary pieces to finally fix that issue authenticator is able to extract session from kibanarequest using internal sessionstoragefactory even if the route that handles this request explicitly opted out from mandatory authentication so essentially we can have a new method on authenticator that would answer whether or not request is associated with the active session not intermediate and which provider owns this session using the code similar to one we have in the authenticate method there are two questions we should answer first though do we want to treat requests with valid authorization header or any other header we may support in the future the same as requests with valid cookie based session like we do for normal authentication if the answer is yes then we ll have to iterate through all providers trying to figure out which provider can authenticate the request and return user information my current preference is to answer no to precisely replicate current behavior and keep code simpler we should stop supporting this scenario in the authentication providers eventually anyway 
and use a dedicated proxy headers authentication provider instead once reporting starts using api keys instead of request headers snapshots or we can add this special authentication provider automatically as the first one are we okay with any side effects that may happen when we call provider s authenticate just to know if the session is active e g access token will be automatically refreshed if expired maybe there are more side effects cc restrry supersedes
0
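The Kibana record above hinges on one observation: a session cookie's mere presence is ambiguous between an active authenticated session, an intermediate SAML/OpenID Connect handshake, and a stale value. A toy model of that three-way distinction follows; all names here are hypothetical, and this is not Kibana's actual `Authenticator` or `sessionStorageFactory` API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// The three meanings a session cookie can carry, per the issue above.
enum SessionState { NONE, INTERMEDIATE, ACTIVE }

class SessionChecker {
    // cookie -> owning provider ("basic", "saml", "oidc") for authenticated sessions
    private final Map<String, String> activeSessions = new HashMap<>();
    // cookies issued mid-handshake (SAML/OpenID Connect), not yet authenticated
    private final Set<String> handshakeCookies = new HashSet<>();

    void startHandshake(String cookie) { handshakeCookies.add(cookie); }

    void completeHandshake(String cookie, String provider) {
        handshakeCookies.remove(cookie);
        activeSessions.put(cookie, provider);
    }

    // Presence alone answers nothing; resolve the cookie to one of three states.
    SessionState stateOf(Optional<String> cookie) {
        if (!cookie.isPresent()) return SessionState.NONE;
        if (activeSessions.containsKey(cookie.get())) return SessionState.ACTIVE;
        if (handshakeCookies.contains(cookie.get())) return SessionState.INTERMEDIATE;
        return SessionState.NONE; // stale or unknown cookie: treat as no session
    }

    // Which provider owns an active session (the second question the issue raises).
    Optional<String> ownerOf(String cookie) {
        return Optional.ofNullable(activeSessions.get(cookie));
    }
}
```

A login route built on this model would redirect to the application root only on `ACTIVE`, and still render the login view on `NONE` or `INTERMEDIATE`, which is the behavior the issue asks for.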
68,290
21,607,342,665
IssuesEvent
2022-05-04 06:00:18
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
opened
[🐛 Bug]: wait.until is not usable
I-defect needs-triaging
### What happened? My "wait.until" function worked fine yesterday, but all a sudden now is not runnable. Error message: reason: no instance(s) of type variable(s) V exist so that ExpectedCondition<WebElement> conforms to Function<? super WebDriver, V> I have looked it up online, and people said it is related to the inconsistent versions of Guava and Selenium. However, I have already checked my project structure, and there are no other extra versions of Guava and Selenium. ### How can we reproduce the issue? ```shell I am not sure how to reproduce since I don't know what's wrong. The only thing I can provide is what I am using. Sorry and thanks. Please see my pom.xml below: <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>4.1.3</version> </dependency> <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>31.1-jre</version> </dependency> ``` The code I have run: ` public void visibilityOfElementLocatedByXpath(String xpath){ WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10)); wait.until**(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath)))**; }` In addition, I am using JAVA 17, maven project with IntelliJ IDEA with MacBook Air (M1,2020). ``` ### Relevant log output ```shell java: method until in class org.openqa.selenium.support.ui.FluentWait<T> cannot be applied to given types; required: java.util.function.Function<? super org.openqa.selenium.WebDriver,V> found: org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> reason: cannot infer type-variable(s) V (argument mismatch; org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> cannot be converted to java.util.function.Function<? super org.openqa.selenium.WebDriver,V>) ``` ### Operating System MacBook Air (M1,2020) - macOS Monterey 12.2.1 ### Selenium version 4.1.3 ### What are the browser(s) and version(s) where you see this issue? 
Not-related ### What are the browser driver(s) and version(s) where you see this issue? Not-related ### Are you using Selenium Grid? No
1.0
[🐛 Bug]: wait.until is not usable - ### What happened? My "wait.until" function worked fine yesterday, but all a sudden now is not runnable. Error message: reason: no instance(s) of type variable(s) V exist so that ExpectedCondition<WebElement> conforms to Function<? super WebDriver, V> I have looked it up online, and people said it is related to the inconsistent versions of Guava and Selenium. However, I have already checked my project structure, and there are no other extra versions of Guava and Selenium. ### How can we reproduce the issue? ```shell I am not sure how to reproduce since I don't know what's wrong. The only thing I can provide is what I am using. Sorry and thanks. Please see my pom.xml below: <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>4.1.3</version> </dependency> <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>31.1-jre</version> </dependency> ``` The code I have run: ` public void visibilityOfElementLocatedByXpath(String xpath){ WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10)); wait.until**(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath)))**; }` In addition, I am using JAVA 17, maven project with IntelliJ IDEA with MacBook Air (M1,2020). ``` ### Relevant log output ```shell java: method until in class org.openqa.selenium.support.ui.FluentWait<T> cannot be applied to given types; required: java.util.function.Function<? super org.openqa.selenium.WebDriver,V> found: org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> reason: cannot infer type-variable(s) V (argument mismatch; org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> cannot be converted to java.util.function.Function<? 
super org.openqa.selenium.WebDriver,V>) ``` ### Operating System MacBook Air (M1,2020) - macOS Monterey 12.2.1 ### Selenium version 4.1.3 ### What are the browser(s) and version(s) where you see this issue? Not-related ### What are the browser driver(s) and version(s) where you see this issue? Not-related ### Are you using Selenium Grid? No
defect
wait until is not usable what happened my wait until function worked fine yesterday but all a sudden now is not runnable error message reason no instance s of type variable s v exist so that expectedcondition conforms to function i have looked it up online and people said it is related to the inconsistent versions of guava and selenium however i have already checked my project structure and there are no other extra versions of guava and selenium how can we reproduce the issue shell i am not sure how to reproduce since i don t know what s wrong the only thing i can provide is what i am using sorry and thanks please see my pom xml below org seleniumhq selenium selenium java com google guava guava jre the code i have run public void visibilityofelementlocatedbyxpath string xpath webdriverwait wait new webdriverwait driver duration ofseconds wait until expectedconditions visibilityofelementlocated by xpath xpath in addition i am using java maven project with intellij idea with macbook air relevant log output shell java method until in class org openqa selenium support ui fluentwait cannot be applied to given types required java util function function found org openqa selenium support ui expectedcondition reason cannot infer type variable s v argument mismatch org openqa selenium support ui expectedcondition cannot be converted to java util function function operating system macbook air macos monterey selenium version what are the browser s and version s where you see this issue not related what are the browser driver s and version s where you see this issue not related are you using selenium grid no
1
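The Selenium record above reports `until` rejecting an `ExpectedCondition<WebElement>` because it does not conform to `Function<? super WebDriver, V>`. That error shape usually means the classpath resolves `ExpectedCondition` from a stale pre-3.2 Selenium jar (where it extended Guava's `com.google.common.base.Function`) while `FluentWait` comes from a 4.x jar expecting `java.util.function.Function`; re-importing the project after pruning stale jars typically clears it. The toy stand-in below shows the signature the 4.x `until` expects; it is a simplification, not the real `org.openqa.selenium.support.ui.FluentWait`.

```java
import java.util.function.Function;

// Simplified stand-in for FluentWait<T>; not the real Selenium class.
class ToyWait<T> {
    private final T input;

    ToyWait(T input) { this.input = input; }

    // Matches the 4.x shape: the condition must be a java.util.function.Function,
    // so the type variable V is inferred from the lambda's return type.
    <V> V until(Function<? super T, V> isTrue) {
        V value = isTrue.apply(input);
        if (value == null || Boolean.FALSE.equals(value)) {
            throw new IllegalStateException("condition not met");
        }
        return value;
    }
}
```

With a single pinned `selenium-java` version on the classpath, `wait.until(ExpectedConditions.visibilityOfElementLocated(...))` compiles again, because 4.x's `ExpectedCondition` itself extends `java.util.function.Function`.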
653
7,716,043,201
IssuesEvent
2018-05-23 09:29:55
kubevirt/kubevirt-ansible
https://api.github.com/repos/kubevirt/kubevirt-ansible
closed
default openshift to 3.9
automation bug docs enhancement
- main readme.md says pull 3.7 openshift branch - playbooks/cluster/openshift/README.md defaults to 3.7 - kubevirt-ansible/automation/check-patch.sh runs it for 3.7 - kubevirt-ansible/playbooks/cluster/openshift/config.ym defaults it to 3.7 (though it should use vars/all.yml - see issue #146) But vars/all.yml defaults it to 3.9. We should pick one version, and actually default it to 3.9 since it's the latest release and that's where devs will want to target.
1.0
default openshift to 3.9 - - main readme.md says pull 3.7 openshift branch - playbooks/cluster/openshift/README.md defaults to 3.7 - kubevirt-ansible/automation/check-patch.sh runs it for 3.7 - kubevirt-ansible/playbooks/cluster/openshift/config.ym defaults it to 3.7 (though it should use vars/all.yml - see issue #146) But vars/all.yml defaults it to 3.9. We should pick one version, and actually default it to 3.9 since it's the latest release and that's where devs will want to target.
non_defect
default openshift to main readme md says pull openshift branch playbooks cluster openshift readme md defaults to kubevirt ansible automation check patch sh runs it for kubevirt ansible playbooks cluster openshift config ym defaults it to though it should use vars all yml see issue but vars all yml defaults it to we should pick one version and actually default it to since it s the latest release and that s where devs will want to target
0
56,977
11,697,377,632
IssuesEvent
2020-03-06 11:41:14
fac19/week1-guardians
https://api.github.com/repos/fac19/week1-guardians
closed
Photos are unnecessarily large
code review
One of them is 8MB and some of the others are pretty big too. It would be a good idea to make them smaller so as not to waste cellular data.
1.0
Photos are unnecessarily large - One of them is 8MB and some of the others are pretty big too. It would be a good idea to make them smaller so as not to waste cellular data.
non_defect
photos are unnecessarily large one of them is and some of the others are pretty big too it would be a good idea to make them smaller so as not to waste cellular data
0
4,840
2,610,157,949
IssuesEvent
2015-02-26 18:50:10
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Dooku
auto-migrated Priority-Medium Type-Defect
``` Adjust walk speed for Dooku.. make him a bit slower ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:57
1.0
Dooku - ``` Adjust walk speed for Dooku.. make him a bit slower ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:57
defect
dooku adjust walk speed for dooku make him a bit slower original issue reported on code google com by gmail com on jan at
1
218,484
24,373,313,616
IssuesEvent
2022-10-03 21:24:50
opensearch-project/data-prepper
https://api.github.com/repos/opensearch-project/data-prepper
opened
CVE-2022-42004 (Medium) detected in jackson-databind-2.13.3.jar
security vulnerability
## CVE-2022-42004 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.13.3.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /data-prepper-plugins/parse-json-processor/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/e/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/e/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/e/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556
b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar,/e/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.13.3/56deb9ea2c93a7a556b3afbedd616d342963464e/jackson-databind-2.13.3.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.13.3.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/5a73bf31a2a3abdaf2b81c0d0784ba51ed19c122">5a73bf31a2a3abdaf2b81c0d0784ba51ed19c122</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization. 
<p>Publish Date: 2022-10-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42004>CVE-2022-42004</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.13.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
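The suggested fix above (upgrading to jackson-databind 2.13.4) can be pinned from a consuming Gradle build. A minimal sketch, assuming a plain Gradle Java project — the configuration below is illustrative and not taken from the data-prepper build files, which may manage versions differently (e.g. via a BOM or version catalog):

```groovy
// Hedged sketch: force the patched jackson-databind everywhere in the build.
// This overrides any transitive request for the vulnerable 2.13.3 release.
configurations.all {
    resolutionStrategy {
        force 'com.fasterxml.jackson.core:jackson-databind:2.13.4'
    }
}
```

After adding this, `gradle dependencies` should report 2.13.3 -> 2.13.4 for the affected configurations, which can be used to confirm the override took effect.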
True
non_defect
0
78,296
27,417,138,889
IssuesEvent
2023-03-01 14:29:35
dotCMS/core
https://api.github.com/repos/dotCMS/core
closed
Upgrade Task failing
Type : Defect Merged QA : Passed Internal Team : Falcon Release : 23.03
### Problem Statement I have a live db that is failing to upgrade ``` 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ==> Executing upgrade script 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.141 FATAL runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - Unable to execute SQL upgrade org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. 
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:333) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:319) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:295) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:290) ~[postgresql-42.5.1.jar:42.5.1] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95) ~[HikariCP-3.4.2.jar:?] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-3.4.2.jar:?] at com.dotmarketing.common.db.DotConnect.executeStatement(DotConnect.java:264) ~[classes/:?] at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:790) ~[classes/:?] at com.dotmarketing.startup.StartupTasksExecutor.executeUpgrades(StartupTasksExecutor.java:199) ~[classes/:?] at com.liferay.portal.servlet.MainServlet.init(MainServlet.java:119) ~[classes/:?] 
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1164) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1117) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1010) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4957) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5264) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1185) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1933) ~[catalina.jar:9.0.60] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118) ~[?:?] 
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1095) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:477) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386) ~[catalina.jar:9.0.60] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140) ~[?:?] 
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:263) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:927) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.Catalina.start(Catalina.java:772) ~[catalina.jar:9.0.60] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345) ~[bootstrap.jar:9.0.60] at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476) ~[bootstrap.jar:9.0.60] 16:11:08.143 ERROR startup.StartupTasksExecutor - FATAL: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. com.dotmarketing.exception.DotDataException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:793) ~[classes/:?] 
at com.dotmarketing.startup.StartupTasksExecutor.executeUpgrades(StartupTasksExecutor.java:199) ~[classes/:?] at com.liferay.portal.servlet.MainServlet.init(MainServlet.java:119) ~[classes/:?] at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1164) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1117) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1010) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4957) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5264) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1185) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1933) ~[catalina.jar:9.0.60] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118) ~[?:?] 
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1095) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:477) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386) ~[catalina.jar:9.0.60] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140) ~[?:?] 
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:263) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:927) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.Catalina.start(Catalina.java:772) ~[catalina.jar:9.0.60] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345) ~[bootstrap.jar:9.0.60] at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476) ~[bootstrap.jar:9.0.60] Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. 
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:333) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:319) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:295) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:290) ~[postgresql-42.5.1.jar:42.5.1] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95) ~[HikariCP-3.4.2.jar:?] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-3.4.2.jar:?] at com.dotmarketing.common.db.DotConnect.executeStatement(DotConnect.java:264) ~[classes/:?] at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:790) ~[classes/:?] ... 46 more ``` ### Steps to Reproduce Load the custom DB run `insert into db_version values (230111, now()) ;` start dotCMS ### Acceptance Criteria Database should be updated cleanly. ### dotCMS Version 23.02 ### Proposed Objective Cloud Engineering ### Proposed Priority Please Select ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs _No response_ ### Sub-Tasks & Estimates _No response_
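The failure above indicates the migration's INSERT into `tag_inode` collides with a `(tag_id, inode)` pair that already exists under the `tag_inode_pkey` constraint. A hedged sketch of a duplicate-tolerant insert — the real statement inside `Task230119MigrateContentToProperPersonaTagAndRemoveDupTags` is not shown in this issue, so the source of the migrated rows below is illustrative:

```sql
-- Hypothetical shape of a de-duplicating migration insert (PostgreSQL).
-- 'migrated_rows' stands in for whatever SELECT the upgrade task builds;
-- it is not a real table in the dotCMS schema.
INSERT INTO tag_inode (tag_id, inode)
SELECT m.tag_id, m.inode
  FROM migrated_rows m
ON CONFLICT (tag_id, inode) DO NOTHING;  -- skip pairs already present
```

`ON CONFLICT ... DO NOTHING` works here because the error shows `tag_inode_pkey` is exactly the `(tag_id, inode)` pair, so the conflict target matches an existing unique constraint.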
1.0
Upgrade Task failing - ### Problem Statement I have a live db that is failing to upgrade ``` 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ==> Executing upgrade script 16:11:08.124 INFO runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - ====================================================================== 16:11:08.141 FATAL runonce.Task230119MigrateContentToProperPersonaTagAndRemoveDupTags - Unable to execute SQL upgrade org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. 
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:333) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:319) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:295) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:290) ~[postgresql-42.5.1.jar:42.5.1] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95) ~[HikariCP-3.4.2.jar:?] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-3.4.2.jar:?] at com.dotmarketing.common.db.DotConnect.executeStatement(DotConnect.java:264) ~[classes/:?] at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:790) ~[classes/:?] at com.dotmarketing.startup.StartupTasksExecutor.executeUpgrades(StartupTasksExecutor.java:199) ~[classes/:?] at com.liferay.portal.servlet.MainServlet.init(MainServlet.java:119) ~[classes/:?] 
at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1164) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1117) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1010) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4957) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5264) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1185) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1933) ~[catalina.jar:9.0.60] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118) ~[?:?] 
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1095) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:477) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386) ~[catalina.jar:9.0.60] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140) ~[?:?] 
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:263) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:927) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.Catalina.start(Catalina.java:772) ~[catalina.jar:9.0.60] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345) ~[bootstrap.jar:9.0.60] at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476) ~[bootstrap.jar:9.0.60] 16:11:08.143 ERROR startup.StartupTasksExecutor - FATAL: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. com.dotmarketing.exception.DotDataException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:793) ~[classes/:?] 
at com.dotmarketing.startup.StartupTasksExecutor.executeUpgrades(StartupTasksExecutor.java:199) ~[classes/:?] at com.liferay.portal.servlet.MainServlet.init(MainServlet.java:119) ~[classes/:?] at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1164) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1117) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1010) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4957) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5264) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1185) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1933) ~[catalina.jar:9.0.60] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118) ~[?:?] 
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1095) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:477) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396) ~[catalina.jar:9.0.60] at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386) ~[catalina.jar:9.0.60] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) ~[tomcat-util.jar:9.0.60] at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140) ~[?:?] 
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:263) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:927) ~[catalina.jar:9.0.60] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[catalina.jar:9.0.60] at org.apache.catalina.startup.Catalina.start(Catalina.java:772) ~[catalina.jar:9.0.60] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?] at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345) ~[bootstrap.jar:9.0.60] at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476) ~[bootstrap.jar:9.0.60] Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "tag_inode_pkey" Detail: Key (tag_id, inode)=(ab344fa0-c909-4b3c-aa8c-6bacd4f770fa, 7dbb1cb6-1058-46ff-adf4-21bce8f63216) already exists. 
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:333) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:319) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:295) ~[postgresql-42.5.1.jar:42.5.1] at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:290) ~[postgresql-42.5.1.jar:42.5.1] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95) ~[HikariCP-3.4.2.jar:?] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-3.4.2.jar:?] at com.dotmarketing.common.db.DotConnect.executeStatement(DotConnect.java:264) ~[classes/:?] at com.dotmarketing.startup.AbstractJDBCStartupTask.executeUpgrade(AbstractJDBCStartupTask.java:790) ~[classes/:?] ... 46 more ``` ### Steps to Reproduce Load the custom DB run `insert into db_version values (230111, now()) ;` start dotCMS ### Acceptance Criteria Database should be updated cleanly. ### dotCMS Version 23.02 ### Proposed Objective Cloud Engineering ### Proposed Priority Please Select ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs _No response_ ### Sub-Tasks & Estimates _No response_
defect
upgrade task failing problem statement i have a live db that is failing to upgrade info runonce info runonce info runonce executing upgrade script info runonce fatal runonce unable to execute sql upgrade org postgresql util psqlexception error duplicate key value violates unique constraint tag inode pkey detail key tag id inode already exists at org postgresql core queryexecutorimpl receiveerrorresponse queryexecutorimpl java at org postgresql core queryexecutorimpl processresults queryexecutorimpl java at org postgresql core queryexecutorimpl execute queryexecutorimpl java at org postgresql jdbc pgstatement executeinternal pgstatement java at org postgresql jdbc pgstatement execute pgstatement java at org postgresql jdbc pgstatement executewithflags pgstatement java at org postgresql jdbc pgstatement executecachedsql pgstatement java at org postgresql jdbc pgstatement executewithflags pgstatement java at org postgresql jdbc pgstatement execute pgstatement java at com zaxxer hikari pool proxystatement execute proxystatement java at com zaxxer hikari pool hikariproxystatement execute hikariproxystatement java at com dotmarketing common db dotconnect executestatement dotconnect java at com dotmarketing startup abstractjdbcstartuptask executeupgrade abstractjdbcstartuptask java at com dotmarketing startup startuptasksexecutor executeupgrades startuptasksexecutor java at com liferay portal servlet mainservlet init mainservlet java at org apache catalina core standardwrapper initservlet standardwrapper java at org apache catalina core standardwrapper loadservlet standardwrapper java at org apache catalina core standardwrapper load standardwrapper java at org apache catalina core standardcontext loadonstartup standardcontext java at org apache catalina core standardcontext startinternal standardcontext java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase addchildinternal containerbase java at org apache 
catalina core containerbase addchild containerbase java at org apache catalina core standardhost addchild standardhost java at org apache catalina startup hostconfig deploydirectory hostconfig java at org apache catalina startup hostconfig deploydirectory run hostconfig java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at org apache tomcat util threads inlineexecutorservice execute inlineexecutorservice java at java util concurrent abstractexecutorservice submit abstractexecutorservice java at org apache catalina startup hostconfig deploydirectories hostconfig java at org apache catalina startup hostconfig deployapps hostconfig java at org apache catalina startup hostconfig start hostconfig java at org apache catalina startup hostconfig lifecycleevent hostconfig java at org apache catalina util lifecyclebase firelifecycleevent lifecyclebase java at org apache catalina util lifecyclebase setstateinternal lifecyclebase java at org apache catalina util lifecyclebase setstate lifecyclebase java at org apache catalina core containerbase startinternal containerbase java at org apache catalina core standardhost startinternal standardhost java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase startchild call containerbase java at org apache catalina core containerbase startchild call containerbase java at java util concurrent futuretask run futuretask java at org apache tomcat util threads inlineexecutorservice execute inlineexecutorservice java at java util concurrent abstractexecutorservice submit abstractexecutorservice java at org apache catalina core containerbase startinternal containerbase java at org apache catalina core standardengine startinternal standardengine java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core standardservice startinternal standardservice java at org apache 
catalina util lifecyclebase start lifecyclebase java at org apache catalina core standardserver startinternal standardserver java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina startup catalina start catalina java at jdk internal reflect nativemethodaccessorimpl native method at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache catalina startup bootstrap start bootstrap java at org apache catalina startup bootstrap main bootstrap java error startup startuptasksexecutor fatal error duplicate key value violates unique constraint tag inode pkey detail key tag id inode already exists com dotmarketing exception dotdataexception error duplicate key value violates unique constraint tag inode pkey detail key tag id inode already exists at com dotmarketing startup abstractjdbcstartuptask executeupgrade abstractjdbcstartuptask java at com dotmarketing startup startuptasksexecutor executeupgrades startuptasksexecutor java at com liferay portal servlet mainservlet init mainservlet java at org apache catalina core standardwrapper initservlet standardwrapper java at org apache catalina core standardwrapper loadservlet standardwrapper java at org apache catalina core standardwrapper load standardwrapper java at org apache catalina core standardcontext loadonstartup standardcontext java at org apache catalina core standardcontext startinternal standardcontext java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase addchildinternal containerbase java at org apache catalina core containerbase addchild containerbase java at org apache catalina core standardhost addchild standardhost java at org apache catalina startup hostconfig deploydirectory hostconfig java at org apache catalina startup 
hostconfig deploydirectory run hostconfig java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at org apache tomcat util threads inlineexecutorservice execute inlineexecutorservice java at java util concurrent abstractexecutorservice submit abstractexecutorservice java at org apache catalina startup hostconfig deploydirectories hostconfig java at org apache catalina startup hostconfig deployapps hostconfig java at org apache catalina startup hostconfig start hostconfig java at org apache catalina startup hostconfig lifecycleevent hostconfig java at org apache catalina util lifecyclebase firelifecycleevent lifecyclebase java at org apache catalina util lifecyclebase setstateinternal lifecyclebase java at org apache catalina util lifecyclebase setstate lifecyclebase java at org apache catalina core containerbase startinternal containerbase java at org apache catalina core standardhost startinternal standardhost java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase startchild call containerbase java at org apache catalina core containerbase startchild call containerbase java at java util concurrent futuretask run futuretask java at org apache tomcat util threads inlineexecutorservice execute inlineexecutorservice java at java util concurrent abstractexecutorservice submit abstractexecutorservice java at org apache catalina core containerbase startinternal containerbase java at org apache catalina core standardengine startinternal standardengine java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core standardservice startinternal standardservice java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core standardserver startinternal standardserver java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina startup 
catalina start catalina java at jdk internal reflect nativemethodaccessorimpl native method at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache catalina startup bootstrap start bootstrap java at org apache catalina startup bootstrap main bootstrap java caused by org postgresql util psqlexception error duplicate key value violates unique constraint tag inode pkey detail key tag id inode already exists at org postgresql core queryexecutorimpl receiveerrorresponse queryexecutorimpl java at org postgresql core queryexecutorimpl processresults queryexecutorimpl java at org postgresql core queryexecutorimpl execute queryexecutorimpl java at org postgresql jdbc pgstatement executeinternal pgstatement java at org postgresql jdbc pgstatement execute pgstatement java at org postgresql jdbc pgstatement executewithflags pgstatement java at org postgresql jdbc pgstatement executecachedsql pgstatement java at org postgresql jdbc pgstatement executewithflags pgstatement java at org postgresql jdbc pgstatement execute pgstatement java at com zaxxer hikari pool proxystatement execute proxystatement java at com zaxxer hikari pool hikariproxystatement execute hikariproxystatement java at com dotmarketing common db dotconnect executestatement dotconnect java at com dotmarketing startup abstractjdbcstartuptask executeupgrade abstractjdbcstartuptask java more steps to reproduce load the custom db run insert into db version values now start dotcms acceptance criteria database should be updated cleanly dotcms version proposed objective cloud engineering proposed priority please select external links slack conversations support tickets figma designs etc no response assumptions initiation needs no response sub tasks estimates no response
1
76,105
26,243,128,203
IssuesEvent
2023-01-05 13:13:25
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
BlockUI: incorrect overlay position for block with padding
:lady_beetle: defect workaround
### Describe the bug When a blocked element uses padding, the overlay is displayed incorrectly. The size is correct but the position should be shifted. The behavior is correct when the padding is 0. ### Reproducer The issue can be reproduced with the showcase of PrimeFaces 12: 1. Open the blockUI showcase (PrimeFaces 12): https://www.primefaces.org/showcase/ui/misc/blockUI.xhtml 2. Open developer tools to update the CSS (add padding on the blocked elements) 3. Example CSS for Basic: `body .ui-panel { padding: 20px; }` 4. Example CSS for Custom Content: `div.ui-datatable { padding: 20px; }` 5. Check if the Basic overlay covers the panel (Save button). 6. Actual result: Basic overlay size is OK but position should move (to bottom right) 7. Check if the Custom Content overlay covers the table (row navigation buttons) 8. Actual result: Custom Content overlay size is OK but position should move (to bottom right) ### Expected behavior In both cases (Basic or Custom Content): - The overlay should cover the blocked element (panel or table) - The overlay position should take the padding into account (using outerWidth/Height rather than width/height) ### PrimeFaces edition Community ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) Chrome 108.0.5359.125
1.0
BlockUI: incorrect overlay position for block with padding - ### Describe the bug When a blocked element uses padding, the overlay is displayed incorrectly. The size is correct but the position should be shifted. The behavior is correct when the padding is 0. ### Reproducer The issue can be reproduced with the showcase of PrimeFaces 12: 1. Open the blockUI showcase (PrimeFaces 12): https://www.primefaces.org/showcase/ui/misc/blockUI.xhtml 2. Open developer tools to update the CSS (add padding on the blocked elements) 3. Example CSS for Basic: `body .ui-panel { padding: 20px; }` 4. Example CSS for Custom Content: `div.ui-datatable { padding: 20px; }` 5. Check if the Basic overlay covers the panel (Save button). 6. Actual result: Basic overlay size is OK but position should move (to bottom right) 7. Check if the Custom Content overlay covers the table (row navigation buttons) 8. Actual result: Custom Content overlay size is OK but position should move (to bottom right) ### Expected behavior In both cases (Basic or Custom Content): - The overlay should cover the blocked element (panel or table) - The overlay position should take the padding into account (using outerWidth/Height rather than width/height) ### PrimeFaces edition Community ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) Chrome 108.0.5359.125
defect
blockui incorrect overlay position for block with padding describe the bug when a blocked element uses padding the overlay is displayed incorrectly the size is correct but the position should be shifted the behavior is correct when the padding is reproducer the issue can be reproduced with the showcase of primefaces open the blockui showcase primefaces open developer tools to update the css add padding on the blocked elements example css for basic body ui panel padding example css for custom content div ui datatable padding check if the basic overlay covers the panel save button actual result basic overlay size is ok but position should move to bottom right check if the custom content overlay covers the table row navigation buttons actual result custom content overlay size is ok but position should move to bottom right expected behavior in both cases basic or custom content the overlay should cover the blocked element panel or table the overlay position should take the padding into account using outerwidth height rather than width height primefaces edition community primefaces version theme no response jsf implementation mojarra jsf version java version browser s chrome
1
3,672
9,967,535,259
IssuesEvent
2019-07-08 13:48:51
WITPASS/WebApp
https://api.github.com/repos/WITPASS/WebApp
opened
Update docker compose for micro-services infrastructure
architecture task
We want to run whole solution with single docker compose command instead of running individual projects during development due to many micro-services communicating with each other on event bus.
1.0
Update docker compose for micro-services infrastructure - We want to run whole solution with single docker compose command instead of running individual projects during development due to many micro-services communicating with each other on event bus.
non_defect
update docker compose for micro services infrastructure we want to run whole solution with single docker compose command instead of running individual projects during development due to many micro services communicating with each other on event bus
0
150,949
19,634,313,720
IssuesEvent
2022-01-08 02:15:29
xmidt-org/release-builder-action
https://api.github.com/repos/xmidt-org/release-builder-action
opened
CVE-2021-38561 (High) detected in github.com/golang/text-v0.3.3
security vulnerability
## CVE-2021-38561 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/text-v0.3.3</b></p></summary> <p>[mirror] Go text processing support</p> <p> Dependency Hierarchy: - github.com/spf13/afero-v1.6.0 (Root Library) - :x: **github.com/golang/text-v0.3.3** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/xmidt-org/release-builder-action/commit/d6d7f85bdfbb7f48f30a854f13f8b8f94e3698d1">d6d7f85bdfbb7f48f30a854f13f8b8f94e3698d1</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Due to improper index calculation, an incorrectly formatted language tag can cause Parse to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs, this may be used as a vector for a denial of service attack. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38561>CVE-2021-38561</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: v0.3.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-38561 (High) detected in github.com/golang/text-v0.3.3 - ## CVE-2021-38561 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/text-v0.3.3</b></p></summary> <p>[mirror] Go text processing support</p> <p> Dependency Hierarchy: - github.com/spf13/afero-v1.6.0 (Root Library) - :x: **github.com/golang/text-v0.3.3** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/xmidt-org/release-builder-action/commit/d6d7f85bdfbb7f48f30a854f13f8b8f94e3698d1">d6d7f85bdfbb7f48f30a854f13f8b8f94e3698d1</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Due to improper index calculation, an incorrectly formatted language tag can cause Parse to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs, this may be used as a vector for a denial of service attack. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38561>CVE-2021-38561</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: v0.3.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in github com golang text cve high severity vulnerability vulnerable library github com golang text go text processing support dependency hierarchy github com afero root library x github com golang text vulnerable library found in head commit a href found in base branch main vulnerability details due to improper index calculation an incorrectly formatted language tag can cause parse to panic due to an out of bounds read if parse is used to process untrusted user inputs this may be used as a vector for a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
59,661
17,023,195,305
IssuesEvent
2021-07-03 00:48:28
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
ways not connected when placed over existing node
Component: potlatch (flash editor) Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 6.48am, Monday, 7th January 2008]** not sure if this has been resolved as I notice the release of Potlatch has changed a few times since I have noticed this bug. I found that when I was joining ways to another way the these were not connected when I clicked on existing Node. ie if the intersection contained a "blue" node it didn't replace that node with a connection between ways/roads. I have just noticed these but probably crafted them a few months ago. I can't recreate so perhaps it was fixed.
1.0
ways not connected when placed over existing node - **[Submitted to the original trac issue database at 6.48am, Monday, 7th January 2008]** not sure if this has been resolved as I notice the release of Potlatch has changed a few times since I have noticed this bug. I found that when I was joining ways to another way the these were not connected when I clicked on existing Node. ie if the intersection contained a "blue" node it didn't replace that node with a connection between ways/roads. I have just noticed these but probably crafted them a few months ago. I can't recreate so perhaps it was fixed.
defect
ways not connected when placed over existing node not sure if this has been resolved as i notice the release of potlatch has changed a few times since i have noticed this bug i found that when i was joining ways to another way the these were not connected when i clicked on existing node ie if the intersection contained a blue node it didn t replace that node with a connection between ways roads i have just noticed these but probably crafted them a few months ago i can t recreate so perhaps it was fixed
1
12,216
2,685,507,712
IssuesEvent
2015-03-30 01:55:01
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
Fatal error: Call to a member function schemaCollection() on a non-object in ..../cake3/vendor/cakephp/cakephp/src/ORM/Table.php on line 419
Defect ORM
Recently I was adding Acl plugin to CakePHP3 and have got into the situation of: **Fatal error: Call to a member function schemaCollection() on a non-object in ..../cake3/vendor/cakephp/cakephp/src/ORM/Table.php on line 419** This is because **public function connection($conn = null)** returned connection as string 'default' instead of the object. ```php /** * Returns the connection instance or sets a new one * * @param \Cake\Database\Connection|null $conn The new connection instance * @return \Cake\Database\Connection */ public function connection($conn = null) { if ($conn === null) { return $this->_connection; } return $this->_connection = $conn; } } ``` i was able to fix it by replacing the returning value with: ```php public function connection($conn = null) { if ($conn === null) { // return $this->_connection; return is_object($this->_connection) ? $this->_connection : ConnectionManager::get($this->_connection ); } return $this->_connection = $conn; } } ``` Could it be a bug?
1.0
Fatal error: Call to a member function schemaCollection() on a non-object in ..../cake3/vendor/cakephp/cakephp/src/ORM/Table.php on line 419 - Recently I was adding Acl plugin to CakePHP3 and have got into the situation of: **Fatal error: Call to a member function schemaCollection() on a non-object in ..../cake3/vendor/cakephp/cakephp/src/ORM/Table.php on line 419** This is because **public function connection($conn = null)** returned connection as string 'default' instead of the object. ```php /** * Returns the connection instance or sets a new one * * @param \Cake\Database\Connection|null $conn The new connection instance * @return \Cake\Database\Connection */ public function connection($conn = null) { if ($conn === null) { return $this->_connection; } return $this->_connection = $conn; } } ``` i was able to fix it by replacing the returning value with: ```php public function connection($conn = null) { if ($conn === null) { // return $this->_connection; return is_object($this->_connection) ? $this->_connection : ConnectionManager::get($this->_connection ); } return $this->_connection = $conn; } } ``` Could it be a bug?
defect
fatal error call to a member function schemacollection on a non object in vendor cakephp cakephp src orm table php on line recently i was adding acl plugin to and have got into the situation of fatal error call to a member function schemacollection on a non object in vendor cakephp cakephp src orm table php on line this is because public function connection conn null returned connection as string default instead of the object php returns the connection instance or sets a new one param cake database connection null conn the new connection instance return cake database connection public function connection conn null if conn null return this connection return this connection conn i was able to fix it by replacing the returning value with php public function connection conn null if conn null return this connection return is object this connection this connection connectionmanager get this connection return this connection conn could it be a bug
1
7,553
2,610,405,203
IssuesEvent
2015-02-26 20:11:41
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Hystorical Reference Errors
auto-migrated Priority-Medium Type-Defect
``` In the historical references of Muunilinst and Trandosha there are the following errors: Muunilinst is written as being the financial core of the Empire, which did not exist at the time. Change it either to the InterGalactic Banking Clan or the Republic/CIS. Trandosha has the following typo: "Trandosha is in the same planetary system as Kashyyyk the Wookiee homeworld." There should be a comma after Kashyyyk. ``` ----- Original issue reported on code.google.com by `jkouzman...@gmail.com` on 8 Jul 2011 at 8:48
1.0
Hystorical Reference Errors - ``` In the historical references of Muunilinst and Trandosha there are the following errors: Muunilinst is written as being the financial core of the Empire, which did not exist at the time. Change it either to the InterGalactic Banking Clan or the Republic/CIS. Trandosha has the following typo: "Trandosha is in the same planetary system as Kashyyyk the Wookiee homeworld." There should be a comma after Kashyyyk. ``` ----- Original issue reported on code.google.com by `jkouzman...@gmail.com` on 8 Jul 2011 at 8:48
defect
hystorical reference errors in the historical references of muunilinst and trandosha there are the following errors muunilinst is written as being the financial core of the empire which did not exist at the time change it either to the intergalactic banking clan or the republic cis trandosha has the following typo trandosha is in the same planetary system as kashyyyk the wookiee homeworld there should be a comma after kashyyyk original issue reported on code google com by jkouzman gmail com on jul at
1
43,909
11,880,086,842
IssuesEvent
2020-03-27 10:01:44
contao/contao
https://api.github.com/repos/contao/contao
closed
Von der Synchronisation ausgenommene Ordner und Dateien in FileTree-Widget auswählbar
defect
Im ``FileTree``-Widget sind auch von der Synchronisation ausgenommene Ordner und Dateien auswählbar. Nach Klick auf den Button ``Anwenden`` geht die Auswahl verloren.
1.0
Von der Synchronisation ausgenommene Ordner und Dateien in FileTree-Widget auswählbar - Im ``FileTree``-Widget sind auch von der Synchronisation ausgenommene Ordner und Dateien auswählbar. Nach Klick auf den Button ``Anwenden`` geht die Auswahl verloren.
defect
von der synchronisation ausgenommene ordner und dateien in filetree widget auswählbar im filetree widget sind auch von der synchronisation ausgenommene ordner und dateien auswählbar nach klick auf den button anwenden geht die auswahl verloren
1
30,801
6,288,471,089
IssuesEvent
2017-07-19 17:02:41
googlei18n/libphonenumber
https://api.github.com/repos/googlei18n/libphonenumber
closed
format international numbers with preferred Carrier code (dialing out of Brazil to another country)
priority-medium type-defect
Imported from [Google Code issue #503](https://code.google.com/p/libphonenumber/issues/detail?id=503) created by [kohangel](https://code.google.com/u/118014392314681731740/) on 2014-08-16T12:40:20.000Z: --- <b>What steps will reproduce the problem?</b> 1.missing formatINTERNATIONALNumberWithPreferredCarrierCode() method for use to dial out of Brazil <b>2.</b> <b>3.</b> <b>What is the expected output? What do you see instead?</b> currently, there is a method formatNationalNumberWithPreferredCarrierCode, which works well for formatting Brazilian numbers, for dialing within Brazil. however, there is no other method that we can use to add the preferred Carrier Code for International numbers. <b>What version of the product are you using? On what operating system?</b> geocoder 2.12, libphonenumber 6.1 <b>Please provide any additional information below.</b> For example, to call the number 555-0123 in Washington, D.C. (area code 202), United States (country code 1), using TIM as the chosen carrier (selection code 41), one would dial 00 41 1 202 555 0123.
1.0
format international numbers with preferred Carrier code (dialing out of Brazil to another country) - Imported from [Google Code issue #503](https://code.google.com/p/libphonenumber/issues/detail?id=503) created by [kohangel](https://code.google.com/u/118014392314681731740/) on 2014-08-16T12:40:20.000Z: --- <b>What steps will reproduce the problem?</b> 1.missing formatINTERNATIONALNumberWithPreferredCarrierCode() method for use to dial out of Brazil <b>2.</b> <b>3.</b> <b>What is the expected output? What do you see instead?</b> currently, there is a method formatNationalNumberWithPreferredCarrierCode, which works well for formatting Brazilian numbers, for dialing within Brazil. however, there is no other method that we can use to add the preferred Carrier Code for International numbers. <b>What version of the product are you using? On what operating system?</b> geocoder 2.12, libphonenumber 6.1 <b>Please provide any additional information below.</b> For example, to call the number 555-0123 in Washington, D.C. (area code 202), United States (country code 1), using TIM as the chosen carrier (selection code 41), one would dial 00 41 1 202 555 0123.
defect
format international numbers with preferred carrier code dialing out of brazil to another country imported from created by on what steps will reproduce the problem missing formatinternationalnumberwithpreferredcarriercode method for use to dial out of brazil what is the expected output what do you see instead currently there is a method formatnationalnumberwithpreferredcarriercode which works well for formatting brazilian numbers for dialing within brazil however there is no other method that we can use to add the preferred carrier code for international numbers what version of the product are you using on what operating system geocoder libphonenumber please provide any additional information below for example to call the number in washington d c area code united states country code using tim as the chosen carrier selection code one would dial
1
487,525
14,047,840,815
IssuesEvent
2020-11-02 07:53:41
AY2021S1-CS2103T-W13-3/tp
https://api.github.com/repos/AY2021S1-CS2103T-W13-3/tp
closed
[PE-D] Minor issue with UI
priority.Medium type.Enhancement
Size of schedule tab is different from the other tabs. When navigating from schedule tab to the other tabs, the window becomes empty at one side. Minor problem which doesnt affect functionality though! Just a suggestion to fix the window size :) Would like to add that your UI looks really nice!! Good job and all the best for actual PE! ![image.png](https://raw.githubusercontent.com/g-erm/ped/main/files/9ce30126-cf63-4294-bf6c-3ea900bd30a2.png) <!--session: 1604044865900-bba9a391-864c-42da-8277-a15b7774995a--> ------------- Labels: `severity.VeryLow` `type.FeatureFlaw` original: g-erm/ped#11
1.0
[PE-D] Minor issue with UI - Size of schedule tab is different from the other tabs. When navigating from schedule tab to the other tabs, the window becomes empty at one side. Minor problem which doesnt affect functionality though! Just a suggestion to fix the window size :) Would like to add that your UI looks really nice!! Good job and all the best for actual PE! ![image.png](https://raw.githubusercontent.com/g-erm/ped/main/files/9ce30126-cf63-4294-bf6c-3ea900bd30a2.png) <!--session: 1604044865900-bba9a391-864c-42da-8277-a15b7774995a--> ------------- Labels: `severity.VeryLow` `type.FeatureFlaw` original: g-erm/ped#11
non_defect
minor issue with ui size of schedule tab is different from the other tabs when navigating from schedule tab to the other tabs the window becomes empty at one side minor problem which doesnt affect functionality though just a suggestion to fix the window size would like to add that your ui looks really nice good job and all the best for actual pe labels severity verylow type featureflaw original g erm ped
0
27,833
4,331,284,286
IssuesEvent
2016-07-26 23:01:22
Microsoft/vscode
https://api.github.com/repos/Microsoft/vscode
closed
Test: Integrated terminal IME support
testplan-item
Test for #7045: - [x] Windows - **@rebornix** - [x] OS X - **@kieferrm** - [x] Linux - **@joaomoreno** The integrated terminal now supports input via IMEs (Input Method Editors). During development I tested this across all 3 OS' with Japanese and Korean IMEs as I'm familiar with them. Here are some things to try: - Install/enable an IME for the OS, preferably not Japanese/Korean - Test input, if you're familiar with the IME try all the edge cases - Multiple character compositions? - Multiple keystrokes per character? - Multiple characters reducing to a single character? - Test executing a command using the text (eg. `echo`) See https://github.com/sourcelair/xterm.js/pull/175 for more details on the change and testing that was performed at implementation time.
1.0
Test: Integrated terminal IME support - Test for #7045: - [x] Windows - **@rebornix** - [x] OS X - **@kieferrm** - [x] Linux - **@joaomoreno** The integrated terminal now supports input via IMEs (Input Method Editors). During development I tested this across all 3 OS' with Japanese and Korean IMEs as I'm familiar with them. Here are some things to try: - Install/enable an IME for the OS, preferably not Japanese/Korean - Test input, if you're familiar with the IME try all the edge cases - Multiple character compositions? - Multiple keystrokes per character? - Multiple characters reducing to a single character? - Test executing a command using the text (eg. `echo`) See https://github.com/sourcelair/xterm.js/pull/175 for more details on the change and testing that was performed at implementation time.
non_defect
test integrated terminal ime support test for windows rebornix os x kieferrm linux joaomoreno the integrated terminal now supports input via imes input method editors during development i tested this across all os with japanese and korean imes as i m familiar with them here are some things to try install enable an ime for the os preferably not japanese korean test input if you re familiar with the ime try all the edge cases multiple character compositions multiple keystrokes per character multiple characters reducing to a single character test executing a command using the text eg echo see for more details on the change and testing that was performed at implementation time
0
63,005
17,314,603,658
IssuesEvent
2021-07-27 03:08:34
milvus-io/milvus-insight
https://api.github.com/repos/milvus-io/milvus-insight
opened
Vector search: when there is no collection, it shouldn't be a blank dropdown.
defect
![image](https://user-images.githubusercontent.com/185051/127089044-0f8ec51e-6a35-433c-9484-5e41441e05d8.png) We should tell user, `No collection`
1.0
Vector search: when there is no collection, it shouldn't be a blank dropdown. - ![image](https://user-images.githubusercontent.com/185051/127089044-0f8ec51e-6a35-433c-9484-5e41441e05d8.png) We should tell user, `No collection`
defect
vector search when there is no collection it shouldn t be a blank dropdown we should tell user no collection
1
35,798
7,801,512,432
IssuesEvent
2018-06-09 22:14:13
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
Case ARM 3 year crashes due to floating-point overflow with the current code (Trac #681)
Migrated from Trac betlej@uwm.edu clubb_src defect
'''Description''' We implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time. One configuration that runs for at least a few weeks is: 128 level grid Morrison microphysics dt_main = 6sec dt_rad = 60sec The temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration. Attachments: http://carson.math.uwm.edu/trac/clubb/attachment/ticket/681/cloud_frac.png http://carson.math.uwm.edu/trac/clubb/attachment/ticket/681/arm_3year_25.eps.jpg Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/681 ```json { "status": "closed", "changetime": "2014-08-13T19:55:10", "description": "'''Description'''\nWe implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.\nOne configuration that runs for at least a few weeks is:\n128 level grid\nMorrison microphysics\ndt_main = 6sec\ndt_rad = 60sec\nThe temperatures still seem very low and the profile of total water and theta both look unstable. 
This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.", "reporter": "dschanen@uwm.edu", "cc": "vlarson@uwm.edu, raut@uwm.edu", "resolution": "fixed", "_ts": "1407959710652186", "component": "clubb_src", "summary": "Case ARM 3 year crashes due to floating-point overflow with the current code", "priority": "minor", "keywords": "", "time": "2014-05-05T22:32:36", "milestone": "", "owner": "betlej@uwm.edu", "type": "defect" } ```
1.0
Case ARM 3 year crashes due to floating-point overflow with the current code (Trac #681) - '''Description''' We implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time. One configuration that runs for at least a few weeks is: 128 level grid Morrison microphysics dt_main = 6sec dt_rad = 60sec The temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration. Attachments: http://carson.math.uwm.edu/trac/clubb/attachment/ticket/681/cloud_frac.png http://carson.math.uwm.edu/trac/clubb/attachment/ticket/681/arm_3year_25.eps.jpg Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/681 ```json { "status": "closed", "changetime": "2014-08-13T19:55:10", "description": "'''Description'''\nWe implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.\nOne configuration that runs for at least a few weeks is:\n128 level grid\nMorrison microphysics\ndt_main = 6sec\ndt_rad = 60sec\nThe temperatures still seem very low and the profile of total water and theta both look unstable. 
This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.", "reporter": "dschanen@uwm.edu", "cc": "vlarson@uwm.edu, raut@uwm.edu", "resolution": "fixed", "_ts": "1407959710652186", "component": "clubb_src", "summary": "Case ARM 3 year crashes due to floating-point overflow with the current code", "priority": "minor", "keywords": "", "time": "2014-05-05T22:32:36", "milestone": "", "owner": "betlej@uwm.edu", "type": "defect" } ```
defect
case arm year crashes due to floating point overflow with the current code trac description we implemented a year long simulation in clubb standalone years ago to test the code recently i attempted to run this case to see if our statistics code works correctly with infrequent one every simulated week output to my surprise this case no longer works it appears that temperatures close to the surface become unrealistically cold and cause a floating point error i think this case would run at one time one configuration that runs for at least a few weeks is level grid morrison microphysics dt main dt rad the temperatures still seem very low and the profile of total water and theta both look unstable this case is probably not of much interest at this point but i wonder if we changed something in how we interface with the microphysics since the case was added it crashes in less than a week in the default configuration attachments migrated from json status closed changetime description description nwe implemented a year long simulation in clubb standalone years ago to test the code recently i attempted to run this case to see if our statistics code works correctly with infrequent one every simulated week output to my surprise this case no longer works it appears that temperatures close to the surface become unrealistically cold and cause a floating point error i think this case would run at one time none configuration that runs for at least a few weeks is level grid nmorrison microphysics ndt main ndt rad nthe temperatures still seem very low and the profile of total water and theta both look unstable this case is probably not of much interest at this point but i wonder if we changed something in how we interface with the microphysics since the case was added it crashes in less than a week in the default configuration reporter dschanen uwm edu cc vlarson uwm edu raut uwm edu resolution fixed ts component clubb src summary case arm year crashes due to floating point overflow 
with the current code priority minor keywords time milestone owner betlej uwm edu type defect
1
431,459
30,234,441,574
IssuesEvent
2023-07-06 09:15:13
synthead/timex_datalink_client
https://api.github.com/repos/synthead/timex_datalink_client
closed
Inconsistent examples in protocol 7 documentation for phone numbers
bug documentation protocol 7
This is in the documentation for protocol 7 phone numbers: > ## Phone Numbers > > ![image](https://user-images.githubusercontent.com/820984/207311988-ff21dc2e-2530-4437-a6fa-93e7c24be591.png) > > ```ruby > phrase_builder = TimexDatalinkClient::Protocol7::PhraseBuilder.new(database: "pcvocab.mdb") > > dog_sitter = phrase_builder.vocab_ids_for("Dog", "Sitter") > mom_and_dad = phrase_builder.vocab_ids_for("Mom", "And", "Dad") > > phone_numbers = [ > TimexDatalinkClient::Protocol7::Eeprom::PhoneNumber.new( > name: dog_sitter, > number: "8675309" > ), > TimexDatalinkClient::Protocol7::Eeprom::PhoneNumber.new( > name: mom_and_dad, > number: "7133659900" > ) > ] > > TimexDatalinkClient::Protocol7::Eeprom.new(phone_numbers: phone_numbers) > ``` The second name in the example is "eBrain Mom And Dad", but the example only uses "Mom And Dad".
1.0
Inconsistent examples in protocol 7 documentation for phone numbers - This is in the documentation for protocol 7 phone numbers: > ## Phone Numbers > > ![image](https://user-images.githubusercontent.com/820984/207311988-ff21dc2e-2530-4437-a6fa-93e7c24be591.png) > > ```ruby > phrase_builder = TimexDatalinkClient::Protocol7::PhraseBuilder.new(database: "pcvocab.mdb") > > dog_sitter = phrase_builder.vocab_ids_for("Dog", "Sitter") > mom_and_dad = phrase_builder.vocab_ids_for("Mom", "And", "Dad") > > phone_numbers = [ > TimexDatalinkClient::Protocol7::Eeprom::PhoneNumber.new( > name: dog_sitter, > number: "8675309" > ), > TimexDatalinkClient::Protocol7::Eeprom::PhoneNumber.new( > name: mom_and_dad, > number: "7133659900" > ) > ] > > TimexDatalinkClient::Protocol7::Eeprom.new(phone_numbers: phone_numbers) > ``` The second name in the example is "eBrain Mom And Dad", but the example only uses "Mom And Dad".
non_defect
inconsistent examples in protocol documentation for phone numbers this is in the documentation for protocol phone numbers phone numbers ruby phrase builder timexdatalinkclient phrasebuilder new database pcvocab mdb dog sitter phrase builder vocab ids for dog sitter mom and dad phrase builder vocab ids for mom and dad phone numbers timexdatalinkclient eeprom phonenumber new name dog sitter number timexdatalinkclient eeprom phonenumber new name mom and dad number timexdatalinkclient eeprom new phone numbers phone numbers the second name in the example is ebrain mom and dad but the example only uses mom and dad
0
60,091
17,023,332,793
IssuesEvent
2021-07-03 01:28:45
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Preferences API call seemingly hard-coded to API0.5?
Component: merkaartor Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 7.36pm, Thursday, 11th December 2008]** On starting merkaartor with the API0.6 checkbox in prefs, will request /api/0.5/user/preferences from the api06 sever (as set in prefs).
1.0
Preferences API call seemingly hard-coded to API0.5? - **[Submitted to the original trac issue database at 7.36pm, Thursday, 11th December 2008]** On starting merkaartor with the API0.6 checkbox in prefs, will request /api/0.5/user/preferences from the api06 sever (as set in prefs).
defect
preferences api call seemingly hard coded to on starting merkaartor with the checkbox in prefs will request api user preferences from the sever as set in prefs
1
18,766
13,097,178,274
IssuesEvent
2020-08-03 16:58:28
eclipse-ee4j/jersey
https://api.github.com/repos/eclipse-ee4j/jersey
closed
Maven metadata doesn't contain all versions of jersey
Component: infrastructure
I'm getting this error when I'm building my jersey application. ``` Could not resolve dependencies for project companyjaxrs2retrofit:jar:1.0-SNAPSHOT: Failed to collect dependencies at org.glassfish.jersey.media:jersey-media-multipart:jar:[2.25.1,2.25.1]: No versions available for org.glassfish.jersey.media:jersey-media-multipart:jar:[2.25.1,2.25.1] within specified range ``` There are no versions in https://repo1.maven.org/maven2/org/glassfish/jersey/media/jersey-media-multipart/maven-metadata.xml <img width="483" alt="image" src="https://user-images.githubusercontent.com/3828802/79851153-99cb6700-83cd-11ea-9c01-94b1b05f4286.png"> For example Guava has all old versions in their metadata https://repo.maven.apache.org/maven2/com/google/guava/guava/maven-metadata.xml <img width="429" alt="image" src="https://user-images.githubusercontent.com/3828802/79851186-a3ed6580-83cd-11ea-9f74-d1a8057c964d.png"> This error occurred after 3.0.0-M1 release
1.0
Maven metadata doesn't contain all versions of jersey - I'm getting this error when I'm building my jersey application. ``` Could not resolve dependencies for project companyjaxrs2retrofit:jar:1.0-SNAPSHOT: Failed to collect dependencies at org.glassfish.jersey.media:jersey-media-multipart:jar:[2.25.1,2.25.1]: No versions available for org.glassfish.jersey.media:jersey-media-multipart:jar:[2.25.1,2.25.1] within specified range ``` There are no versions in https://repo1.maven.org/maven2/org/glassfish/jersey/media/jersey-media-multipart/maven-metadata.xml <img width="483" alt="image" src="https://user-images.githubusercontent.com/3828802/79851153-99cb6700-83cd-11ea-9c01-94b1b05f4286.png"> For example Guava has all old versions in their metadata https://repo.maven.apache.org/maven2/com/google/guava/guava/maven-metadata.xml <img width="429" alt="image" src="https://user-images.githubusercontent.com/3828802/79851186-a3ed6580-83cd-11ea-9f74-d1a8057c964d.png"> This error occurred after 3.0.0-M1 release
non_defect
maven metadata doesn t contain all versions of jersey i m getting this error when i m building my jersey application could not resolve dependencies for project jar snapshot failed to collect dependencies at org glassfish jersey media jersey media multipart jar no versions available for org glassfish jersey media jersey media multipart jar within specified range there are no versions in img width alt image src for example guava has all old versions in their metadata img width alt image src this error occurred after release
0
51,643
13,207,555,339
IssuesEvent
2020-08-14 23:34:30
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
Make std-processing use lazy imports (Trac #762)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/762">https://code.icecube.wisc.edu/projects/icecube/ticket/762</a>, reported by jvansanten</summary> <p> ```json { "status": "closed", "changetime": "2014-10-20T16:13:40", "_ts": "1413821620554162", "description": "Currently most of the Python modules in std-processing import projects at the top of the file. This is bad, because __init__.py imports every module, and as a consequence, you can't even read values out of level2_globals if you don't have, say, the dst-extractor project.\n\nProjects should only be imported within the tray segments where they're used.", "reporter": "jvansanten", "cc": "david.schultz@icecube.wisc.edu", "resolution": "fixed", "time": "2014-09-18T14:33:35", "component": "combo reconstruction", "summary": "Make std-processing use lazy imports", "priority": "normal", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
1.0
Make std-processing use lazy imports (Trac #762) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/762">https://code.icecube.wisc.edu/projects/icecube/ticket/762</a>, reported by jvansanten</summary> <p> ```json { "status": "closed", "changetime": "2014-10-20T16:13:40", "_ts": "1413821620554162", "description": "Currently most of the Python modules in std-processing import projects at the top of the file. This is bad, because __init__.py imports every module, and as a consequence, you can't even read values out of level2_globals if you don't have, say, the dst-extractor project.\n\nProjects should only be imported within the tray segments where they're used.", "reporter": "jvansanten", "cc": "david.schultz@icecube.wisc.edu", "resolution": "fixed", "time": "2014-09-18T14:33:35", "component": "combo reconstruction", "summary": "Make std-processing use lazy imports", "priority": "normal", "keywords": "", "milestone": "", "owner": "", "type": "defect" } ``` </p> </details>
defect
make std processing use lazy imports trac migrated from json status closed changetime ts description currently most of the python modules in std processing import projects at the top of the file this is bad because init py imports every module and as a consequence you can t even read values out of globals if you don t have say the dst extractor project n nprojects should only be imported within the tray segments where they re used reporter jvansanten cc david schultz icecube wisc edu resolution fixed time component combo reconstruction summary make std processing use lazy imports priority normal keywords milestone owner type defect
1
59,048
14,527,139,605
IssuesEvent
2020-12-14 15:01:14
pgRouting/pgrouting
https://api.github.com/repos/pgRouting/pgrouting
closed
Remove warnings when using clang compiler
Build CI clang
**Is your feature request related to a problem? Please describe.** Before attempting to fix #1725 remove warning from compilation using `clang` **Describe the solution you'd like** - [x] add action that compiles with `clang` - [x] Remove warnings in order of the configuration file
1.0
Remove warnings when using clang compiler - **Is your feature request related to a problem? Please describe.** Before attempting to fix #1725 remove warning from compilation using `clang` **Describe the solution you'd like** - [x] add action that compiles with `clang` - [x] Remove warnings in order of the configuration file
non_defect
remove warnings when using clang compiler is your feature request related to a problem please describe before attempting to fix remove warning from compilation using clang describe the solution you d like add action that compiles with clang remove warnings in order of the configuration file
0
39,977
16,163,466,228
IssuesEvent
2021-05-01 04:00:34
airavata-courses/PingIntelligence
https://api.github.com/repos/airavata-courses/PingIntelligence
closed
Deploying 2 live clusters on IU Jetstream & TACC Jetstream
Kubernetes-installation Service Mesh enhancement
- Deploying clusters using [Milestone 2 automation script](https://github.com/airavata-courses/PingIntelligence/tree/automation-script). - ngrokking the UI URL - Adding new ngrok URL to firebase and domain tables.
1.0
Deploying 2 live clusters on IU Jetstream & TACC Jetstream - - Deploying clusters using [Milestone 2 automation script](https://github.com/airavata-courses/PingIntelligence/tree/automation-script). - ngrokking the UI URL - Adding new ngrok URL to firebase and domain tables.
non_defect
deploying live clusters on iu jetstream tacc jetstream deploying clusters using ngrokking the ui url adding new ngrok url to firebase and domain tables
0
5,152
2,610,182,104
IssuesEvent
2015-02-26 18:58:03
chrsmith/quchuseban
https://api.github.com/repos/chrsmith/quchuseban
opened
力荐怎样能去色斑
auto-migrated Priority-Medium Type-Defect
``` 《摘要》 我不相信生活会无故地变得富裕而有情趣,那只是浪漫的妄想。因为我一生都充满动荡和不安。失败不会主动传授你知识,我通过芸芸众生的无知、胆怯和愚笨来获取真知。有了它们,生活会变得轻松,也更成功。因此,我要和大家分享一些姗姗来迟的道理,期待能让一些人避免重蹈我的覆辙。怎样能去色斑, 《客户案例》   要说起祛斑,相信很多美眉都乐于讨论,今天我可是又�� �说了一个很奇特的祛斑方法哦,这可是亲身体验的妙招啊,� ��定有效的.<br>   今天去一朋友家玩,见朋友妈妈都40岁的人了,皮肤却还 是很好,白皙白皙的,就像30的人呐,按捺不住好奇心,我就 问阿姨怎么保养的这么好,阿姨笑了笑,就给我说了一个独�� �密诏哦!原来阿姨以前皮肤也不怎么好的,比较有那么大的年 纪了嘛,更重要的是脸上也有色斑哦,但是现在一点都看不�� �来呢,后来她听说桂圆可以祛斑,就试试了,每天都喝桂圆� ��糖鹌鹑汤,一直坚持了5年啊,真佩服阿姨的恒心哦!效果也� ��超级棒的,不过得坚持。因为食疗可不是一天两天的事情。 <br>   想想阿姨都40岁了,效果还挺好的,那么我们20出头的年� ��,应该效果更佳的,嘻嘻,家里还有几个桂圆呢,回去做做 看哦,先尝尝味道怎么样!,呵呵,有心的妹妹可以试试哦!<br >   对了,桂圆和鹌鹑及冰糖的比例可以根据个人口味而定�� �没有特别要求 阅读了怎样能去色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 怎样能去色斑,同时为您分享祛斑小方法 介绍一些祛斑妙方供参考, 1.洗脸时,在水中加1-2汤匙的食醋,有减轻色素沉着的作用。 2.将鲜明萝卜辟碎挤汁,取10-30毫升,每日上晚洗完脸后涂抹� ��待干后,洗净。此外,每日喝一杯胡萝卜,可美白肌肤。3.� ��柠檬汁搅汁,加糖水适量饮用。柠檬中含有大量维生素C、�� �、磷、铁等。常饮柠檬汁不仅可美白肌肤,还能使黑色素沉� ��,达到祛斑的作用。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:42
1.0
力荐怎样能去色斑 - ``` 《摘要》 我不相信生活会无故地变得富裕而有情趣,那只是浪漫的妄想。因为我一生都充满动荡和不安。失败不会主动传授你知识,我通过芸芸众生的无知、胆怯和愚笨来获取真知。有了它们,生活会变得轻松,也更成功。因此,我要和大家分享一些姗姗来迟的道理,期待能让一些人避免重蹈我的覆辙。怎样能去色斑, 《客户案例》   要说起祛斑,相信很多美眉都乐于讨论,今天我可是又�� �说了一个很奇特的祛斑方法哦,这可是亲身体验的妙招啊,� ��定有效的.<br>   今天去一朋友家玩,见朋友妈妈都40岁的人了,皮肤却还 是很好,白皙白皙的,就像30的人呐,按捺不住好奇心,我就 问阿姨怎么保养的这么好,阿姨笑了笑,就给我说了一个独�� �密诏哦!原来阿姨以前皮肤也不怎么好的,比较有那么大的年 纪了嘛,更重要的是脸上也有色斑哦,但是现在一点都看不�� �来呢,后来她听说桂圆可以祛斑,就试试了,每天都喝桂圆� ��糖鹌鹑汤,一直坚持了5年啊,真佩服阿姨的恒心哦!效果也� ��超级棒的,不过得坚持。因为食疗可不是一天两天的事情。 <br>   想想阿姨都40岁了,效果还挺好的,那么我们20出头的年� ��,应该效果更佳的,嘻嘻,家里还有几个桂圆呢,回去做做 看哦,先尝尝味道怎么样!,呵呵,有心的妹妹可以试试哦!<br >   对了,桂圆和鹌鹑及冰糖的比例可以根据个人口味而定�� �没有特别要求 阅读了怎样能去色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》   1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐�� �去掉吗?   答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新�� �客都是通过老顾客介绍而来,口碑由此而来!   2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?   
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技�� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!   3,去除黄褐斑之后,会反弹吗?   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌!我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗?   4,你们的价格有点贵,能不能便宜一点?   答:如果您使用西药最少需要2000元,煎服的药最少需要3 000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗   5,我适合用黛芙薇尔精华液吗?   答:黛芙薇尔适用人群:   1、生理紊乱引起的黄褐斑人群   2、生育引起的妊娠斑人群   3、年纪增长引起的老年斑人群   4、化妆品色素沉积、辐射斑人群   5、长期日照引起的日晒斑人群   6、肌肤暗淡急需美白的人群 《祛斑小方法》 怎样能去色斑,同时为您分享祛斑小方法 介绍一些祛斑妙方供参考, 1.洗脸时,在水中加1-2汤匙的食醋,有减轻色素沉着的作用。 2.将鲜明萝卜辟碎挤汁,取10-30毫升,每日上晚洗完脸后涂抹� ��待干后,洗净。此外,每日喝一杯胡萝卜,可美白肌肤。3.� ��柠檬汁搅汁,加糖水适量饮用。柠檬中含有大量维生素C、�� �、磷、铁等。常饮柠檬汁不仅可美白肌肤,还能使黑色素沉� ��,达到祛斑的作用。 ``` ----- Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:42
defect
力荐怎样能去色斑 《摘要》 我不相信生活会无故地变得富裕而有情趣,那只是浪漫的妄想。因为我一生都充满动荡和不安。失败不会主动传授你知识,我通过芸芸众生的无知、胆怯和愚笨来获取真知。有了它们,生活会变得轻松,也更成功。因此,我要和大家分享一些姗姗来迟的道理,期待能让一些人避免重蹈我的覆辙。怎样能去色斑, 《客户案例》   要说起祛斑,相信很多美眉都乐于讨论,今天我可是又�� �说了一个很奇特的祛斑方法哦,这可是亲身体验的妙招啊,� ��定有效的   今天去一朋友家玩, ,皮肤却还 是很好,白皙白皙的, ,按捺不住好奇心,我就 问阿姨怎么保养的这么好,阿姨笑了笑,就给我说了一个独�� �密诏哦 原来阿姨以前皮肤也不怎么好的,比较有那么大的年 纪了嘛,更重要的是脸上也有色斑哦,但是现在一点都看不�� �来呢,后来她听说桂圆可以祛斑,就试试了,每天都喝桂圆� ��糖鹌鹑汤, ,真佩服阿姨的恒心哦 效果也� ��超级棒的,不过得坚持。因为食疗可不是一天两天的事情。    ,效果还挺好的, � ��,应该效果更佳的,嘻嘻,家里还有几个桂圆呢,回去做做 看哦,先尝尝味道怎么样 ,呵呵,有心的妹妹可以试试哦 br   对了,桂圆和鹌鹑及冰糖的比例可以根据个人口味而定�� �没有特别要求 阅读了怎样能去色斑,再看脸上容易长斑的原因: 《色斑形成原因》   内部因素   一、压力   当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。   二、荷尔蒙分泌失调   避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。   三、新陈代谢缓慢   肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。   四、错误的使用化妆品   使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。   外部因素   一、紫外线   照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。   二、不良的清洁习惯   因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。   三、遗传基因   父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》    黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗   答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来    ,服用黛芙薇尔美白,会伤身体吗 有副作用吗   答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� 
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖    ,去除黄褐斑之后,会反弹吗   答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗    ,你们的价格有点贵,能不能便宜一点   答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗    ,我适合用黛芙薇尔精华液吗   答:黛芙薇尔适用人群:    、生理紊乱引起的黄褐斑人群    、生育引起的妊娠斑人群    、年纪增长引起的老年斑人群    、化妆品色素沉积、辐射斑人群    、长期日照引起的日晒斑人群    、肌肤暗淡急需美白的人群 《祛斑小方法》 怎样能去色斑,同时为您分享祛斑小方法 介绍一些祛斑妙方供参考, 洗脸时, ,有减轻色素沉着的作用。 将鲜明萝卜辟碎挤汁, ,每日上晚洗完脸后涂抹� ��待干后,洗净。此外,每日喝一杯胡萝卜,可美白肌肤。 � ��柠檬汁搅汁,加糖水适量饮用。柠檬中含有大量维生素c、�� �、磷、铁等。常饮柠檬汁不仅可美白肌肤,还能使黑色素沉� ��,达到祛斑的作用。 original issue reported on code google com by additive gmail com on jul at
1
1,050
2,602,142,923
IssuesEvent
2015-02-24 05:04:52
gambitph/Page-Builder-Sandwich
https://api.github.com/repos/gambitph/Page-Builder-Sandwich
closed
Portfolio shortcode
high priority shortcode
This should simply follow Jetpack's `portfolio` shortcode: http://en.support.wordpress.com/portfolios/portfolio-shortcode/ Just implement shortcake and put the same attributes there
1.0
Portfolio shortcode - This should simply follow Jetpack's `portfolio` shortcode: http://en.support.wordpress.com/portfolios/portfolio-shortcode/ Just implement shortcake and put the same attributes there
non_defect
portfolio shortcode this should simply follow jetpack s portfolio shortcode just implement shortcake and put the same attributes there
0
229,581
17,572,615,425
IssuesEvent
2021-08-15 01:54:27
UnBArqDsw2021-1/2021.1_G6_Curumim
https://api.github.com/repos/UnBArqDsw2021-1/2021.1_G6_Curumim
closed
Diagrama de pacotes
documentation
### Descrição: Issue direcionada a criação do diagrama de pacotes. ### Tarefas: - [x] Estudar sobre o diagrama de pacotes. - [x] Fazer o diagrama. ### Critérios de aceitação: - [ ] Diagrama criado da maneira correta; - [x] Todos os integrantes derem suas pontuações no Planning Poker.
1.0
Diagrama de pacotes - ### Descrição: Issue direcionada a criação do diagrama de pacotes. ### Tarefas: - [x] Estudar sobre o diagrama de pacotes. - [x] Fazer o diagrama. ### Critérios de aceitação: - [ ] Diagrama criado da maneira correta; - [x] Todos os integrantes derem suas pontuações no Planning Poker.
non_defect
diagrama de pacotes descrição issue direcionada a criação do diagrama de pacotes tarefas estudar sobre o diagrama de pacotes fazer o diagrama critérios de aceitação diagrama criado da maneira correta todos os integrantes derem suas pontuações no planning poker
0
552,155
16,196,235,814
IssuesEvent
2021-05-04 14:53:06
enso-org/ide
https://api.github.com/repos/enso-org/ide
closed
Add an option for passing options to backend.
Good First Issue Priority: Highest Type: Enhancement
### Summary There should be a CLI option which allows to set some set of options to the backend. ### Value <!-- - This section should describe the value of this task. - This value can be for users, to the team, etc. --> ### Specification The example usages should look like: `--backend-opts --opt` `--backend-opts "--opt1 foo --opt2"` ### Acceptance Criteria & Test Cases <!-- - Any criteria that must be satisfied for the task to be accepted. - The test plan for the feature, related to the acceptance criteria. -->
1.0
Add an option for passing options to backend. - ### Summary There should be a CLI option which allows to set some set of options to the backend. ### Value <!-- - This section should describe the value of this task. - This value can be for users, to the team, etc. --> ### Specification The example usages should look like: `--backend-opts --opt` `--backend-opts "--opt1 foo --opt2"` ### Acceptance Criteria & Test Cases <!-- - Any criteria that must be satisfied for the task to be accepted. - The test plan for the feature, related to the acceptance criteria. -->
non_defect
add an option for passing options to backend summary there should be a cli option which allows to set some set of options to the backend value this section should describe the value of this task this value can be for users to the team etc specification the example usages should look like backend opts opt backend opts foo acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria
0
82,040
31,877,066,462
IssuesEvent
2023-09-16 00:51:06
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
zstdcheck fails
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Kernel Version | Architecture | OpenZFS Version |70ea484e3ec56c529c6c5027ffc43840100ce224 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing The `checkstyle` github action always fails, because `make zstdcheck` fails. For example, see [here](https://github.com/openzfs/zfs/actions/runs/6203803487/job/16844968949?pr=15281). ``` ... 0000000000004c80 g F .text 0000000000000010 __pfx_zfs_ZSTD_DCtx_reset 0000000000004d30 g F .text 0000000000000010 __pfx_zfs_ZSTD_sizeof_DStream 0000000000004d90 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodingBufferSize_min 0000000000004dd0 g F .text 0000000000000010 __pfx_zfs_ZSTD_estimateDStreamSize 0000000000004e10 g F .text 0000000000000010 __pfx_zfs_ZSTD_estimateDStreamSize_fromFrame 0000000000005190 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressStream 0000000000005c60 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressStream_simpleArgs zstd/lib/decompress/zstd_decompress_block.o: file format elf64-x86-64 00000000000044f0 g F .text 0000000000000010 __pfx_zfs_ZSTD_getcBlockSize 0000000000004560 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodeLiteralsBlock 00000000000049b0 g F .text 0000000000000010 __pfx_zfs_ZSTD_buildFSETable 0000000000004e90 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodeSeqHeaders 00000000000050d0 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressBlock_internal 0000000000005310 g F 
.text 0000000000000010 __pfx_zfs_ZSTD_checkContinuity 0000000000005370 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressBlock make: *** [Makefile:14103: zstdcheck] Error 2 ``` ### Describe how to reproduce the problem `make zstdcheck` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
1.0
zstdcheck fails - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Distribution Version | Kernel Version | Architecture | OpenZFS Version |70ea484e3ec56c529c6c5027ffc43840100ce224 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing The `checkstyle` github action always fails, because `make zstdcheck` fails. For example, see [here](https://github.com/openzfs/zfs/actions/runs/6203803487/job/16844968949?pr=15281). ``` ... 0000000000004c80 g F .text 0000000000000010 __pfx_zfs_ZSTD_DCtx_reset 0000000000004d30 g F .text 0000000000000010 __pfx_zfs_ZSTD_sizeof_DStream 0000000000004d90 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodingBufferSize_min 0000000000004dd0 g F .text 0000000000000010 __pfx_zfs_ZSTD_estimateDStreamSize 0000000000004e10 g F .text 0000000000000010 __pfx_zfs_ZSTD_estimateDStreamSize_fromFrame 0000000000005190 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressStream 0000000000005c60 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressStream_simpleArgs zstd/lib/decompress/zstd_decompress_block.o: file format elf64-x86-64 00000000000044f0 g F .text 0000000000000010 __pfx_zfs_ZSTD_getcBlockSize 0000000000004560 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodeLiteralsBlock 00000000000049b0 g F .text 0000000000000010 __pfx_zfs_ZSTD_buildFSETable 0000000000004e90 g F .text 0000000000000010 __pfx_zfs_ZSTD_decodeSeqHeaders 00000000000050d0 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressBlock_internal 
0000000000005310 g F .text 0000000000000010 __pfx_zfs_ZSTD_checkContinuity 0000000000005370 g F .text 0000000000000010 __pfx_zfs_ZSTD_decompressBlock make: *** [Makefile:14103: zstdcheck] Error 2 ``` ### Describe how to reproduce the problem `make zstdcheck` ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
zstdcheck fails thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name distribution version kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing the checkstyle github action always fails because make zstdcheck fails for example see g f text pfx zfs zstd dctx reset g f text pfx zfs zstd sizeof dstream g f text pfx zfs zstd decodingbuffersize min g f text pfx zfs zstd estimatedstreamsize g f text pfx zfs zstd estimatedstreamsize fromframe g f text pfx zfs zstd decompressstream g f text pfx zfs zstd decompressstream simpleargs zstd lib decompress zstd decompress block o file format g f text pfx zfs zstd getcblocksize g f text pfx zfs zstd decodeliteralsblock g f text pfx zfs zstd buildfsetable g f text pfx zfs zstd decodeseqheaders g f text pfx zfs zstd decompressblock internal g f text pfx zfs zstd checkcontinuity g f text pfx zfs zstd decompressblock make error describe how to reproduce the problem make zstdcheck include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
78,458
27,530,223,169
IssuesEvent
2023-03-06 21:24:49
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
opened
FE: Tables are being generated with an extra column
Defect Needs refining
## Describe the defect Tables are being generated on the FE with an extra column on the Front end. It looks like it might be an order column as the content in the cells is 0, 1, 2, 3, etc. This column is not in the Edit or View screen in the CMS which is why I think it's somehow being created on the content-build ## Screenshots **Edit screen** **View Screen** ![image](https://user-images.githubusercontent.com/106678594/223233661-b64820c7-ce67-4953-8bd9-2769aeaa05e5.png) **Front end** ![image](https://user-images.githubusercontent.com/106678594/223233365-dc20e066-55ca-42b9-bfe1-f0b5e7333c13.png) ## To Reproduce Steps to reproduce the behavior: 1. Go to https://www.va.gov/education/survivor-dependent-benefits/ 2. Scroll down and notice that the table as a 3rd column with numbers in it 3. Go to https://prod.cms.va.gov/education/survivor-dependent-benefits 4. Scroll down and notice the table does not have this 3rd column ## AC / Expected behavior - [ ] The table on the front end of VA.gov should match what is in the CMS ## Additional context - Note that this ticket was created the day Randi found the issue, March 6th - I went back a bit and found another piece of content that has this same issue, it's only in draft mode but it has this weird 3rd column: http://preview-prod.vfs.va.gov/preview?nodeId=53961 - This was modified on March 2nd - I found a 3rd piece of content with a table that did not have this issue: https://www.va.gov/records/discharge-documents/ - This was most recently modified Feb 28th Was something changed between the 28th and the 2nd that is introducing a bug in the table FE template that introduces this 3rd column when the table is saved? ### Team Please check the team(s) that will do this work. - [ ] `CMS Team` - [x] `Public Websites` - [ ] `Facilities` - [ ] `User support`
1.0
FE: Tables are being generated with an extra column - ## Describe the defect Tables are being generated on the FE with an extra column on the Front end. It looks like it might be an order column as the content in the cells is 0, 1, 2, 3, etc. This column is not in the Edit or View screen in the CMS which is why I think it's somehow being created on the content-build ## Screenshots **Edit screen** **View Screen** ![image](https://user-images.githubusercontent.com/106678594/223233661-b64820c7-ce67-4953-8bd9-2769aeaa05e5.png) **Front end** ![image](https://user-images.githubusercontent.com/106678594/223233365-dc20e066-55ca-42b9-bfe1-f0b5e7333c13.png) ## To Reproduce Steps to reproduce the behavior: 1. Go to https://www.va.gov/education/survivor-dependent-benefits/ 2. Scroll down and notice that the table as a 3rd column with numbers in it 3. Go to https://prod.cms.va.gov/education/survivor-dependent-benefits 4. Scroll down and notice the table does not have this 3rd column ## AC / Expected behavior - [ ] The table on the front end of VA.gov should match what is in the CMS ## Additional context - Note that this ticket was created the day Randi found the issue, March 6th - I went back a bit and found another piece of content that has this same issue, it's only in draft mode but it has this weird 3rd column: http://preview-prod.vfs.va.gov/preview?nodeId=53961 - This was modified on March 2nd - I found a 3rd piece of content with a table that did not have this issue: https://www.va.gov/records/discharge-documents/ - This was most recently modified Feb 28th Was something changed between the 28th and the 2nd that is introducing a bug in the table FE template that introduces this 3rd column when the table is saved? ### Team Please check the team(s) that will do this work. - [ ] `CMS Team` - [x] `Public Websites` - [ ] `Facilities` - [ ] `User support`
defect
fe tables are being generated with an extra column describe the defect tables are being generated on the fe with an extra column on the front end it looks like it might be an order column as the content in the cells is etc this column is not in the edit or view screen in the cms which is why i think it s somehow being created on the content build screenshots edit screen view screen front end to reproduce steps to reproduce the behavior go to scroll down and notice that the table as a column with numbers in it go to scroll down and notice the table does not have this column ac expected behavior the table on the front end of va gov should match what is in the cms additional context note that this ticket was created the day randi found the issue march i went back a bit and found another piece of content that has this same issue it s only in draft mode but it has this weird column this was modified on march i found a piece of content with a table that did not have this issue this was most recently modified feb was something changed between the and the that is introducing a bug in the table fe template that introduces this column when the table is saved team please check the team s that will do this work cms team public websites facilities user support
1
18,572
13,044,462,296
IssuesEvent
2020-07-29 04:46:33
dueapp/Due-macOS
https://api.github.com/repos/dueapp/Due-macOS
closed
Support right-clicking on menu bar icon to open menu
usability
It seems like that's one of the ways users are trying to get to Preferences again. So this, in addition to #7 should help
True
Support right-clicking on menu bar icon to open menu - It seems like that's one of the ways users are trying to get to Preferences again. So this, in addition to #7 should help
non_defect
support right clicking on menu bar icon to open menu it seems like that s one of the ways users are trying to get to preferences again so this in addition to should help
0
4,150
2,610,088,288
IssuesEvent
2015-02-26 18:26:45
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳痤疮怎么祛除比较好
auto-migrated Priority-Medium Type-Defect
``` 深圳痤疮怎么祛除比较好【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:26
1.0
深圳痤疮怎么祛除比较好 - ``` 深圳痤疮怎么祛除比较好【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:26
defect
深圳痤疮怎么祛除比较好 深圳痤疮怎么祛除比较好【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at
1
42,853
11,306,249,125
IssuesEvent
2020-01-18 12:41:27
WebAppTeams/AngularApp
https://api.github.com/repos/WebAppTeams/AngularApp
closed
SECURITY ALERT
Defect
We found a potential security vulnerability in one of your dependencies. - serialize-javascript
1.0
SECURITY ALERT - We found a potential security vulnerability in one of your dependencies. - serialize-javascript
defect
security alert we found a potential security vulnerability in one of your dependencies serialize javascript
1
60,262
17,023,382,923
IssuesEvent
2021-07-03 01:44:33
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Spurious oneway arrow drawn in wrong direction
Component: mapnik Priority: minor Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 3.54pm, Thursday, 9th April 2009]** http://www.openstreetmap.org/?lat=53.793646&lon=-2.988274&zoom=18&layers=B000FTFT At the western end of the slip road from the roundabout to the end of the westbound motorway/beginning of the dual carriageway, there is a oneway arrow drawn in the wrong direction. All the other arrows are correct.
1.0
Spurious oneway arrow drawn in wrong direction - **[Submitted to the original trac issue database at 3.54pm, Thursday, 9th April 2009]** http://www.openstreetmap.org/?lat=53.793646&lon=-2.988274&zoom=18&layers=B000FTFT At the western end of the slip road from the roundabout to the end of the westbound motorway/beginning of the dual carriageway, there is a oneway arrow drawn in the wrong direction. All the other arrows are correct.
defect
spurious oneway arrow drawn in wrong direction at the western end of the slip road from the roundabout to the end of the westbound motorway beginning of the dual carriageway there is a oneway arrow drawn in the wrong direction all the other arrows are correct
1
56,929
7,017,447,979
IssuesEvent
2017-12-21 09:41:12
owncloud/owncloud.org
https://api.github.com/repos/owncloud/owncloud.org
closed
Desktop client's changelog problems
before relaunch Broken Link Design
- [ ] Download links to binaries older than `v2.1.1` are broken (e.g. https://download.owncloud.com/desktop/stable/ownCloud-2.1.0.5683-setup.exe) - [ ] `v2.2.1` links are not even present in the changelog. - [ ] All the Linux download links point to the stable version OBS repository Also, It would be nice to have a delimiter (e.g. horizontal rule), clearly stating from which versions of the client is "deprecated"/have issues with the server: https://doc.owncloud.org/desktop/2.2/introduction.html#introduction: > Because of various technical issues, desktop sync clients older than 1.7 will not allowed to connect and sync with the ownCloud 8.1+ server. It is highly recommended to keep your client updated. Note: some package managers, like chocolatey on Windows, rely on this links to offer some [old versions of the client](https://chocolatey.org/packages/owncloud-client/1.7.1.4382) and therefore are unable to install.
1.0
Desktop client's changelog problems - - [ ] Download links to binaries older than `v2.1.1` are broken (e.g. https://download.owncloud.com/desktop/stable/ownCloud-2.1.0.5683-setup.exe) - [ ] `v2.2.1` links are not even present in the changelog. - [ ] All the Linux download links point to the stable version OBS repository Also, It would be nice to have a delimiter (e.g. horizontal rule), clearly stating from which versions of the client is "deprecated"/have issues with the server: https://doc.owncloud.org/desktop/2.2/introduction.html#introduction: > Because of various technical issues, desktop sync clients older than 1.7 will not allowed to connect and sync with the ownCloud 8.1+ server. It is highly recommended to keep your client updated. Note: some package managers, like chocolatey on Windows, rely on this links to offer some [old versions of the client](https://chocolatey.org/packages/owncloud-client/1.7.1.4382) and therefore are unable to install.
non_defect
desktop client s changelog problems download links to binaries older than are broken e g links are not even present in the changelog all the linux download links point to the stable version obs repository also it would be nice to have a delimiter e g horizontal rule clearly stating from which versions of the client is deprecated have issues with the server because of various technical issues desktop sync clients older than will not allowed to connect and sync with the owncloud server it is highly recommended to keep your client updated note some package managers like chocolatey on windows rely on this links to offer some and therefore are unable to install
0
99,158
4,048,677,208
IssuesEvent
2016-05-23 11:19:17
openshift/origin
https://api.github.com/repos/openshift/origin
opened
oc project cannot recognize invalid contexts
component/cli priority/P3
When trying to switch to a context that doesn't exist: ``` $ oc project test/10-0-2-15:8443/tes error: invalid resource name "test/10-0-2-15:8443/tes": name may not contain "/" ``` `oc project NAME` tries to find the context from a map (kubeconfig) and if it doesn't exist there then (if `NAME` is a context and not a project name) the project client will think it is a resource name. @fabianofranz @deads2k
1.0
oc project cannot recognize invalid contexts - When trying to switch to a context that doesn't exist: ``` $ oc project test/10-0-2-15:8443/tes error: invalid resource name "test/10-0-2-15:8443/tes": name may not contain "/" ``` `oc project NAME` tries to find the context from a map (kubeconfig) and if it doesn't exist there then (if `NAME` is a context and not a project name) the project client will think it is a resource name. @fabianofranz @deads2k
non_defect
oc project cannot recognize invalid contexts when trying to switch to a context that doesn t exist oc project test tes error invalid resource name test tes name may not contain oc project name tries to find the context from a map kubeconfig and if it doesn t exist there then if name is a context and not a project name the project client will think it is a resource name fabianofranz
0
324,289
9,887,177,350
IssuesEvent
2019-06-25 08:38:02
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
calendar.google.com - design is broken
browser-firefox engine-gecko priority-critical
<!-- @browser: Firefox 67.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0 --> <!-- @reported_with: web --> **URL**: https://calendar.google.com/ **Browser / Version**: Firefox 67.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Design is broken **Description**: Scrollable regions in editing overlay are not scrollable **Steps to Reproduce**: The editing overlay has a scrolling feature for long sections using the touchpad/mouse wheel. It isn't working on Firefox 67-68. (Works in Chrome.) [![Screenshot Description](https://webcompat.com/uploads/2019/6/443b9994-031f-4929-bf4b-f9add51c3ba8-thumb.jpg)](https://webcompat.com/uploads/2019/6/443b9994-031f-4929-bf4b-f9add51c3ba8.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
calendar.google.com - design is broken - <!-- @browser: Firefox 67.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0 --> <!-- @reported_with: web --> **URL**: https://calendar.google.com/ **Browser / Version**: Firefox 67.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Design is broken **Description**: Scrollable regions in editing overlay are not scrollable **Steps to Reproduce**: The editing overlay has a scrolling feature for long sections using the touchpad/mouse wheel. It isn't working on Firefox 67-68. (Works in Chrome.) [![Screenshot Description](https://webcompat.com/uploads/2019/6/443b9994-031f-4929-bf4b-f9add51c3ba8-thumb.jpg)](https://webcompat.com/uploads/2019/6/443b9994-031f-4929-bf4b-f9add51c3ba8.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
calendar google com design is broken url browser version firefox operating system windows tested another browser yes problem type design is broken description scrollable regions in editing overlay are not scrollable steps to reproduce the editing overlay has a scrolling feature for long sections using the touchpad mouse wheel it isn t working on firefox works in chrome browser configuration none from with ❤️
0
22,278
3,619,806,082
IssuesEvent
2016-02-08 17:22:05
miracle091/transmission-remote-dotnet
https://api.github.com/repos/miracle091/transmission-remote-dotnet
closed
BUGFIX !! :) for rss feeds .... invalid specified file
Priority-Medium Type-Defect
``` in rss feed .. when you click a movie or sth. you see invalid file specified error. with this file you can add torrents from rss feed without any problem replace file then rebuild the solution ``` Original issue reported on code.google.com by `denizler...@gmail.com` on 29 Dec 2012 at 2:28 Attachments: * [RssForm.cs](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-459/comment-0/RssForm.cs)
1.0
BUGFIX !! :) for rss feeds .... invalid specified file - ``` in rss feed .. when you click a movie or sth. you see invalid file specified error. with this file you can add torrents from rss feed without any problem replace file then rebuild the solution ``` Original issue reported on code.google.com by `denizler...@gmail.com` on 29 Dec 2012 at 2:28 Attachments: * [RssForm.cs](https://storage.googleapis.com/google-code-attachments/transmission-remote-dotnet/issue-459/comment-0/RssForm.cs)
defect
bugfix for rss feeds invalid specified file in rss feed when you click a movie or sth you see invalid file specified error with this file you can add torrents from rss feed without any problem replace file then rebuild the solution original issue reported on code google com by denizler gmail com on dec at attachments
1
2,990
8,670,249,676
IssuesEvent
2018-11-29 16:05:15
poanetwork/blockscout
https://api.github.com/repos/poanetwork/blockscout
opened
Import block reward from beneficiaries
enhancement priority: high team: architecture
We are querying the Parity `block_trace` call but are not extracting the block rewards from each block. We should start importing this data into the `block_rewards` table. This table should also include a `fetched_at` time which indicates if this process has been completed. `NULL` should be used if we have not received a block reward for a particular block. The catchup fetcher should run async looking for any missed block rewards.
1.0
Import block reward from beneficiaries - We are querying the Parity `block_trace` call but are not extracting the block rewards from each block. We should start importing this data into the `block_rewards` table. This table should also include a `fetched_at` time which indicates if this process has been completed. `NULL` should be used if we have not received a block reward for a particular block. The catchup fetcher should run async looking for any missed block rewards.
non_defect
import block reward from beneficiaries we are querying the parity block trace call but are not extracting the block rewards from each block we should start importing this data into the block rewards table this table should also include a fetched at time which indicates if this process has been completed null should be used if we have not received a block reward for a particular block the catchup fetcher should run async looking for any missed block rewards
0
182,365
14,912,522,158
IssuesEvent
2021-01-22 12:48:42
FujiMakoto/saucebot
https://api.github.com/repos/FujiMakoto/saucebot
opened
Provide better error messages
documentation enhancement
Provide better / more helpful error messages, rather than just defaulting to "Sorry, I can't do that!" to everything
1.0
Provide better error messages - Provide better / more helpful error messages, rather than just defaulting to "Sorry, I can't do that!" to everything
non_defect
provide better error messages provide better more helpful error messages rather than just defaulting to sorry i can t do that to everything
0
39,432
19,976,354,002
IssuesEvent
2022-01-29 06:05:08
gem/oq-engine
https://api.github.com/repos/gem/oq-engine
opened
Call `MultiFaultSource.iter_ruptures` only in the workers
performance
A crucial feature of UCERFSource was that .iter_ruptures is called in the workers. We lost this feature when switching to MultiFaultSources, so now we are 320 times slower than needed.
True
Call `MultiFaultSource.iter_ruptures` only in the workers - A crucial feature of UCERFSource was that .iter_ruptures is called in the workers. We lost this feature when switching to MultiFaultSources, so now we are 320 times slower than needed.
non_defect
call multifaultsource iter ruptures only in the workers a crucial feature of ucerfsource was that iter ruptures is called in the workers we lost this feature when switching to multifaultsources so now we are times slower than needed
0
105,289
23,023,966,346
IssuesEvent
2022-07-22 07:44:01
gitpod-io/gitpod
https://api.github.com/repos/gitpod-io/gitpod
closed
Overriddes VSCode config for SSH Config Location
type: bug team: IDE editor: code (desktop)
### Bug description this literally changes the vscode global config for this to work, and it is *quite* annoying to have to switch it back every single time as I have workspaces on remote hardware that I need to be able to access quickly ### Steps to reproduce have `~/.ssh/config` set as your default config *locally* then click "open in vscode" on gitpod, ### Workspace affected _No response_ ### Expected behavior _No response_ ### Example repository _No response_ ### Anything else? It is very annoying and I believe there should be an way to not do this, or rather add an *tempory* config into the pre-defined `.ssh/config` that is then deleted after use
1.0
Overriddes VSCode config for SSH Config Location - ### Bug description this literally changes the vscode global config for this to work, and it is *quite* annoying to have to switch it back every single time as I have workspaces on remote hardware that I need to be able to access quickly ### Steps to reproduce have `~/.ssh/config` set as your default config *locally* then click "open in vscode" on gitpod, ### Workspace affected _No response_ ### Expected behavior _No response_ ### Example repository _No response_ ### Anything else? It is very annoying and I believe there should be an way to not do this, or rather add an *tempory* config into the pre-defined `.ssh/config` that is then deleted after use
non_defect
overriddes vscode config for ssh config location bug description this literally changes the vscode global config for this to work and it is quite annoying to have to switch it back every single time as i have workspaces on remote hardware that i need to be able to access quickly steps to reproduce have ssh config set as your default config locally then click open in vscode on gitpod workspace affected no response expected behavior no response example repository no response anything else it is very annoying and i believe there should be an way to not do this or rather add an tempory config into the pre defined ssh config that is then deleted after use
0
71,077
23,437,598,354
IssuesEvent
2022-08-15 11:42:51
primefaces/primereact
https://api.github.com/repos/primefaces/primereact
closed
PickList - Selection Change properties throw errors
defect 👍 confirmed
Hi, I would like to report a bug. In the documentation for the component PickList, there are two properties for SelectionChange events. onSourceSelectionChange and onTargetSelectionChange are the properties. When you feed in a method into these properties it throws the error below. Cannot read properties of null (reading 'length') DEMO [https://codesandbox.io/s/recursing-wright-erddje?file=/src/demo/PickListDemo.js:2325-2348](https://codesandbox.io/s/recursing-wright-erddje?file=/src/demo/PickListDemo.js:2325-2348) This is what my PickList looks like. It's straight out of the demo. `<PickList source={source} target={target} itemTemplate={itemTemplate} sourceHeader="Available" targetHeader="Selected" sourceStyle={{ height: "342px" }} targetStyle={{ height: "342px" }} onChange={onChange} filterBy="name" sourceFilterPlaceholder="Search by name" onTargetSelectionChange={SelectionChange} />` And this is what my SelectionChange method looks like... `const SelectionChange = (s) => { console.log("Clicked!"); };` It's very simple, and is straight from the demo. But it simply does not work. These properties are well documented in the prime faces documentation. [https://www.primefaces.org/primereact/picklist/](https://www.primefaces.org/primereact/picklist/) -Paul
1.0
PickList - Selection Change properties throw errors - Hi, I would like to report a bug. In the documentation for the component PickList, there are two properties for SelectionChange events. onSourceSelectionChange and onTargetSelectionChange are the properties. When you feed in a method into these properties it throws the error below. Cannot read properties of null (reading 'length') DEMO [https://codesandbox.io/s/recursing-wright-erddje?file=/src/demo/PickListDemo.js:2325-2348](https://codesandbox.io/s/recursing-wright-erddje?file=/src/demo/PickListDemo.js:2325-2348) This is what my PickList looks like. It's straight out of the demo. `<PickList source={source} target={target} itemTemplate={itemTemplate} sourceHeader="Available" targetHeader="Selected" sourceStyle={{ height: "342px" }} targetStyle={{ height: "342px" }} onChange={onChange} filterBy="name" sourceFilterPlaceholder="Search by name" onTargetSelectionChange={SelectionChange} />` And this is what my SelectionChange method looks like... `const SelectionChange = (s) => { console.log("Clicked!"); };` It's very simple, and is straight from the demo. But it simply does not work. These properties are well documented in the prime faces documentation. [https://www.primefaces.org/primereact/picklist/](https://www.primefaces.org/primereact/picklist/) -Paul
defect
picklist selection change properties throw errors hi i would like to report a bug in the documentation for the component picklist there are two properties for selectionchange events onsourceselectionchange and ontargetselectionchange are the properties when you feed in a method into these properties it throws the error below cannot read properties of null reading length demo this is what my picklist looks like it s straight out of the demo and this is what my selectionchange method looks like const selectionchange s console log clicked it s very simple and is straight from the demo but it simply does not work these properties are well documented in the prime faces documentation paul
1
430,907
12,467,934,719
IssuesEvent
2020-05-28 17:56:57
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Flexvolume master driver is blocking controller-manager from moving to distroless
area/dependency area/release-eng area/security kind/feature lifecycle/frozen priority/important-soon sig/release sig/storage
There is an effort underway to move to core k8s component images to distroless. One side effect is that shell access is unavailable inside these containers. Dummy Flexvolume drivers in e2e tests are written as shell scripts, and it's possible that many drivers are written this way as well. These drivers are a problem for controller managers going distroless - the driver Init() call will fail. Going forward here are some of the possible options: 1. Don't move to distroless for controller manager 2. Deprecate Flex (but we've already issued a statement that k8s 1.x is always going to support Flex) 3. Limit Flexvolume support on master to only drivers that don't require shell 4. Ban Flexvolume master installations. If distroless for controller manager is a definite requirement, #3 sounds the most reasonable. Translating the Flexvolume API from shell to a different language shouldn't be too bad, and presumably drivers running in production already have something written in non-shell. SIG storage what do you think? /sig storage /cc @yuwenma @tallclair @saad-ali @chakri-nelluri /priority important-soon
1.0
Flexvolume master driver is blocking controller-manager from moving to distroless - There is an effort underway to move to core k8s component images to distroless. One side effect is that shell access is unavailable inside these containers. Dummy Flexvolume drivers in e2e tests are written as shell scripts, and it's possible that many drivers are written this way as well. These drivers are a problem for controller managers going distroless - the driver Init() call will fail. Going forward here are some of the possible options: 1. Don't move to distroless for controller manager 2. Deprecate Flex (but we've already issued a statement that k8s 1.x is always going to support Flex) 3. Limit Flexvolume support on master to only drivers that don't require shell 4. Ban Flexvolume master installations. If distroless for controller manager is a definite requirement, #3 sounds the most reasonable. Translating the Flexvolume API from shell to a different language shouldn't be too bad, and presumably drivers running in production already have something written in non-shell. SIG storage what do you think? /sig storage /cc @yuwenma @tallclair @saad-ali @chakri-nelluri /priority important-soon
non_defect
flexvolume master driver is blocking controller manager from moving to distroless there is an effort underway to move to core component images to distroless one side effect is that shell access is unavailable inside these containers dummy flexvolume drivers in tests are written as shell scripts and it s possible that many drivers are written this way as well these drivers are a problem for controller managers going distroless the driver init call will fail going forward here are some of the possible options don t move to distroless for controller manager deprecate flex but we ve already issued a statement that x is always going to support flex limit flexvolume support on master to only drivers that don t require shell ban flexvolume master installations if distroless for controller manager is a definite requirement sounds the most reasonable translating the flexvolume api from shell to a different language shouldn t be too bad and presumably drivers running in production already have something written in non shell sig storage what do you think sig storage cc yuwenma tallclair saad ali chakri nelluri priority important soon
0
57,137
15,705,917,622
IssuesEvent
2021-03-26 16:44:53
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Wrong precision for H2 timestamp when using domains
T: Defect
### Expected behavior Using a timestamp in H2 as a domain is consistent with using it directly, i.e. it should by default have a precision of 6. SQLDataType.TIMESTAMPWITHTIMEZONE(6) ### Actual behavior Creating a domain in H2 as a timestamp and generating sources for the schema public (domains cant be created in other schemas) gives the wrong precision in Domains.java. Instead of precision 6, it generates precision 32. SQLDataType.TIMESTAMPWITHTIMEZONE(32) ### Steps to reproduce the problem - If the problem relates to code generation, please post your code generation configuration see MCVE, important part is including the PUBLIC schema to have Domains.java generated - A complete set of DDL statements can help re-create the setup you're having see MCVE, but the core line is CREATE DOMAIN domaints AS TIMESTAMP WITH TIME ZONE; - An MCVE can be helpful to provide a complete reproduction case: https://github.com/jOOQ/jOOQ-mcve Can be found here: https://github.com/muued/jOOQ-mcve ### Versions - jOOQ: 3.14.8 - Java: 15 - Database (include vendor): H2 1.4.200 - OS: macOS 10.15.7 - JDBC Driver (include name if inofficial driver):
1.0
Wrong precision for H2 timestamp when using domains - ### Expected behavior Using a timestamp in H2 as a domain is consistent with using it directly, i.e. it should by default have a precision of 6. SQLDataType.TIMESTAMPWITHTIMEZONE(6) ### Actual behavior Creating a domain in H2 as a timestamp and generating sources for the schema public (domains cant be created in other schemas) gives the wrong precision in Domains.java. Instead of precision 6, it generates precision 32. SQLDataType.TIMESTAMPWITHTIMEZONE(32) ### Steps to reproduce the problem - If the problem relates to code generation, please post your code generation configuration see MCVE, important part is including the PUBLIC schema to have Domains.java generated - A complete set of DDL statements can help re-create the setup you're having see MCVE, but the core line is CREATE DOMAIN domaints AS TIMESTAMP WITH TIME ZONE; - An MCVE can be helpful to provide a complete reproduction case: https://github.com/jOOQ/jOOQ-mcve Can be found here: https://github.com/muued/jOOQ-mcve ### Versions - jOOQ: 3.14.8 - Java: 15 - Database (include vendor): H2 1.4.200 - OS: macOS 10.15.7 - JDBC Driver (include name if inofficial driver):
defect
wrong precision for timestamp when using domains expected behavior using a timestamp in as a domain is consistent with using it directly i e it should by default have a precision of sqldatatype timestampwithtimezone actual behavior creating a domain in as a timestamp and generating sources for the schema public domains cant be created in other schemas gives the wrong precision in domains java instead of precision it generates precision sqldatatype timestampwithtimezone steps to reproduce the problem if the problem relates to code generation please post your code generation configuration see mcve important part is including the public schema to have domains java generated a complete set of ddl statements can help re create the setup you re having see mcve but the core line is create domain domaints as timestamp with time zone an mcve can be helpful to provide a complete reproduction case can be found here versions jooq java database include vendor os macos jdbc driver include name if inofficial driver
1
39,295
9,380,737,912
IssuesEvent
2019-04-04 17:50:39
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
Jacobian for ADDGKernels wrong for coupled variables
T: defect
## Bug Description Jacobian for ADDGKernels wrong for coupled variables ## Steps to Reproduce 1. Write a residual in an `ADDGKernel` that involves a coupled variable 2. Run `-snes_test_jacobian` 3. Observe the bad ratio ## Impact AD is not currently beautiful for coupling in `ADDGKernels`
1.0
Jacobian for ADDGKernels wrong for coupled variables - ## Bug Description Jacobian for ADDGKernels wrong for coupled variables ## Steps to Reproduce 1. Write a residual in an `ADDGKernel` that involves a coupled variable 2. Run `-snes_test_jacobian` 3. Observe the bad ratio ## Impact AD is not currently beautiful for coupling in `ADDGKernels`
defect
jacobian for addgkernels wrong for coupled variables bug description jacobian for addgkernels wrong for coupled variables steps to reproduce write a residual in an addgkernel that involves a coupled variable run snes test jacobian observe the bad ratio impact ad is not currently beautiful for coupling in addgkernels
1
14,817
2,831,389,824
IssuesEvent
2015-05-24 15:54:40
nobodyguy/dslrdashboard
https://api.github.com/repos/nobodyguy/dslrdashboard
closed
Crash in LRTimelapse mode with TP-Link TL-MR3040 and Canon 6D
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Use TP-Link TL-MR3040 and Canon 6D 2. Connect camera wirelessly 3. Take an image in LRTimelapse mode and preview image - app crashes before preview is seen What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Latest from play store, on a Samsung Note 3 8 inch Please provide any additional information below. I have submitted a crash report via the play store - hopefully this will have more detail. ``` Original issue reported on code.google.com by `tcsh...@gmail.com` on 12 Nov 2013 at 9:40
1.0
Crash in LRTimelapse mode with TP-Link TL-MR3040 and Canon 6D - ``` What steps will reproduce the problem? 1. Use TP-Link TL-MR3040 and Canon 6D 2. Connect camera wirelessly 3. Take an image in LRTimelapse mode and preview image - app crashes before preview is seen What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Latest from play store, on a Samsung Note 3 8 inch Please provide any additional information below. I have submitted a crash report via the play store - hopefully this will have more detail. ``` Original issue reported on code.google.com by `tcsh...@gmail.com` on 12 Nov 2013 at 9:40
defect
crash in lrtimelapse mode with tp link tl and canon what steps will reproduce the problem use tp link tl and canon connect camera wirelessly take an image in lrtimelapse mode and preview image app crashes before preview is seen what is the expected output what do you see instead what version of the product are you using on what operating system latest from play store on a samsung note inch please provide any additional information below i have submitted a crash report via the play store hopefully this will have more detail original issue reported on code google com by tcsh gmail com on nov at
1
57,850
3,084,067,567
IssuesEvent
2015-08-24 13:12:18
pavel-pimenov/flylinkdc-r5xx
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
closed
[zzxy] Слишком большое количество символов в путях к файлам - сомнительно.
bug Component-Logic imported Priority-Medium
_From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on August 01, 2013 17:17:32_ (Все версии, все клиенты, итд) (Так и не исправили) БАГ 152 символов. Обнаруженный в связи с: багом когда тянут с меня файл, а на половине сбрасывается и заново перезакачивается. PS: (тянется тут ещё со стронга, есть и в нём последнем) PS: (произошло как раз в момент написания, вчера, и судя по шаре и прчм.- 100% явно не атака, а баг ADC или чьего то клиента, название клиента сокрыто хабом как и у др.пользователей, на других его не обнаружил) PS: (я и сам не раз сталкивался, иногда на куче рядом-расположенных мелких файлов) PS: Другие тесты(и необходимости): из StrongDC++242 в FlyLink502-beta24(или AirDC++ 2.30) - успешно скачало; из FlyLink502-beta24 в StrongDC++242(или ApexDC 1.5.4 или RSX 1.2.1) - последний выдал ошибку что путь слишком длиннен("Target filename too long"). PSS: (притом что в имени файла всего то! 194 символа, без учёта пути; пробовал даже качать в "C:\123\"; из версий: из-за длины пути в самой шаре - 242 символа(впрочем в л.сл.стандарт то минимум - 260), сам Стронг по кр.мере тоже прошарил его успешно... и отдаёт тоже - успешно,Флаю понятно) PSS: врем.премещение в корень показало что в данном сл.таки дело в ДЛИНЕ ЗАГРУЖАЕМОГО(КОНЕЧНОГО) ПУТИ, а не шары... PSS: Эксперименты по обрезанию показали что РЕАЛЬНО-совместимая-максимально-приемлемая длина пути вкл.имя диска = 152 символа!... PS: Учитывапя что именно на таком нескачивающемся(в Стронге и минимум почти всех производных клиентах) файле - проблема зацикленности скачки, вполне возможно что, этот баг в том числе связан (непостижимыми тропами багов) с длиной конечного полного пути... Хоть и без гарантий, в любом случае баг с длиной критичен ибо НАКОНЕЦ ТО объясняет почему иногда в группе файлов недостаёт части файлов при скачке.
Было бы неплохо - если бы вы хотя бы других разрабов проинформировали, насколько возможно; а, так же на своём сайте на самом видном вывесили предупреждение: «Внимание: Было обнаруженно что ряд клиентов(напр.StrongDC++ 2.42) могут недо-скачивать файлы полностью, причём если в группе - то даже скрытно: выводя мельком только в статусной строке, и в зависимости от клиента - не добавляя в список закачки файлов: файлы невмещающиеся в ДЛИНУ ЗАГРУЖАЕМОГО(КОНЕЧНОГО) ПУТИ С ИМЕНЕМ файла большего чем 152 символа (прим: длина в шаре тут не влияет, правда при условии что скачиваемый файл - не предполагает быть в подкаталоге(-ах) суммарно с конечным - превысящем указанную величину). Независомо от DC-клиента раздающего. Флайлинк(~мин.начиная с серии 5.02) этой проблемы скачки - не имеет». А, самому Флаю - явно СИЛЬНО не хватает предупреждения при хэшировании: «Внимание: длина имени файла больше условных 120 символов(152_максимум_для_конечного_пути минус резерв), возможна проблема [неявной]невозможности скачивания некоторыми DC-клиентами». PS: Не исключено, что БАГ 152 символов - всё же не связан с "на половине сбрасывается и заново перезакачивается"... проверить затруднительно. _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1144_
1.0
[zzxy] Слишком большое количество символов в путях к файлам - сомнительно. - _From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on August 01, 2013 17:17:32_ (Все версии, все клиенты, итд) (Так и не исправили) БАГ 152 символов. Обнаруженный в связи с: багом когда тянут с меня файл, а на половине сбрасывается и заново перезакачивается. PS: (тянется тут ещё со стронга, есть и в нём последнем) PS: (произошло как раз в момент написания, вчера, и судя по шаре и прчм.- 100% явно не атака, а баг ADC или чьего то клиента, название клиента сокрыто хабом как и у др.пользователей, на других его не обнаружил) PS: (я и сам не раз сталкивался, иногда на куче рядом-расположенных мелких файлов) PS: Другие тесты(и необходимости): из StrongDC++242 в FlyLink502-beta24(или AirDC++ 2.30) - успешно скачало; из FlyLink502-beta24 в StrongDC++242(или ApexDC 1.5.4 или RSX 1.2.1) - последний выдал ошибку что путь слишком длиннен("Target filename too long"). PSS: (притом что в имени файла всего то! 194 символа, без учёта пути; пробовал даже качать в "C:\123\"; из версий: из-за длины пути в самой шаре - 242 символа(впрочем в л.сл.стандарт то минимум - 260), сам Стронг по кр.мере тоже прошарил его успешно... и отдаёт тоже - успешно,Флаю понятно) PSS: врем.премещение в корень показало что в данном сл.таки дело в ДЛИНЕ ЗАГРУЖАЕМОГО(КОНЕЧНОГО) ПУТИ, а не шары... PSS: Эксперименты по обрезанию показали что РЕАЛЬНО-совместимая-максимально-приемлемая длина пути вкл.имя диска = 152 символа!... PS: Учитывапя что именно на таком нескачивающемся(в Стронге и минимум почти всех производных клиентах) файле - проблема зацикленности скачки, вполне возможно что, этот баг в том числе связан (непостижимыми тропами багов) с длиной конечного полного пути... Хоть и без гарантий, в любом случае баг с длиной критичен ибо НАКОНЕЦ ТО объясняет почему иногда в группе файлов недостаёт части файлов при скачке.
PS: Что делать другим клиентам(в том числе замороженным вроде Стронга) и тем более пользователям(тем более не собирающимся спешить обновлять), не знаю. Было бы неплохо - если бы вы хотя бы других разрабов проинформировали, насколько возможно; а, так же на своём сайте на самом видном вывесили предупреждение: «Внимание: Было обнаруженно что ряд клиентов(напр.StrongDC++ 2.42) могут недо-скачивать файлы полностью, причём если в группе - то даже скрытно: выводя мельком только в статусной строке, и в зависимости от клиента - не добавляя в список закачки файлов: файлы невмещающиеся в ДЛИНУ ЗАГРУЖАЕМОГО(КОНЕЧНОГО) ПУТИ С ИМЕНЕМ файла большего чем 152 символа (прим: длина в шаре тут не влияет, правда при условии что скачиваемый файл - не предполагает быть в подкаталоге(-ах) суммарно с конечным - превысящем указанную величину). Независомо от DC-клиента раздающего. Флайлинк(~мин.начиная с серии 5.02) этой проблемы скачки - не имеет». А, самому Флаю - явно СИЛЬНО не хватает предупреждения при хэшировании: «Внимание: длина имени файла больше условных 120 символов(152_максимум_для_конечного_пути минус резерв), возможна проблема [неявной]невозможности скачивания некоторыми DC-клиентами». PS: Не исключено, что БАГ 152 символов - всё же не связан с "на половине сбрасывается и заново перезакачивается"... проверить затруднительно. _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1144_
non_defect
too many characters in file paths questionable from on august all versions all clients etc still not fixed the character bug discovered in connection with a bug where a file is being pulled from me but halfway through it resets and re downloads from scratch ps this goes back as far as strongdc and is present in its latest version too ps it happened right at the moment of writing yesterday and judging by the share and so on clearly not an attack but a bug in adc or someone s client the client name is hidden by the hub as for other users and i did not spot it on others ps i have run into it myself more than once sometimes on a bunch of adjacently located small files ps other tests and necessities from strongdc to or airdc downloaded successfully from to strongdc or apexdc or rsx the latter reported that the path is too long target filename too long pss even though the file name itself is only characters not counting the path i even tried downloading into c as for versions the path length inside the share is characters though in this case the standard minimum is anyway strong itself also shared it successfully at the very least and serves it successfully too flylink obviously does pss temporarily moving it to the root showed that in this case the issue really is the length of the destination final path not the share pss trimming experiments showed that the really compatible maximum acceptable path length including the drive letter is characters ps given that the looping download problem occurs precisely on such a non downloadable file in strong and in at least almost all derived clients it is quite possible that this bug is also connected by the inscrutable paths of bugs to the length of the final full path though without guarantees in any case the length bug is critical because it finally explains why part of the files is sometimes missing when downloading a group of files ps what other clients including frozen ones like strongdc and all the more users especially those in no hurry to update are supposed to do i don t know it would be nice if you at least informed other developers as far as possible and also posted a warning in the most visible spot on your site «attention it has been discovered that a number of clients e g strongdc may fail to download 
files completely and when downloading a group even silently showing it only briefly in the status bar and depending on the client not adding to the download queue those files that do not fit within a destination final path plus file name length of more than characters note the length within the share does not matter here provided the downloaded file is not meant to end up in subdirectory ies that together with the final path exceed the stated value this is regardless of the seeding dc client flylinkdc roughly from the series onward does not have this download problem» and flylinkdc itself clearly badly needs a warning during hashing «attention the file name is longer than a nominal characters maximum for final path minus a reserve some dc clients may be unable to download it» ps it cannot be ruled out that the character bug is after all unrelated to halfway through it resets and re downloads from scratch hard to verify original issue
0
334,094
10,135,960,910
IssuesEvent
2019-08-02 11:39:31
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
accounts.google.com - see bug description
browser-firefox engine-gecko priority-critical
<!-- @browser: Firefox 69.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin **Browser / Version**: Firefox 69.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: whenever i am trying to sign in on my gmail it never goes on the password.every time i am entering my email but when i click next or press enter to submit my password it does not show the coloum of password **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/8/dab41d71-aca4-44eb-b8c4-9f2d1e3d0bf1-thumb.jpeg)](https://webcompat.com/uploads/2019/8/dab41d71-aca4-44eb-b8c4-9f2d1e3d0bf1.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190730004747</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Loading failed for the <script> with source 
https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.en_GB.d0NYMMBLwnI.O/am=BhiYDiSCAAAAAAAAAAABAAADC4cMYj5FcPsb/d=0/rs=ABkqax2Iog5XkwBFPNKsrynApnsv1WkctA/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy9f,sy9g,m5Z1Eb,sy5r,sy5s,sy5t,sy9m,sy9n,sy9u,sy9o,em3j,sy8h,em3s,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3t,em3k,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "MouseEvent.mozPressure is deprecated. Use PointerEvent.pressure instead." {file: "https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1" line: 1832}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.en_GB.d0NYMMBLwnI.O/am=BhiYDiSCAAAAAAAAAAABAAADC4cMYj5FcPsb/d=0/rs=ABkqax2Iog5XkwBFPNKsrynApnsv1WkctA/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy9f,sy9g,m5Z1Eb,sy5r,sy5s,sy5t,sy9m,sy9n,sy9u,sy9o,em3j,sy8h,em3s,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3t,em3k,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]'] </pre> </details> Submitted in the name of `@meet` _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
accounts.google.com - see bug description - <!-- @browser: Firefox 69.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 --> <!-- @reported_with: desktop-reporter --> **URL**: https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin **Browser / Version**: Firefox 69.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes **Problem type**: Something else **Description**: whenever i am trying to sign in on my gmail it never goes on the password.every time i am entering my email but when i click next or press enter to submit my password it does not show the coloum of password **Steps to Reproduce**: [![Screenshot Description](https://webcompat.com/uploads/2019/8/dab41d71-aca4-44eb-b8c4-9f2d1e3d0bf1-thumb.jpeg)](https://webcompat.com/uploads/2019/8/dab41d71-aca4-44eb-b8c4-9f2d1e3d0bf1.jpeg) <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190730004747</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Loading failed for the <script> with source 
https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.en_GB.d0NYMMBLwnI.O/am=BhiYDiSCAAAAAAAAAAABAAADC4cMYj5FcPsb/d=0/rs=ABkqax2Iog5XkwBFPNKsrynApnsv1WkctA/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy9f,sy9g,m5Z1Eb,sy5r,sy5s,sy5t,sy9m,sy9n,sy9u,sy9o,em3j,sy8h,em3s,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3t,em3k,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]', u'[JavaScript Warning: "Content Security Policy: Ignoring x-frame-options because of frame-ancestors directive."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "MouseEvent.mozPressure is deprecated. Use PointerEvent.pressure instead." {file: "https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1" line: 1832}]', u'[JavaScript Warning: "Loading failed for the <script> with source https://ssl.gstatic.com/accounts/static/_/js/k=gaia.gaiafe_glif.en_GB.d0NYMMBLwnI.O/am=BhiYDiSCAAAAAAAAAAABAAADC4cMYj5FcPsb/d=0/rs=ABkqax2Iog5XkwBFPNKsrynApnsv1WkctA/m=SF3gsd,wI7Sfc,pB6Zqd,rHjpXd,o02Jie,lCVo3d,MB66Qc,sy9f,sy9g,m5Z1Eb,sy5r,sy5s,sy5t,sy9m,sy9n,sy9u,sy9o,em3j,sy8h,em3s,em3r,em3q,em3p,em3o,em3n,em3m,em3l,em3t,em3k,YmeC5c." {file: "https://accounts.google.com/signin/v2/identifier?continue=https%3A%2F%2Fmail.google.com%2Fmail%2F&service=mail&sacu=1&rip=1&flowName=GlifWebSignIn&flowEntry=ServiceLogin" line: 1}]'] </pre> </details> Submitted in the name of `@meet` _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
accounts google com see bug description url browser version firefox operating system windows tested another browser yes problem type something else description whenever i am trying to sign in on my gmail it never goes on the password every time i am entering my email but when i click next or press enter to submit my password it does not show the coloum of password steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages u u u u u u u submitted in the name of meet from with ❤️
0
227,447
18,063,445,272
IssuesEvent
2021-09-20 16:15:45
dotnet/machinelearning-modelbuilder
https://api.github.com/repos/dotnet/machinelearning-modelbuilder
closed
Error loading mbconfig file: Required property 'FilePath' not found in JSON. Path ‘’, line 7, position 17.
Priority:0 Test Team Stale
**System Information (please complete the following information):** - Windows OS: Windows-10-Enterprise-21H1 - ML.Net Model Builder (Preview): 16.6.3.2136604 (Main Branch) - Microsoft Visual Studio Enterprise 2019: 16.10.2 - .Net: 5.0.301 **To Reproduce** Steps to reproduce the behavior: 1. Select Create a new project from the Visual Studio 2019 start window; 2. Choose the C# Console App (.NET Core) project template with .Net 5.0; 3. Add model builder by right click on the project; 4. Select Image classification or Object detection scenario; 5. Save and close the model builder; 6. Re-open the model builder, pop up the error dialog that: Error loading mbconfig file: Required property 'FilePath' not found in JSON. Path ‘’, line 7, position 17. 7. Close the error, find that no scenario is selected. **Expected behavior** No error after re-open model builder after selected Image classification or Object detection scenario, and the scenario should still be selected. **Screenshots** If applicable, add screenshots to help explain your problem. ![image](https://user-images.githubusercontent.com/81727020/126129080-2bd5c100-9d26-4031-8859-e83b5d8e2f5d.png) **Additional context** 1. This issue only occurs on Image classification and Object detection scenario. 2. This issue doesn't occur if reopen the model builder after input the data or training completed.
1.0
Error loading mbconfig file: Required property 'FilePath' not found in JSON. Path ‘’, line 7, position 17. - **System Information (please complete the following information):** - Windows OS: Windows-10-Enterprise-21H1 - ML.Net Model Builder (Preview): 16.6.3.2136604 (Main Branch) - Microsoft Visual Studio Enterprise 2019: 16.10.2 - .Net: 5.0.301 **To Reproduce** Steps to reproduce the behavior: 1. Select Create a new project from the Visual Studio 2019 start window; 2. Choose the C# Console App (.NET Core) project template with .Net 5.0; 3. Add model builder by right click on the project; 4. Select Image classification or Object detection scenario; 5. Save and close the model builder; 6. Re-open the model builder, pop up the error dialog that: Error loading mbconfig file: Required property 'FilePath' not found in JSON. Path ‘’, line 7, position 17. 7. Close the error, find that no scenario is selected. **Expected behavior** No error after re-open model builder after selected Image classification or Object detection scenario, and the scenario should still be selected. **Screenshots** If applicable, add screenshots to help explain your problem. ![image](https://user-images.githubusercontent.com/81727020/126129080-2bd5c100-9d26-4031-8859-e83b5d8e2f5d.png) **Additional context** 1. This issue only occurs on Image classification and Object detection scenario. 2. This issue doesn't occur if reopen the model builder after input the data or training completed.
non_defect
error loading mbconfig file required property filepath not found in json path ‘’ line position system information please complete the following information windows os windows enterprise ml net model builder preview main branch microsoft visual studio enterprise net to reproduce steps to reproduce the behavior select create a new project from the visual studio start window choose the c console app net core project template with net add model builder by right click on the project select image classification or object detection scenario save and close the model builder re open the model builder pop up the error dialog that error loading mbconfig file required property filepath not found in json path ‘’ line position close the error find that no scenario is selected expected behavior no error after re open model builder after selected image classification or object detection scenario and the scenario should still be selected screenshots if applicable add screenshots to help explain your problem additional context this issue only occurs on image classification and object detection scenario this issue doesn t occur if reopen the model builder after input the data or training completed
0
2,212
2,893,831,373
IssuesEvent
2015-06-15 20:03:08
golang/go
https://api.github.com/repos/golang/go
closed
x/build: subrepos need to test both go1.4 and go tip
Builders
The subrepo builders need to test subrepos against both go1.4 (or the past N go releases) as well as go tip. Currently it's only against go tip.
1.0
x/build: subrepos need to test both go1.4 and go tip - The subrepo builders need to test subrepos against both go1.4 (or the past N go releases) as well as go tip. Currently it's only against go tip.
non_defect
x build subrepos need to test both and go tip the subrepo builders need to test subrepos against both or the past n go releases as well as go tip currently it s only against go tip
0
320,913
27,493,128,677
IssuesEvent
2023-03-04 21:26:58
flutter/flutter
https://api.github.com/repos/flutter/flutter
closed
Would like to have pana coverage as part of testing for flutter/plugins
a: tests plugin p: tooling P4
Not all of our plugins 100% pass pana (dart package analyzer). Seems like something our flutter/plugins post-commit testing should cover? Currently for example android_alarm_manager doesn't pass: https://pub.dartlang.org/packages/android_alarm_manager#-analysis-tab- (FYI @bkonyi) FYI @cbracken
1.0
Would like to have pana coverage as part of testing for flutter/plugins - Not all of our plugins 100% pass pana (dart package analyzer). Seems like something our flutter/plugins post-commit testing should cover? Currently for example android_alarm_manager doesn't pass: https://pub.dartlang.org/packages/android_alarm_manager#-analysis-tab- (FYI @bkonyi) FYI @cbracken
non_defect
would like to have pana coverage as part of testing for flutter plugins not all of our plugins pass pana dart package analyzer seems like something our flutter plugins post commit testing should cover currently for example android alarm manager doesn t pass fyi bkonyi fyi cbracken
0
5,052
2,610,166,196
IssuesEvent
2015-02-26 18:52:42
chrsmith/republic-at-war
https://api.github.com/repos/chrsmith/republic-at-war
closed
Gameplay Error
auto-migrated Priority-Medium Type-Defect
``` enemy ships don't show up on the radar ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 4 May 2011 at 9:45
1.0
Gameplay Error - ``` enemy ships don't show up on the radar ``` ----- Original issue reported on code.google.com by `z3r0...@gmail.com` on 4 May 2011 at 9:45
defect
gameplay error enemy ships don t show up on the radar original issue reported on code google com by gmail com on may at
1
681,063
23,296,039,263
IssuesEvent
2022-08-06 15:33:53
VueTubeApp/VueTube
https://api.github.com/repos/VueTubeApp/VueTube
closed
App gets locked in landscape mode when exiting fullscreen using back button
bug Priority: High
### Steps to reproduce 1. play any video in full screen mode 2. use system back button instead of UI button to exit full screen mode ### Expected behavior App should go back to portrait mode ### Actual behavior App gets locked in landscape mode and even autorotate does not work. ### VueTube version 1.0 ### Android version 12 ### Other details _No response_ ### Acknowledgements - [X] I have searched the existing issues and this is a new ticket, **NOT** a duplicate or related to another open issue. - [X] I have written a short but informative title. - [X] I have updated the app to unstable version **[Latest](https://vuetube.app/install/)**. - [X] I will fill out all of the requested information in this form.
1.0
App gets locked in landscape mode when exiting fullscreen using back button - ### Steps to reproduce 1. play any video in full screen mode 2. use system back button instead of UI button to exit full screen mode ### Expected behavior App should go back to portrait mode ### Actual behavior App gets locked in landscape mode and even autorotate does not work. ### VueTube version 1.0 ### Android version 12 ### Other details _No response_ ### Acknowledgements - [X] I have searched the existing issues and this is a new ticket, **NOT** a duplicate or related to another open issue. - [X] I have written a short but informative title. - [X] I have updated the app to unstable version **[Latest](https://vuetube.app/install/)**. - [X] I will fill out all of the requested information in this form.
non_defect
app gets locked in landscape mode when exiting fullscreen using back button steps to reproduce play any video in full screen mode use system back button instead of ui button to exit full screen mode expected behavior app should go back to portrait mode actual behavior app gets locked in landscape mode and even autorotate does not work vuetube version android version other details no response acknowledgements i have searched the existing issues and this is a new ticket not a duplicate or related to another open issue i have written a short but informative title i have updated the app to unstable version i will fill out all of the requested information in this form
0
5,999
2,610,219,211
IssuesEvent
2015-02-26 19:09:40
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
precomp exe
auto-migrated Priority-Medium Type-Defect
``` '''Vladlen Sergeev''' Good day, I just can't find precomp exe. it was posted here before '''Arefiy Makarov''' Here is a good site where you can download it http://bit.ly/1aph6IW '''Vartan Myasnikov''' Thanks, looks like it, but it asks you to enter a phone number '''Vincent Pestov''' No, that does not affect your balance '''Alim Fedoseev''' Nope, all fine, nothing was charged to me File info: precomp exe Uploaded: This month Downloads: 1433 Rating: 175 Average download speed: 1408 Similar files: 17 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 12:34
1.0
precomp exe - ``` '''Vladlen Sergeev''' Good day, I just can't find precomp exe. it was posted here before '''Arefiy Makarov''' Here is a good site where you can download it http://bit.ly/1aph6IW '''Vartan Myasnikov''' Thanks, looks like it, but it asks you to enter a phone number '''Vincent Pestov''' No, that does not affect your balance '''Alim Fedoseev''' Nope, all fine, nothing was charged to me File info: precomp exe Uploaded: This month Downloads: 1433 Rating: 175 Average download speed: 1408 Similar files: 17 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 12:34
defect
precomp exe vladlen sergeev good day i just can t find precomp exe it was posted here before arefiy makarov here is a good site where you can download it vartan myasnikov thanks looks like it but it asks you to enter a phone number vincent pestov no that does not affect your balance alim fedoseev nope all fine nothing was charged to me file info precomp exe uploaded this month downloads rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
1
726,854
25,013,545,276
IssuesEvent
2022-11-03 16:55:57
GoogleCloudPlatform/asl-ml-immersion
https://api.github.com/repos/GoogleCloudPlatform/asl-ml-immersion
closed
Move `classify_text_with_bert` lab from training_data_analyst to asl-ml-immersion
enhancement priority: medium
The lab on text classification with bert is currently not in the asl-ml-immersion repo. It has to be copied to this repo and tested again.
1.0
Move `classify_text_with_bert` lab from training_data_analyst to asl-ml-immersion - The lab on text classification with bert is currently not in the asl-ml-immersion repo. It has to be copied to this repo and tested again.
non_defect
move classify text with bert lab from training data analyst to asl ml immersion the lab on text classification with bert is currently not in the asl ml immersion repo it has to be copied to this repo and tested again
0
74,978
25,459,290,361
IssuesEvent
2022-11-24 16:54:37
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Cypress: Your Element is misconfigured flake
T-Defect S-Minor A-Testing A-Developer-Experience O-Occasional Z-Flaky-Test
### Steps to reproduce Example https://dashboard.cypress.io/projects/ppvnzg/runs/4072/test-results/96ab4e37-931b-45d9-b1b6-c5988afee57f ![image](https://user-images.githubusercontent.com/6216686/190645929-d5d480df-6956-42a5-a9c7-4c013d4ac1b1.png) This is different from the „blank screen“ issue: ![image](https://user-images.githubusercontent.com/6216686/190646027-386910d4-7c8a-4b3e-a242-15bf75595fa7.png)
1.0
Cypress: Your Element is misconfigured flake - ### Steps to reproduce Example https://dashboard.cypress.io/projects/ppvnzg/runs/4072/test-results/96ab4e37-931b-45d9-b1b6-c5988afee57f ![image](https://user-images.githubusercontent.com/6216686/190645929-d5d480df-6956-42a5-a9c7-4c013d4ac1b1.png) This is different from the „blank screen“ issue: ![image](https://user-images.githubusercontent.com/6216686/190646027-386910d4-7c8a-4b3e-a242-15bf75595fa7.png)
defect
cypress your element is misconfigured flake steps to reproduce example this is different from the „blank screen“ issue
1
67,264
20,961,596,709
IssuesEvent
2022-03-27 21:46:24
abedmaatalla/imsdroid
https://api.github.com/repos/abedmaatalla/imsdroid
closed
some thing wrong with Enhanced Address Book
Priority-Medium Type-Defect auto-migrated
``` a) Before posting your issue you MUST answer to the questions otherwise it will be rejected (invalid status) by us b) Please check the issue tacker to avoid duplication c) Please provide network capture (wireshark) or Android log (DDMS output) if you want quick response What steps will reproduce the problem? 1.we can see the screen shot of Enhanced Address Book from the home page which indicates that it can show us who is online who is not ,but when i download the latest version of apk imsdroid-2.0.487 ,and install it on my HTC G11,I goto the address book,i can not see who is online who is not because the icons are all dark. What is the expected output? What do you see instead? I Can see who is online who is not in Address Book What version of the product are you using? On what operating system? imsdroid-2.0.487 on android 2.3.6 Please provide any additional information below. ``` Original issue reported on code.google.com by `david.w...@lavainternational.in` on 9 May 2012 at 2:35
1.0
some thing wrong with Enhanced Address Book - ``` a) Before posting your issue you MUST answer to the questions otherwise it will be rejected (invalid status) by us b) Please check the issue tacker to avoid duplication c) Please provide network capture (wireshark) or Android log (DDMS output) if you want quick response What steps will reproduce the problem? 1.we can see the screen shot of Enhanced Address Book from the home page which indicates that it can show us who is online who is not ,but when i download the latest version of apk imsdroid-2.0.487 ,and install it on my HTC G11,I goto the address book,i can not see who is online who is not because the icons are all dark. What is the expected output? What do you see instead? I Can see who is online who is not in Address Book What version of the product are you using? On what operating system? imsdroid-2.0.487 on android 2.3.6 Please provide any additional information below. ``` Original issue reported on code.google.com by `david.w...@lavainternational.in` on 9 May 2012 at 2:35
defect
some thing wrong with enhanced address book a before posting your issue you must answer to the questions otherwise it will be rejected invalid status by us b please check the issue tacker to avoid duplication c please provide network capture wireshark or android log ddms output if you want quick response what steps will reproduce the problem we can see the screen shot of enhanced address book from the home page which indicates that it can show us who is online who is not but when i download the latest version of apk imsdroid and install it on my htc i goto the address book i can not see who is online who is not because the icons are all dark what is the expected output what do you see instead i can see who is online who is not in address book what version of the product are you using on what operating system imsdroid on android please provide any additional information below original issue reported on code google com by david w lavainternational in on may at
1
80,775
30,523,562,346
IssuesEvent
2023-07-19 09:40:57
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Room (with only myself) content gone
T-Defect X-Needs-Info S-Critical A-Room-Upgrades A-3PIDs O-Uncommon
### Steps to reproduce 1. I have a room only with myself, I use it to journal for many months. 2. Today I tried to access it from Web app and from Linux app, the room has no content besides this message: Waiting for users to join Element Once invited users have joined Element, you will be able to chat and the room will be end-to-end encrypted @eyallior:matrix.org revoked the invitation for Someone to join the room. The room was still accessible from my android app. I could still access room settings, as I am the admin. Then I changed to public but still nothing changed. I then changed to space members and had to press "upgrade" - and since then, the room content is gone also from my android app. This is really bad. ### Outcome #### What did you expect? Not losing my content #### What happened instead? My content is gone. Luckily I have a recent backup. But I can't trust this app anymore. ### Operating system Linux, Android ### Browser information Firefox ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
1.0
Room (with only myself) content gone - ### Steps to reproduce 1. I have a room only with myself, I use it to journal for many months. 2. Today I tried to access it from Web app and from Linux app, the room has no content besides this message: Waiting for users to join Element Once invited users have joined Element, you will be able to chat and the room will be end-to-end encrypted @eyallior:matrix.org revoked the invitation for Someone to join the room. The room was still accessible from my android app. I could still access room settings, as I am the admin. Then I changed to public but still nothing changed. I then changed to space members and had to press "upgrade" - and since then, the room content is gone also from my android app. This is really bad. ### Outcome #### What did you expect? Not losing my content #### What happened instead? My content is gone. Luckily I have a recent backup. But I can't trust this app anymore. ### Operating system Linux, Android ### Browser information Firefox ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? Yes
defect
room with only myself content gone steps to reproduce i have a room only with myself i use it to journal for many months today i tried to access it from web app and from linux app the room has no content besides this message waiting for users to join element once invited users have joined element you will be able to chat and the room will be end to end encrypted eyallior matrix org revoked the invitation for someone to join the room the room was still accessible from my android app i could still access room settings as i am the admin then i changed to public but still nothing changed i then changed to space members and had to press upgrade and since then the room content is gone also from my android app this is really bad outcome what did you expect not losing my content what happened instead my content is gone luckily i have a recent backup but i can t trust this app anymore operating system linux android browser information firefox url for webapp no response application version no response homeserver no response will you send logs yes
1
2,706
2,607,936,842
IssuesEvent
2015-02-26 00:29:12
chrsmithdemos/minify
https://api.github.com/repos/chrsmithdemos/minify
opened
Incorrect minification of js code without semicolons
auto-migrated Priority-Low Type-Defect
``` What steps will reproduce the problem? 1. Get https://github.com/twitter/bootstrap/blob/master/js/bootstrap-dropdown.js 2. Try to minify There are two lines listed below that have broked by minifier. <code> clearMenus() !isActive && $parent.toggleClass('open') </code> Expected: clearMenus() !isActive&&$parent.toggleClass('open') Currently: clearMenus()!isActive&&$parent.toggleClass('open') ``` ----- Original issue reported on code.google.com by `d.menshi...@gmail.com` on 28 Mar 2012 at 1:06
1.0
Incorrect minification of js code without semicolons - ``` What steps will reproduce the problem? 1. Get https://github.com/twitter/bootstrap/blob/master/js/bootstrap-dropdown.js 2. Try to minify There are two lines listed below that have broked by minifier. <code> clearMenus() !isActive && $parent.toggleClass('open') </code> Expected: clearMenus() !isActive&&$parent.toggleClass('open') Currently: clearMenus()!isActive&&$parent.toggleClass('open') ``` ----- Original issue reported on code.google.com by `d.menshi...@gmail.com` on 28 Mar 2012 at 1:06
defect
incorrect minification of js code without semicolons what steps will reproduce the problem get try to minify there are two lines listed below that have broked by minifier clearmenus isactive parent toggleclass open expected clearmenus isactive parent toggleclass open currently clearmenus isactive parent toggleclass open original issue reported on code google com by d menshi gmail com on mar at
1
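The minify record above hinges on JavaScript's automatic semicolon insertion (ASI): a line break after `clearMenus()` terminates that statement, so joining the next line's leading `!` onto it produces invalid syntax. A minimal Node.js sketch (the `parses` helper and the `parent`/`isActive` identifiers are illustrative, not from the minify codebase) shows that the newline-preserving output parses while the fully joined output does not:

```javascript
// Returns true if `src` is syntactically valid JavaScript.
// new Function() parses its body at construction time, so an
// invalid body throws SyntaxError here without running anything.
function parses(src) {
  try {
    new Function(src);
    return true;
  } catch (e) {
    return false;
  }
}

// Expected minifier output: the newline triggers ASI, giving two statements.
const withNewline = "clearMenus()\n!isActive && parent.toggleClass('open')";

// Actual (buggy) output: "clearMenus()!isActive" is a single malformed expression.
const withoutNewline = "clearMenus()!isActive && parent.toggleClass('open')";

console.log(parses(withNewline));    // true
console.log(parses(withoutNewline)); // false
```

This is why a minifier cannot unconditionally strip newlines: when the next line begins with a token such as `!`, `(`, `[`, `+`, or `-`, the line break (or an inserted semicolon) must be kept.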
29,737
5,849,227,324
IssuesEvent
2017-05-10 23:07:02
Beatz4me68/busybox-android
https://api.github.com/repos/Beatz4me68/busybox-android
closed
On every update new binary message
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Install update through play store 2. Click on notification of new binary avaliable 3. Click install What is the expected output? What do you see instead? What I expect is that no notification comes up after every update, but only at updates that really have a new busy box binary. What version of the product are you using? On what operating system? I'm using the newest version at all times, on Android 2.3.6 rooted Please provide any additional information below. Hope this will get fixed in another update, and keep the good work up, good and simple way to install busybox ``` Original issue reported on code.google.com by `whdevelo...@gmail.com` on 18 Aug 2013 at 10:35
1.0
On every update new binary message - ``` What steps will reproduce the problem? 1. Install update through play store 2. Click on notification of new binary avaliable 3. Click install What is the expected output? What do you see instead? What I expect is that no notification comes up after every update, but only at updates that really have a new busy box binary. What version of the product are you using? On what operating system? I'm using the newest version at all times, on Android 2.3.6 rooted Please provide any additional information below. Hope this will get fixed in another update, and keep the good work up, good and simple way to install busybox ``` Original issue reported on code.google.com by `whdevelo...@gmail.com` on 18 Aug 2013 at 10:35
defect
on every update new binary message what steps will reproduce the problem install update through play store click on notification of new binary avaliable click install what is the expected output what do you see instead what i expect is that no notification comes up after every update but only at updates that really have a new busy box binary what version of the product are you using on what operating system i m using the newest version at all times on android rooted please provide any additional information below hope this will get fixed in another update and keep the good work up good and simple way to install busybox original issue reported on code google com by whdevelo gmail com on aug at
1
102,949
8,871,620,039
IssuesEvent
2019-01-11 13:13:46
ugcs/dronelogbook
https://api.github.com/repos/ugcs/dronelogbook
closed
Drone serial numbers are duplicating
Ready for testing Release 1.3
As per title, in the list of drones their serial numbers are written two times.
1.0
Drone serial numbers are duplicating - As per title, in the list of drones their serial numbers are written two times.
non_defect
drone serial numbers are duplicating as per title in the list of drones their serial numbers are written two times
0
81,446
30,852,439,983
IssuesEvent
2023-08-02 17:48:08
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
6.5 compat: UBSAN now complains about flex-array declarations with array[1]
Type: Defect
See https://www.spinics.net/lists/linux-xfs/msg73588.html for more information on changes which were also made to xfs to fix such issues. <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Ubuntu Distribution Version | Lunar Kernel Version | 6.5-rc4 Architecture | x86_64 OpenZFS Version | 2.2.0-rc3 w/ 6.5 patches ( https://github.com/openzfs/zfs/pull/15138 https://github.com/openzfs/zfs/pull/15100 https://github.com/openzfs/zfs/commit/325505e5c4e48f32e1a03e42a694509bf4c02670 https://github.com/openzfs/zfs/pull/15101 https://github.com/openzfs/zfs/pull/15099 ) <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing ``` [ 27.714798] ZFS: Loaded module v2.2.0rc3-lunar3, ZFS pool version 5000, ZFS filesystem version 5 [ 27.811205] ================================================================================ [ 27.812311] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:395:26 [ 27.813393] index 365 is out of range for type 'uint16_t [1]' [ 27.814362] CPU: 1 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.815338] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.816313] Call Trace: [ 27.817280] <TASK> [ 27.818237] dump_stack_lvl+0x48/0x60 [ 27.819200] dump_stack+0x10/0x20 [ 27.820146] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.821092] zap_leaf_lookup+0x175/0x180 [zfs] [ 27.822208] ? 
spl_kvmalloc+0x7a/0xa0 [spl] [ 27.823164] fzap_lookup+0xda/0x1c0 [zfs] [ 27.824247] zap_lookup_impl+0xae/0x390 [zfs] [ 27.825331] zap_lookup+0xa7/0x100 [zfs] [ 27.826409] spa_ld_trusted_config+0x77/0x7e0 [zfs] [ 27.827499] ? dsl_pool_init+0x36/0x70 [zfs] [ 27.828578] spa_ld_mos_with_trusted_config.part.0+0x20/0xb0 [zfs] [ 27.829662] spa_load+0x139/0x1950 [zfs] [ 27.830784] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.831714] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.832634] spa_tryimport+0x15b/0x460 [zfs] [ 27.833576] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.834506] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.835453] ? kvmalloc_node+0x4b/0xe0 [ 27.836233] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.837189] __x64_sys_ioctl+0x94/0xd0 [ 27.837999] do_syscall_64+0x55/0x80 [ 27.838814] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.839562] ? handle_mm_fault+0xad/0x360 [ 27.840298] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.841029] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.841748] ? irqentry_exit+0x33/0x40 [ 27.842451] ? 
exc_page_fault+0x89/0x170 [ 27.843172] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.843856] RIP: 0033:0x7f19be9459ef [ 27.844526] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.845249] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.845980] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.846731] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.847442] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.848141] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.848828] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.849506] </TASK> [ 27.850176] ================================================================================ [ 27.854916] ================================================================================ [ 27.855655] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:314:44 [ 27.856383] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.857099] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.857819] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.858544] Call Trace: [ 27.859280] <TASK> [ 27.859989] dump_stack_lvl+0x48/0x60 [ 27.860687] dump_stack+0x10/0x20 [ 27.861364] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.862045] zap_lockdir_impl+0x86b/0x880 [zfs] [ 27.862894] zap_lockdir+0x91/0xb0 [zfs] [ 27.863712] zap_cursor_retrieve+0x1d7/0x390 [zfs] [ 27.864532] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 27.865213] spa_features_check+0xbd/0x1b0 [zfs] [ 27.866029] spa_load+0x6cd/0x1950 [zfs] [ 27.866854] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.867631] ? 
nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.868400] spa_tryimport+0x15b/0x460 [zfs] [ 27.869204] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.870002] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.870798] ? kvmalloc_node+0x4b/0xe0 [ 27.871423] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.872170] __x64_sys_ioctl+0x94/0xd0 [ 27.872766] do_syscall_64+0x55/0x80 [ 27.873353] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.873939] ? handle_mm_fault+0xad/0x360 [ 27.874512] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.875054] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.875569] ? irqentry_exit+0x33/0x40 [ 27.876082] ? exc_page_fault+0x89/0x170 [ 27.876595] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.877111] RIP: 0033:0x7f19be9459ef [ 27.877625] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.878194] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.878808] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.879397] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.879988] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.880579] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.881168] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.881757] </TASK> [ 27.882346] ================================================================================ [ 27.882977] ================================================================================ [ 27.883569] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:473:34 [ 27.884170] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 27.884772] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.885380] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.885997] Call Trace: [ 27.886638] <TASK> [ 27.887252] dump_stack_lvl+0x48/0x60 [ 27.887868] dump_stack+0x10/0x20 [ 27.888480] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.889094] zap_lockdir_impl+0x849/0x880 [zfs] [ 27.889841] zap_lockdir+0x91/0xb0 [zfs] [ 27.890601] zap_cursor_retrieve+0x1d7/0x390 [zfs] [ 27.891363] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 27.891989] spa_features_check+0xbd/0x1b0 [zfs] [ 27.892735] spa_load+0x6cd/0x1950 [zfs] [ 27.893453] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.894109] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.894788] spa_tryimport+0x15b/0x460 [zfs] [ 27.895470] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.896146] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.896823] ? kvmalloc_node+0x4b/0xe0 [ 27.897366] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.898030] __x64_sys_ioctl+0x94/0xd0 [ 27.898560] do_syscall_64+0x55/0x80 [ 27.899124] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.899654] ? handle_mm_fault+0xad/0x360 [ 27.900175] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.900694] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.901205] ? irqentry_exit+0x33/0x40 [ 27.901705] ? 
exc_page_fault+0x89/0x170 [ 27.902209] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.902742] RIP: 0033:0x7f19be9459ef [ 27.903248] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.903811] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.904389] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.904973] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.905557] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.906142] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.906758] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.907347] </TASK> [ 27.907935] ================================================================================ [ 27.908573] ================================================================================ [ 27.909251] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1632:28 [ 27.909928] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 27.910621] CPU: 1 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.911305] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.911998] Call Trace: [ 27.912690] <TASK> [ 27.913379] dump_stack_lvl+0x48/0x60 [ 27.914073] dump_stack+0x10/0x20 [ 27.914763] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.915455] zap_cursor_retrieve+0x35d/0x390 [zfs] [ 27.916305] spa_features_check+0xbd/0x1b0 [zfs] [ 27.917146] spa_load+0x6cd/0x1950 [zfs] [ 27.917994] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.918811] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.919624] spa_tryimport+0x15b/0x460 [zfs] [ 27.920435] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.921205] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.921972] ? 
kvmalloc_node+0x4b/0xe0 [ 27.922589] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.923346] __x64_sys_ioctl+0x94/0xd0 [ 27.923956] do_syscall_64+0x55/0x80 [ 27.924562] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.925167] ? handle_mm_fault+0xad/0x360 [ 27.925765] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.926372] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.926941] ? irqentry_exit+0x33/0x40 [ 27.927462] ? exc_page_fault+0x89/0x170 [ 27.927979] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.928490] RIP: 0033:0x7f19be9459ef [ 27.928993] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.929555] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.930135] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.930747] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.931334] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.931921] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.932509] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.933101] </TASK> [ 27.933693] ================================================================================ [ 27.934566] ================================================================================ [ 27.935254] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:339:46 [ 27.935938] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.936620] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.937307] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.938001] Call Trace: [ 27.938702] <TASK> [ 27.939316] dump_stack_lvl+0x48/0x60 [ 27.939934] dump_stack+0x10/0x20 [ 27.940548] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.941167] mze_find+0xfa/0x110 [zfs] [ 27.941916] zap_lookup_impl+0x103/0x390 [zfs] [ 27.942692] zap_lookup+0xa7/0x100 [zfs] [ 27.943438] feature_get_refcount_from_disk+0x62/0xd0 [zfs] [ 27.944187] spa_load+0x7b4/0x1950 [zfs] [ 27.944944] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.945672] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.946399] spa_tryimport+0x15b/0x460 [zfs] [ 27.947188] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.947908] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.948598] ? kvmalloc_node+0x4b/0xe0 [ 27.949151] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.949828] __x64_sys_ioctl+0x94/0xd0 [ 27.950376] do_syscall_64+0x55/0x80 [ 27.950948] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.951496] ? handle_mm_fault+0xad/0x360 [ 27.952036] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.952574] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.953114] ? irqentry_exit+0x33/0x40 [ 27.953642] ? 
exc_page_fault+0x89/0x170 [ 27.954165] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.954715] RIP: 0033:0x7f19be9459ef [ 27.955227] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.955786] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.956367] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.956951] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.957539] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.958127] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.958742] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.959333] </TASK> [ 27.959937] ================================================================================ [ 27.960558] ================================================================================ [ 27.961152] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1021:27 [ 27.961757] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.962361] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.963000] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.963619] Call Trace: [ 27.964237] <TASK> [ 27.964852] dump_stack_lvl+0x48/0x60 [ 27.965471] dump_stack+0x10/0x20 [ 27.966085] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.966732] zap_lookup_impl+0x34e/0x390 [zfs] [ 27.967482] zap_lookup+0xa7/0x100 [zfs] [ 27.968226] feature_get_refcount_from_disk+0x62/0xd0 [zfs] [ 27.968977] spa_load+0x7b4/0x1950 [zfs] [ 27.969732] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.970459] ? 
nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.971209] spa_tryimport+0x15b/0x460 [zfs] [ 27.971933] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.972619] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.973303] ? kvmalloc_node+0x4b/0xe0 [ 27.973854] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.974529] __x64_sys_ioctl+0x94/0xd0 [ 27.975083] do_syscall_64+0x55/0x80 [ 27.975627] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.976168] ? handle_mm_fault+0xad/0x360 [ 27.976703] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.977243] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.977776] ? irqentry_exit+0x33/0x40 [ 27.978297] ? exc_page_fault+0x89/0x170 [ 27.978818] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.979331] RIP: 0033:0x7f19be9459ef [ 27.979834] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.980396] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.980976] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.981561] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.982147] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.982740] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.983328] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.983920] </TASK> [ 27.984511] ================================================================================ [ 28.110649] ================================================================================ [ 28.111338] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1387:22 [ 28.112019] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 28.112699] CPU: 0 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.113386] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.114080] Call Trace: [ 28.114779] <TASK> [ 28.115469] dump_stack_lvl+0x48/0x60 [ 28.116166] dump_stack+0x10/0x20 [ 28.116857] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.117551] zap_update+0x2a0/0x2b0 [zfs] [ 28.118426] feature_sync+0x57/0x150 [zfs] [ 28.119280] spa_feature_incr+0x74/0x120 [zfs] [ 28.120121] space_map_alloc+0x73/0x80 [zfs] [ 28.120966] spa_generate_syncing_log_sm+0xd1/0x250 [zfs] [ 28.121819] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.122675] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.123508] ? mutex_lock+0x12/0x40 [ 28.124167] spa_sync+0x626/0x1040 [zfs] [ 28.124978] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.125792] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.126616] ? spl_kmem_free+0x29/0x30 [spl] [ 28.127178] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.127860] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.128535] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.129084] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.129629] kthread+0xef/0x120 [ 28.130165] ? kthread_complete_and_exit+0x20/0x20 [ 28.130697] ret_from_fork+0x36/0x50 [ 28.131218] ? kthread_complete_and_exit+0x20/0x20 [ 28.131738] ret_from_fork_asm+0x11/0x20 [ 28.132253] </TASK> [ 28.132760] ================================================================================ [ 28.134081] ================================================================================ [ 28.134628] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1447:4 [ 28.135220] index 48 is out of range for type 'mzap_ent_phys_t [1]' [ 28.135814] CPU: 2 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.136414] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.137023] Call Trace: [ 28.137630] <TASK> [ 28.138231] dump_stack_lvl+0x48/0x60 [ 28.138822] dump_stack+0x10/0x20 [ 28.139353] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.139888] zap_remove_impl+0x1be/0x1d0 [zfs] [ 28.140554] zap_remove+0x8b/0xe0 [zfs] [ 28.141217] zap_remove_int+0x6b/0x90 [zfs] [ 28.141877] spa_cleanup_old_sm_logs+0xfe/0x180 [zfs] [ 28.142547] metaslab_unflushed_bump+0x123/0x160 [zfs] [ 28.143222] spa_flush_metaslabs+0x211/0x410 [zfs] [ 28.143893] spa_sync+0x626/0x1040 [zfs] [ 28.144564] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.145235] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.145905] ? spl_kmem_free+0x29/0x30 [spl] [ 28.146451] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.147125] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.147793] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.148339] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.148885] kthread+0xef/0x120 [ 28.149382] ? kthread_complete_and_exit+0x20/0x20 [ 28.149852] ret_from_fork+0x36/0x50 [ 28.150315] ? kthread_complete_and_exit+0x20/0x20 [ 28.150780] ret_from_fork_asm+0x11/0x20 [ 28.151243] </TASK> [ 28.151702] ================================================================================ [ 28.172651] ================================================================================ [ 28.173435] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:314:44 [ 28.174086] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 28.174624] CPU: 1 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.175152] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.175678] Call Trace: [ 28.176197] <TASK> [ 28.176706] dump_stack_lvl+0x48/0x60 [ 28.177212] dump_stack+0x10/0x20 [ 28.177713] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.178221] mzap_addent+0x2b7/0x2d0 [zfs] [ 28.178909] zap_add_impl+0x33d/0x360 [zfs] [ 28.179566] zap_add+0xca/0xf0 [zfs] [ 28.180219] zap_add_int_key+0x7d/0xa0 [zfs] [ 28.180874] spa_generate_syncing_log_sm+0xe9/0x250 [zfs] [ 28.181536] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.182198] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.182847] ? mutex_lock+0x12/0x40 [ 28.183354] spa_sync+0x626/0x1040 [zfs] [ 28.184046] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.184748] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.185452] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.186155] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.186683] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.187206] kthread+0xef/0x120 [ 28.187714] ? kthread_complete_and_exit+0x20/0x20 [ 28.188229] ret_from_fork+0x36/0x50 [ 28.188740] ? kthread_complete_and_exit+0x20/0x20 [ 28.189254] ret_from_fork_asm+0x11/0x20 [ 28.189769] </TASK> [ 28.190280] ================================================================================ [ 28.208303] ================================================================================ [ 28.209136] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1234:52 [ 28.209784] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 28.210317] CPU: 3 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.210860] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.211409] Call Trace: [ 28.211945] <TASK> [ 28.212471] dump_stack_lvl+0x48/0x60 [ 28.212998] dump_stack+0x10/0x20 [ 28.213509] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.214017] mzap_addent+0x298/0x2d0 [zfs] [ 28.214709] zap_add_impl+0x33d/0x360 [zfs] [ 28.215369] zap_add+0xca/0xf0 [zfs] [ 28.216028] zap_add_int_key+0x7d/0xa0 [zfs] [ 28.216686] spa_generate_syncing_log_sm+0xe9/0x250 [zfs] [ 28.217354] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.218021] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.218690] ? mutex_lock+0x12/0x40 [ 28.219206] spa_sync+0x626/0x1040 [zfs] [ 28.219874] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.220538] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.221204] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.221869] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.222395] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.222926] kthread+0xef/0x120 [ 28.223438] ? kthread_complete_and_exit+0x20/0x20 [ 28.223957] ret_from_fork+0x36/0x50 [ 28.224473] ? kthread_complete_and_exit+0x20/0x20 [ 28.224991] ret_from_fork_asm+0x11/0x20 [ 28.225510] </TASK> [ 28.226029] ================================================================================ [ 28.273173] ================================================================================ [ 28.273733] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:444:49 [ 28.274275] index 1 is out of range for type 'uint16_t [1]' [ 28.274840] CPU: 0 PID: 424 Comm: zfs Tainted: P OE 6.5.0-rc4 #1 [ 28.275385] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.275930] Call Trace: [ 28.276468] <TASK> [ 28.276996] dump_stack_lvl+0x48/0x60 [ 28.277522] dump_stack+0x10/0x20 [ 28.278043] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.278569] zap_leaf_lookup_closest+0x203/0x220 [zfs] [ 28.279280] fzap_cursor_retrieve+0x10f/0x380 [zfs] [ 28.279958] zap_cursor_retrieve+0x266/0x390 [zfs] [ 28.280635] ? 
dbuf_cache_multilist_index_func+0x31/0x40 [zfs] [ 28.281303] ? mutex_lock+0x12/0x40 [ 28.281839] dsl_prop_get_all_impl+0x247/0x7b0 [zfs] [ 28.282525] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.283200] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.283746] ? __kmalloc_node+0x52/0xd0 [ 28.284279] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.284821] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.285359] ? strlcat+0x56/0x80 [ 28.285882] ? dsl_dir_name+0x104/0x1a0 [zfs] [ 28.286555] ? strlcat+0x56/0x80 [ 28.287037] ? dsl_dir_name+0x104/0x1a0 [zfs] [ 28.287633] ? strlcat+0x56/0x80 [ 28.288095] dsl_prop_get_all_ds+0xcf/0x1a0 [zfs] [ 28.288694] dsl_prop_get_all+0x13/0x20 [zfs] [ 28.289289] zfs_ioc_objset_stats_impl+0x79/0x110 [zfs] [ 28.289882] zfs_ioc_objset_stats+0x66/0x80 [zfs] [ 28.290475] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 28.291094] ? kvmalloc_node+0x4b/0xe0 [ 28.291558] zfsdev_ioctl+0x57/0xe0 [zfs] [ 28.292179] __x64_sys_ioctl+0x94/0xd0 [ 28.292667] do_syscall_64+0x55/0x80 [ 28.293145] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.293619] ? irqentry_exit_to_user_mode+0x9/0x20 [ 28.294086] ? irqentry_exit+0x33/0x40 [ 28.294544] ? 
exc_page_fault+0x89/0x170 [ 28.295002] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.295433] RIP: 0033:0x7ff30dd899ef [ 28.295863] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 28.296350] RSP: 002b:00007ffd29f1c430 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 28.296854] RAX: ffffffffffffffda RBX: 0000555cec9ffd10 RCX: 00007ff30dd899ef [ 28.297360] RDX: 00007ffd29f1c4c0 RSI: 0000000000005a12 RDI: 0000000000000003 [ 28.297870] RBP: 00007ffd29f1c4b0 R08: 00000000ffffffff R09: 0000000000000000 [ 28.298379] R10: 0000000000000022 R11: 0000000000000246 R12: 00007ffd29f1c4c0 [ 28.298924] R13: 0000555cec9fd2c0 R14: 00007ffd29f1c4c0 R15: 00007ff30d500758 [ 28.299438] </TASK> [ 28.299952] ================================================================================ [ 28.743998] ================================================================================ [ 28.744910] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/sa.c:339:4 [ 28.745681] index 1 is out of range for type 'uint16_t [1]' [ 28.746338] CPU: 0 PID: 542 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.746944] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.747533] Call Trace: [ 28.748120] <TASK> [ 28.748707] dump_stack_lvl+0x48/0x60 [ 28.749299] dump_stack+0x10/0x20 [ 28.749882] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.750469] sa_attr_op+0x408/0x460 [zfs] [ 28.751268] sa_lookup_uio+0x8a/0x110 [zfs] [ 28.752006] zfs_readlink+0x10b/0x180 [zfs] [ 28.752729] zpl_get_link_common.constprop.0+0xdf/0x150 [zfs] [ 28.753457] ? zpl_get_link_common.constprop.0+0x150/0x150 [zfs] [ 28.754185] zpl_get_link+0x36/0x70 [zfs] [ 28.754919] step_into+0x657/0x740 [ 28.755504] walk_component+0x51/0x170 [ 28.756089] link_path_walk.part.0.constprop.0+0x269/0x3a0 [ 28.756682] ? 
path_init+0x28c/0x3c0 [ 28.757273] path_lookupat+0x3e/0x190 [ 28.757864] filename_lookup+0xe4/0x1e0 [ 28.758454] user_path_at_empty+0x3e/0x60 [ 28.759006] do_faccessat+0x111/0x2f0 [ 28.759530] __x64_sys_access+0x1c/0x20 [ 28.760052] do_syscall_64+0x55/0x80 [ 28.760575] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.761099] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.761624] ? do_syscall_64+0x61/0x80 [ 28.762147] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.762699] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.763225] ? __x64_sys_statfs+0x16/0x20 [ 28.763750] ? do_syscall_64+0x61/0x80 [ 28.764275] ? exc_page_fault+0x89/0x170 [ 28.764800] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.765327] RIP: 0033:0x7fdb651aaaab [ 28.765854] Code: 77 05 c3 0f 1f 40 00 48 8b 15 69 a3 0e 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 0f 1f 40 00 f3 0f 1e fa b8 15 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 39 a3 0e 00 f7 d8 [ 28.766436] RSP: 002b:00007ffca41e01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015 [ 28.767063] RAX: ffffffffffffffda RBX: 00007ffca41e04e8 RCX: 00007fdb651aaaab [ 28.767664] RDX: 00007ffca41e0278 RSI: 0000000000000001 RDI: 00007ffca41e0d3f [ 28.768233] RBP: 000056205c3c81a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.768771] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001 [ 28.769329] R13: 000056205c3c39a6 R14: 0000000000000002 R15: 000056205c3c8ad8 [ 28.769861] </TASK> [ 28.770416] ================================================================================ [ 28.772136] ================================================================================ [ 28.772748] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/sa.c:1733:24 [ 28.773357] index 1 is out of range for type 'uint16_t [1]' [ 28.773959] CPU: 3 PID: 542 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.774564] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.775182] Call Trace: [ 28.775792] <TASK> [ 28.776401] dump_stack_lvl+0x48/0x60 [ 28.777017] dump_stack+0x10/0x20 [ 28.777621] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.778228] sa_find_idx_tab+0x239/0x270 [zfs] [ 28.779021] sa_build_index+0xa3/0x360 [zfs] [ 28.779784] sa_handle_get_from_db+0x120/0x170 [zfs] [ 28.780545] zfs_znode_sa_init+0xb1/0xe0 [zfs] [ 28.781329] zfs_znode_alloc+0x19d/0x7c0 [zfs] [ 28.782117] ? aggsum_add+0x19f/0x1b0 [zfs] [ 28.782889] zfs_zget+0x25b/0x2a0 [zfs] [ 28.783639] zfs_dirent_lock+0x3e8/0x6a0 [zfs] [ 28.784384] zfs_dirlook+0xaa/0x2e0 [zfs] [ 28.785124] ? zfs_zaccess+0x2a0/0x480 [zfs] [ 28.785863] zfs_lookup+0x258/0x410 [zfs] [ 28.786606] zpl_lookup+0xe0/0x210 [zfs] [ 28.787335] __lookup_slow+0x7f/0x120 [ 28.787913] walk_component+0x100/0x170 [ 28.788483] path_lookupat+0x67/0x190 [ 28.789046] filename_lookup+0xe4/0x1e0 [ 28.789601] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.790298] user_path_at_empty+0x3e/0x60 [ 28.790838] do_faccessat+0x111/0x2f0 [ 28.791331] __x64_sys_access+0x1c/0x20 [ 28.791826] do_syscall_64+0x55/0x80 [ 28.792319] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.792816] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.793311] ? do_syscall_64+0x61/0x80 [ 28.793806] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.794303] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.794821] ? __x64_sys_statfs+0x16/0x20 [ 28.795313] ? do_syscall_64+0x61/0x80 [ 28.795804] ? 
exc_page_fault+0x89/0x170 [ 28.796296] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.796789] RIP: 0033:0x7fdb651aaaab [ 28.797278] Code: 77 05 c3 0f 1f 40 00 48 8b 15 69 a3 0e 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 0f 1f 40 00 f3 0f 1e fa b8 15 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 39 a3 0e 00 f7 d8 [ 28.797824] RSP: 002b:00007ffca41e01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015 [ 28.798387] RAX: ffffffffffffffda RBX: 00007ffca41e04e8 RCX: 00007fdb651aaaab [ 28.798978] RDX: 00007ffca41e0278 RSI: 0000000000000001 RDI: 00007ffca41e0d3f [ 28.799547] RBP: 000056205c3c81a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.800119] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001 [ 28.800691] R13: 000056205c3c39a6 R14: 0000000000000002 R15: 000056205c3c8ad8 [ 28.801266] </TASK> [ 28.801851] ================================================================================ [ 28.858889] ================================================================================ [ 28.860019] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:661:24 [ 28.861023] index 18 is out of range for type 'uint16_t [1]' [ 28.861878] CPU: 3 PID: 1 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.862619] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.863316] Call Trace: [ 28.863996] <TASK> [ 28.864671] dump_stack_lvl+0x48/0x60 [ 28.865314] dump_stack+0x10/0x20 [ 28.865914] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.866514] zap_entry_normalization_conflict+0x188/0x1a0 [zfs] [ 28.867297] fzap_lookup+0x146/0x1c0 [zfs] [ 28.868045] zap_lookup_impl+0xae/0x390 [zfs] [ 28.868790] zap_lookup_norm+0xb0/0x110 [zfs] [ 28.869531] zfs_dirent_lock+0x375/0x6a0 [zfs] [ 28.870306] zfs_dirlook+0xaa/0x2e0 [zfs] [ 28.871057] ? zfs_zaccess+0x2a0/0x480 [zfs] [ 28.871792] zfs_lookup+0x258/0x410 [zfs] [ 28.872553] zpl_lookup+0xe0/0x210 [zfs] [ 28.873305] path_openat+0x639/0x1140 [ 28.873897] ? 
rrm_exit+0x59/0xc0 [zfs] [ 28.874653] do_filp_open+0xaf/0x160 [ 28.875141] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.875763] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.876380] do_open_execat+0x5a/0xf0 [ 28.876871] open_exec+0x2b/0x50 [ 28.877358] load_elf_binary+0x210/0x1780 [ 28.877847] bprm_execve+0x28a/0x670 [ 28.878333] do_execveat_common.isra.0+0x1a9/0x250 [ 28.878845] __x64_sys_execve+0x37/0x50 [ 28.879330] do_syscall_64+0x55/0x80 [ 28.879816] ? do_syscall_64+0x61/0x80 [ 28.880300] ? do_syscall_64+0x61/0x80 [ 28.880781] ? do_syscall_64+0x61/0x80 [ 28.881257] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.881739] RIP: 0033:0x7f991965a52b [ 28.882220] Code: b5 11 00 5b 5d 41 5c 41 5d 41 5e 41 5f e9 8d 94 fa ff 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 f3 0f 1e fa b8 3b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd 38 11 00 f7 d8 64 89 01 48 [ 28.882788] RSP: 002b:00007ffdee85ac88 EFLAGS: 00000246 ORIG_RAX: 000000000000003b [ 28.883341] RAX: ffffffffffffffda RBX: 00007ffdee85afa0 RCX: 00007f991965a52b [ 28.883898] RDX: 00007ffdee85afb8 RSI: 00007ffdee85afa8 RDI: 00007ffdee85ce84 [ 28.884455] RBP: 00005621c0ee01a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.885015] R10: 0000000000002000 R11: 0000000000000246 R12: 0000000000000000 [ 28.885576] R13: 00005621c0edb9a6 R14: 0000000000000002 R15: 00005621c0ee0ad8 [ 28.886139] </TASK> [ 28.887918] ================================================================================
```

### Describe how to reproduce the problem

I have a packaged PPA for use with Ubuntu 23.04 (Lunar) here: https://launchpad.net/~satadru-umich/+archive/ubuntu/zfs-experimental/

### Include any warning/errors/backtraces from the system logs

See the UBSAN reports under "Describe the problem you're observing" above.
1.0
6.5 compat: UBSAN now complains about flex-array declarations with `array[1]`. See https://www.spinics.net/lists/linux-xfs/msg73588.html for more information on the changes that were also made to XFS to fix such issues.

### System information

Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | Lunar
Kernel Version | 6.5-rc4
Architecture | x86_64
OpenZFS Version | 2.2.0-rc3 w/ 6.5 patches (https://github.com/openzfs/zfs/pull/15138, https://github.com/openzfs/zfs/pull/15100, https://github.com/openzfs/zfs/commit/325505e5c4e48f32e1a03e42a694509bf4c02670, https://github.com/openzfs/zfs/pull/15101, https://github.com/openzfs/zfs/pull/15099)

### Describe the problem you're observing

```
[ 27.714798] ZFS: Loaded module v2.2.0rc3-lunar3, ZFS pool version 5000, ZFS filesystem version 5 [ 27.811205] ================================================================================ [ 27.812311] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:395:26 [ 27.813393] index 365 is out of range for type 'uint16_t [1]' [ 27.814362] CPU: 1 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.815338] Hardware name: Apple Inc.
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.816313] Call Trace: [ 27.817280] <TASK> [ 27.818237] dump_stack_lvl+0x48/0x60 [ 27.819200] dump_stack+0x10/0x20 [ 27.820146] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.821092] zap_leaf_lookup+0x175/0x180 [zfs] [ 27.822208] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 27.823164] fzap_lookup+0xda/0x1c0 [zfs] [ 27.824247] zap_lookup_impl+0xae/0x390 [zfs] [ 27.825331] zap_lookup+0xa7/0x100 [zfs] [ 27.826409] spa_ld_trusted_config+0x77/0x7e0 [zfs] [ 27.827499] ? dsl_pool_init+0x36/0x70 [zfs] [ 27.828578] spa_ld_mos_with_trusted_config.part.0+0x20/0xb0 [zfs] [ 27.829662] spa_load+0x139/0x1950 [zfs] [ 27.830784] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.831714] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.832634] spa_tryimport+0x15b/0x460 [zfs] [ 27.833576] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.834506] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.835453] ? kvmalloc_node+0x4b/0xe0 [ 27.836233] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.837189] __x64_sys_ioctl+0x94/0xd0 [ 27.837999] do_syscall_64+0x55/0x80 [ 27.838814] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.839562] ? handle_mm_fault+0xad/0x360 [ 27.840298] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.841029] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.841748] ? irqentry_exit+0x33/0x40 [ 27.842451] ? 
exc_page_fault+0x89/0x170 [ 27.843172] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.843856] RIP: 0033:0x7f19be9459ef [ 27.844526] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.845249] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.845980] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.846731] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.847442] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.848141] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.848828] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.849506] </TASK> [ 27.850176] ================================================================================ [ 27.854916] ================================================================================ [ 27.855655] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:314:44 [ 27.856383] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.857099] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.857819] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.858544] Call Trace: [ 27.859280] <TASK> [ 27.859989] dump_stack_lvl+0x48/0x60 [ 27.860687] dump_stack+0x10/0x20 [ 27.861364] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.862045] zap_lockdir_impl+0x86b/0x880 [zfs] [ 27.862894] zap_lockdir+0x91/0xb0 [zfs] [ 27.863712] zap_cursor_retrieve+0x1d7/0x390 [zfs] [ 27.864532] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 27.865213] spa_features_check+0xbd/0x1b0 [zfs] [ 27.866029] spa_load+0x6cd/0x1950 [zfs] [ 27.866854] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.867631] ? 
nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.868400] spa_tryimport+0x15b/0x460 [zfs] [ 27.869204] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.870002] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.870798] ? kvmalloc_node+0x4b/0xe0 [ 27.871423] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.872170] __x64_sys_ioctl+0x94/0xd0 [ 27.872766] do_syscall_64+0x55/0x80 [ 27.873353] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.873939] ? handle_mm_fault+0xad/0x360 [ 27.874512] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.875054] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.875569] ? irqentry_exit+0x33/0x40 [ 27.876082] ? exc_page_fault+0x89/0x170 [ 27.876595] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.877111] RIP: 0033:0x7f19be9459ef [ 27.877625] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.878194] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.878808] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.879397] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.879988] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.880579] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.881168] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.881757] </TASK> [ 27.882346] ================================================================================ [ 27.882977] ================================================================================ [ 27.883569] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:473:34 [ 27.884170] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 27.884772] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.885380] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.885997] Call Trace: [ 27.886638] <TASK> [ 27.887252] dump_stack_lvl+0x48/0x60 [ 27.887868] dump_stack+0x10/0x20 [ 27.888480] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.889094] zap_lockdir_impl+0x849/0x880 [zfs] [ 27.889841] zap_lockdir+0x91/0xb0 [zfs] [ 27.890601] zap_cursor_retrieve+0x1d7/0x390 [zfs] [ 27.891363] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 27.891989] spa_features_check+0xbd/0x1b0 [zfs] [ 27.892735] spa_load+0x6cd/0x1950 [zfs] [ 27.893453] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.894109] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.894788] spa_tryimport+0x15b/0x460 [zfs] [ 27.895470] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.896146] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.896823] ? kvmalloc_node+0x4b/0xe0 [ 27.897366] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.898030] __x64_sys_ioctl+0x94/0xd0 [ 27.898560] do_syscall_64+0x55/0x80 [ 27.899124] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.899654] ? handle_mm_fault+0xad/0x360 [ 27.900175] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.900694] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.901205] ? irqentry_exit+0x33/0x40 [ 27.901705] ? 
exc_page_fault+0x89/0x170 [ 27.902209] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.902742] RIP: 0033:0x7f19be9459ef [ 27.903248] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.903811] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.904389] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.904973] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.905557] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.906142] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.906758] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.907347] </TASK> [ 27.907935] ================================================================================ [ 27.908573] ================================================================================ [ 27.909251] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1632:28 [ 27.909928] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 27.910621] CPU: 1 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.911305] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.911998] Call Trace: [ 27.912690] <TASK> [ 27.913379] dump_stack_lvl+0x48/0x60 [ 27.914073] dump_stack+0x10/0x20 [ 27.914763] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.915455] zap_cursor_retrieve+0x35d/0x390 [zfs] [ 27.916305] spa_features_check+0xbd/0x1b0 [zfs] [ 27.917146] spa_load+0x6cd/0x1950 [zfs] [ 27.917994] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.918811] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.919624] spa_tryimport+0x15b/0x460 [zfs] [ 27.920435] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.921205] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.921972] ? 
kvmalloc_node+0x4b/0xe0 [ 27.922589] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.923346] __x64_sys_ioctl+0x94/0xd0 [ 27.923956] do_syscall_64+0x55/0x80 [ 27.924562] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.925167] ? handle_mm_fault+0xad/0x360 [ 27.925765] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.926372] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.926941] ? irqentry_exit+0x33/0x40 [ 27.927462] ? exc_page_fault+0x89/0x170 [ 27.927979] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.928490] RIP: 0033:0x7f19be9459ef [ 27.928993] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.929555] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.930135] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.930747] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.931334] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.931921] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.932509] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.933101] </TASK> [ 27.933693] ================================================================================ [ 27.934566] ================================================================================ [ 27.935254] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:339:46 [ 27.935938] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.936620] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.937307] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.938001] Call Trace: [ 27.938702] <TASK> [ 27.939316] dump_stack_lvl+0x48/0x60 [ 27.939934] dump_stack+0x10/0x20 [ 27.940548] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.941167] mze_find+0xfa/0x110 [zfs] [ 27.941916] zap_lookup_impl+0x103/0x390 [zfs] [ 27.942692] zap_lookup+0xa7/0x100 [zfs] [ 27.943438] feature_get_refcount_from_disk+0x62/0xd0 [zfs] [ 27.944187] spa_load+0x7b4/0x1950 [zfs] [ 27.944944] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.945672] ? nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.946399] spa_tryimport+0x15b/0x460 [zfs] [ 27.947188] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.947908] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.948598] ? kvmalloc_node+0x4b/0xe0 [ 27.949151] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.949828] __x64_sys_ioctl+0x94/0xd0 [ 27.950376] do_syscall_64+0x55/0x80 [ 27.950948] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.951496] ? handle_mm_fault+0xad/0x360 [ 27.952036] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.952574] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.953114] ? irqentry_exit+0x33/0x40 [ 27.953642] ? 
exc_page_fault+0x89/0x170 [ 27.954165] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.954715] RIP: 0033:0x7f19be9459ef [ 27.955227] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.955786] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.956367] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.956951] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.957539] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.958127] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.958742] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.959333] </TASK> [ 27.959937] ================================================================================ [ 27.960558] ================================================================================ [ 27.961152] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1021:27 [ 27.961757] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 27.962361] CPU: 0 PID: 306 Comm: zpool Tainted: P OE 6.5.0-rc4 #1 [ 27.963000] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 27.963619] Call Trace: [ 27.964237] <TASK> [ 27.964852] dump_stack_lvl+0x48/0x60 [ 27.965471] dump_stack+0x10/0x20 [ 27.966085] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 27.966732] zap_lookup_impl+0x34e/0x390 [zfs] [ 27.967482] zap_lookup+0xa7/0x100 [zfs] [ 27.968226] feature_get_refcount_from_disk+0x62/0xd0 [zfs] [ 27.968977] spa_load+0x7b4/0x1950 [zfs] [ 27.969732] ? zpool_get_load_policy+0x194/0x1a0 [zfs] [ 27.970459] ? 
nvt_lookup_name_type.isra.0+0x6f/0xb0 [zfs] [ 27.971209] spa_tryimport+0x15b/0x460 [zfs] [ 27.971933] zfs_ioc_pool_tryimport+0x79/0xd0 [zfs] [ 27.972619] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 27.973303] ? kvmalloc_node+0x4b/0xe0 [ 27.973854] zfsdev_ioctl+0x57/0xe0 [zfs] [ 27.974529] __x64_sys_ioctl+0x94/0xd0 [ 27.975083] do_syscall_64+0x55/0x80 [ 27.975627] ? count_memcg_events.constprop.0+0x1e/0x30 [ 27.976168] ? handle_mm_fault+0xad/0x360 [ 27.976703] ? exit_to_user_mode_prepare+0x35/0x170 [ 27.977243] ? irqentry_exit_to_user_mode+0x9/0x20 [ 27.977776] ? irqentry_exit+0x33/0x40 [ 27.978297] ? exc_page_fault+0x89/0x170 [ 27.978818] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 27.979331] RIP: 0033:0x7f19be9459ef [ 27.979834] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 27.980396] RSP: 002b:00007ffe1b29ee40 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 27.980976] RAX: ffffffffffffffda RBX: 000055c823b3e570 RCX: 00007f19be9459ef [ 27.981561] RDX: 00007ffe1b29eeb0 RSI: 0000000000005a06 RDI: 0000000000000003 [ 27.982147] RBP: 00007ffe1b2a2490 R08: 0000000000000000 R09: 00007f19bea2b420 [ 27.982740] R10: 000055c823b5f000 R11: 0000000000000246 R12: 000055c823ae42c0 [ 27.983328] R13: 00007ffe1b29eeb0 R14: 00007ffe1b2a2570 R15: 0000000000000000 [ 27.983920] </TASK> [ 27.984511] ================================================================================ [ 28.110649] ================================================================================ [ 28.111338] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1387:22 [ 28.112019] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 28.112699] CPU: 0 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.113386] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.114080] Call Trace: [ 28.114779] <TASK> [ 28.115469] dump_stack_lvl+0x48/0x60 [ 28.116166] dump_stack+0x10/0x20 [ 28.116857] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.117551] zap_update+0x2a0/0x2b0 [zfs] [ 28.118426] feature_sync+0x57/0x150 [zfs] [ 28.119280] spa_feature_incr+0x74/0x120 [zfs] [ 28.120121] space_map_alloc+0x73/0x80 [zfs] [ 28.120966] spa_generate_syncing_log_sm+0xd1/0x250 [zfs] [ 28.121819] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.122675] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.123508] ? mutex_lock+0x12/0x40 [ 28.124167] spa_sync+0x626/0x1040 [zfs] [ 28.124978] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.125792] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.126616] ? spl_kmem_free+0x29/0x30 [spl] [ 28.127178] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.127860] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.128535] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.129084] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.129629] kthread+0xef/0x120 [ 28.130165] ? kthread_complete_and_exit+0x20/0x20 [ 28.130697] ret_from_fork+0x36/0x50 [ 28.131218] ? kthread_complete_and_exit+0x20/0x20 [ 28.131738] ret_from_fork_asm+0x11/0x20 [ 28.132253] </TASK> [ 28.132760] ================================================================================ [ 28.134081] ================================================================================ [ 28.134628] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1447:4 [ 28.135220] index 48 is out of range for type 'mzap_ent_phys_t [1]' [ 28.135814] CPU: 2 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.136414] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.137023] Call Trace: [ 28.137630] <TASK> [ 28.138231] dump_stack_lvl+0x48/0x60 [ 28.138822] dump_stack+0x10/0x20 [ 28.139353] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.139888] zap_remove_impl+0x1be/0x1d0 [zfs] [ 28.140554] zap_remove+0x8b/0xe0 [zfs] [ 28.141217] zap_remove_int+0x6b/0x90 [zfs] [ 28.141877] spa_cleanup_old_sm_logs+0xfe/0x180 [zfs] [ 28.142547] metaslab_unflushed_bump+0x123/0x160 [zfs] [ 28.143222] spa_flush_metaslabs+0x211/0x410 [zfs] [ 28.143893] spa_sync+0x626/0x1040 [zfs] [ 28.144564] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.145235] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.145905] ? spl_kmem_free+0x29/0x30 [spl] [ 28.146451] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.147125] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.147793] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.148339] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.148885] kthread+0xef/0x120 [ 28.149382] ? kthread_complete_and_exit+0x20/0x20 [ 28.149852] ret_from_fork+0x36/0x50 [ 28.150315] ? kthread_complete_and_exit+0x20/0x20 [ 28.150780] ret_from_fork_asm+0x11/0x20 [ 28.151243] </TASK> [ 28.151702] ================================================================================ [ 28.172651] ================================================================================ [ 28.173435] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:314:44 [ 28.174086] index 1 is out of range for type 'mzap_ent_phys_t [1]' [ 28.174624] CPU: 1 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.175152] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.175678] Call Trace: [ 28.176197] <TASK> [ 28.176706] dump_stack_lvl+0x48/0x60 [ 28.177212] dump_stack+0x10/0x20 [ 28.177713] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.178221] mzap_addent+0x2b7/0x2d0 [zfs] [ 28.178909] zap_add_impl+0x33d/0x360 [zfs] [ 28.179566] zap_add+0xca/0xf0 [zfs] [ 28.180219] zap_add_int_key+0x7d/0xa0 [zfs] [ 28.180874] spa_generate_syncing_log_sm+0xe9/0x250 [zfs] [ 28.181536] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.182198] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.182847] ? mutex_lock+0x12/0x40 [ 28.183354] spa_sync+0x626/0x1040 [zfs] [ 28.184046] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.184748] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.185452] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.186155] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.186683] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.187206] kthread+0xef/0x120 [ 28.187714] ? kthread_complete_and_exit+0x20/0x20 [ 28.188229] ret_from_fork+0x36/0x50 [ 28.188740] ? kthread_complete_and_exit+0x20/0x20 [ 28.189254] ret_from_fork_asm+0x11/0x20 [ 28.189769] </TASK> [ 28.190280] ================================================================================ [ 28.208303] ================================================================================ [ 28.209136] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_micro.c:1234:52 [ 28.209784] index 2 is out of range for type 'mzap_ent_phys_t [1]' [ 28.210317] CPU: 3 PID: 396 Comm: txg_sync Tainted: P OE 6.5.0-rc4 #1 [ 28.210860] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.211409] Call Trace: [ 28.211945] <TASK> [ 28.212471] dump_stack_lvl+0x48/0x60 [ 28.212998] dump_stack+0x10/0x20 [ 28.213509] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.214017] mzap_addent+0x298/0x2d0 [zfs] [ 28.214709] zap_add_impl+0x33d/0x360 [zfs] [ 28.215369] zap_add+0xca/0xf0 [zfs] [ 28.216028] zap_add_int_key+0x7d/0xa0 [zfs] [ 28.216686] spa_generate_syncing_log_sm+0xe9/0x250 [zfs] [ 28.217354] spa_flush_metaslabs+0xa8/0x410 [zfs] [ 28.218021] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.218690] ? mutex_lock+0x12/0x40 [ 28.219206] spa_sync+0x626/0x1040 [zfs] [ 28.219874] ? spa_txg_history_init_io+0x114/0x120 [zfs] [ 28.220538] txg_sync_thread+0x1fd/0x390 [zfs] [ 28.221204] ? txg_register_callbacks+0xb0/0xb0 [zfs] [ 28.221869] ? spl_taskq_fini+0x80/0x80 [spl] [ 28.222395] thread_generic_wrapper+0x5c/0x70 [spl] [ 28.222926] kthread+0xef/0x120 [ 28.223438] ? kthread_complete_and_exit+0x20/0x20 [ 28.223957] ret_from_fork+0x36/0x50 [ 28.224473] ? kthread_complete_and_exit+0x20/0x20 [ 28.224991] ret_from_fork_asm+0x11/0x20 [ 28.225510] </TASK> [ 28.226029] ================================================================================ [ 28.273173] ================================================================================ [ 28.273733] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:444:49 [ 28.274275] index 1 is out of range for type 'uint16_t [1]' [ 28.274840] CPU: 0 PID: 424 Comm: zfs Tainted: P OE 6.5.0-rc4 #1 [ 28.275385] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.275930] Call Trace: [ 28.276468] <TASK> [ 28.276996] dump_stack_lvl+0x48/0x60 [ 28.277522] dump_stack+0x10/0x20 [ 28.278043] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.278569] zap_leaf_lookup_closest+0x203/0x220 [zfs] [ 28.279280] fzap_cursor_retrieve+0x10f/0x380 [zfs] [ 28.279958] zap_cursor_retrieve+0x266/0x390 [zfs] [ 28.280635] ? 
dbuf_cache_multilist_index_func+0x31/0x40 [zfs] [ 28.281303] ? mutex_lock+0x12/0x40 [ 28.281839] dsl_prop_get_all_impl+0x247/0x7b0 [zfs] [ 28.282525] ? dmu_buf_rele+0x3b/0x40 [zfs] [ 28.283200] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.283746] ? __kmalloc_node+0x52/0xd0 [ 28.284279] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.284821] ? spl_kvmalloc+0x7a/0xa0 [spl] [ 28.285359] ? strlcat+0x56/0x80 [ 28.285882] ? dsl_dir_name+0x104/0x1a0 [zfs] [ 28.286555] ? strlcat+0x56/0x80 [ 28.287037] ? dsl_dir_name+0x104/0x1a0 [zfs] [ 28.287633] ? strlcat+0x56/0x80 [ 28.288095] dsl_prop_get_all_ds+0xcf/0x1a0 [zfs] [ 28.288694] dsl_prop_get_all+0x13/0x20 [zfs] [ 28.289289] zfs_ioc_objset_stats_impl+0x79/0x110 [zfs] [ 28.289882] zfs_ioc_objset_stats+0x66/0x80 [zfs] [ 28.290475] zfsdev_ioctl_common+0x893/0x9f0 [zfs] [ 28.291094] ? kvmalloc_node+0x4b/0xe0 [ 28.291558] zfsdev_ioctl+0x57/0xe0 [zfs] [ 28.292179] __x64_sys_ioctl+0x94/0xd0 [ 28.292667] do_syscall_64+0x55/0x80 [ 28.293145] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.293619] ? irqentry_exit_to_user_mode+0x9/0x20 [ 28.294086] ? irqentry_exit+0x33/0x40 [ 28.294544] ? 
exc_page_fault+0x89/0x170 [ 28.295002] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.295433] RIP: 0033:0x7ff30dd899ef [ 28.295863] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00 [ 28.296350] RSP: 002b:00007ffd29f1c430 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 28.296854] RAX: ffffffffffffffda RBX: 0000555cec9ffd10 RCX: 00007ff30dd899ef [ 28.297360] RDX: 00007ffd29f1c4c0 RSI: 0000000000005a12 RDI: 0000000000000003 [ 28.297870] RBP: 00007ffd29f1c4b0 R08: 00000000ffffffff R09: 0000000000000000 [ 28.298379] R10: 0000000000000022 R11: 0000000000000246 R12: 00007ffd29f1c4c0 [ 28.298924] R13: 0000555cec9fd2c0 R14: 00007ffd29f1c4c0 R15: 00007ff30d500758 [ 28.299438] </TASK> [ 28.299952] ================================================================================ [ 28.743998] ================================================================================ [ 28.744910] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/sa.c:339:4 [ 28.745681] index 1 is out of range for type 'uint16_t [1]' [ 28.746338] CPU: 0 PID: 542 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.746944] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.747533] Call Trace: [ 28.748120] <TASK> [ 28.748707] dump_stack_lvl+0x48/0x60 [ 28.749299] dump_stack+0x10/0x20 [ 28.749882] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.750469] sa_attr_op+0x408/0x460 [zfs] [ 28.751268] sa_lookup_uio+0x8a/0x110 [zfs] [ 28.752006] zfs_readlink+0x10b/0x180 [zfs] [ 28.752729] zpl_get_link_common.constprop.0+0xdf/0x150 [zfs] [ 28.753457] ? zpl_get_link_common.constprop.0+0x150/0x150 [zfs] [ 28.754185] zpl_get_link+0x36/0x70 [zfs] [ 28.754919] step_into+0x657/0x740 [ 28.755504] walk_component+0x51/0x170 [ 28.756089] link_path_walk.part.0.constprop.0+0x269/0x3a0 [ 28.756682] ? 
path_init+0x28c/0x3c0 [ 28.757273] path_lookupat+0x3e/0x190 [ 28.757864] filename_lookup+0xe4/0x1e0 [ 28.758454] user_path_at_empty+0x3e/0x60 [ 28.759006] do_faccessat+0x111/0x2f0 [ 28.759530] __x64_sys_access+0x1c/0x20 [ 28.760052] do_syscall_64+0x55/0x80 [ 28.760575] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.761099] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.761624] ? do_syscall_64+0x61/0x80 [ 28.762147] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.762699] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.763225] ? __x64_sys_statfs+0x16/0x20 [ 28.763750] ? do_syscall_64+0x61/0x80 [ 28.764275] ? exc_page_fault+0x89/0x170 [ 28.764800] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.765327] RIP: 0033:0x7fdb651aaaab [ 28.765854] Code: 77 05 c3 0f 1f 40 00 48 8b 15 69 a3 0e 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 0f 1f 40 00 f3 0f 1e fa b8 15 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 39 a3 0e 00 f7 d8 [ 28.766436] RSP: 002b:00007ffca41e01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015 [ 28.767063] RAX: ffffffffffffffda RBX: 00007ffca41e04e8 RCX: 00007fdb651aaaab [ 28.767664] RDX: 00007ffca41e0278 RSI: 0000000000000001 RDI: 00007ffca41e0d3f [ 28.768233] RBP: 000056205c3c81a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.768771] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001 [ 28.769329] R13: 000056205c3c39a6 R14: 0000000000000002 R15: 000056205c3c8ad8 [ 28.769861] </TASK> [ 28.770416] ================================================================================ [ 28.772136] ================================================================================ [ 28.772748] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/sa.c:1733:24 [ 28.773357] index 1 is out of range for type 'uint16_t [1]' [ 28.773959] CPU: 3 PID: 542 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.774564] Hardware name: Apple Inc. 
Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.775182] Call Trace: [ 28.775792] <TASK> [ 28.776401] dump_stack_lvl+0x48/0x60 [ 28.777017] dump_stack+0x10/0x20 [ 28.777621] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.778228] sa_find_idx_tab+0x239/0x270 [zfs] [ 28.779021] sa_build_index+0xa3/0x360 [zfs] [ 28.779784] sa_handle_get_from_db+0x120/0x170 [zfs] [ 28.780545] zfs_znode_sa_init+0xb1/0xe0 [zfs] [ 28.781329] zfs_znode_alloc+0x19d/0x7c0 [zfs] [ 28.782117] ? aggsum_add+0x19f/0x1b0 [zfs] [ 28.782889] zfs_zget+0x25b/0x2a0 [zfs] [ 28.783639] zfs_dirent_lock+0x3e8/0x6a0 [zfs] [ 28.784384] zfs_dirlook+0xaa/0x2e0 [zfs] [ 28.785124] ? zfs_zaccess+0x2a0/0x480 [zfs] [ 28.785863] zfs_lookup+0x258/0x410 [zfs] [ 28.786606] zpl_lookup+0xe0/0x210 [zfs] [ 28.787335] __lookup_slow+0x7f/0x120 [ 28.787913] walk_component+0x100/0x170 [ 28.788483] path_lookupat+0x67/0x190 [ 28.789046] filename_lookup+0xe4/0x1e0 [ 28.789601] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.790298] user_path_at_empty+0x3e/0x60 [ 28.790838] do_faccessat+0x111/0x2f0 [ 28.791331] __x64_sys_access+0x1c/0x20 [ 28.791826] do_syscall_64+0x55/0x80 [ 28.792319] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.792816] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.793311] ? do_syscall_64+0x61/0x80 [ 28.793806] ? exit_to_user_mode_prepare+0x35/0x170 [ 28.794303] ? syscall_exit_to_user_mode+0x26/0x40 [ 28.794821] ? __x64_sys_statfs+0x16/0x20 [ 28.795313] ? do_syscall_64+0x61/0x80 [ 28.795804] ? 
exc_page_fault+0x89/0x170 [ 28.796296] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.796789] RIP: 0033:0x7fdb651aaaab [ 28.797278] Code: 77 05 c3 0f 1f 40 00 48 8b 15 69 a3 0e 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 0f 1f 40 00 f3 0f 1e fa b8 15 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 39 a3 0e 00 f7 d8 [ 28.797824] RSP: 002b:00007ffca41e01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000015 [ 28.798387] RAX: ffffffffffffffda RBX: 00007ffca41e04e8 RCX: 00007fdb651aaaab [ 28.798978] RDX: 00007ffca41e0278 RSI: 0000000000000001 RDI: 00007ffca41e0d3f [ 28.799547] RBP: 000056205c3c81a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.800119] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001 [ 28.800691] R13: 000056205c3c39a6 R14: 0000000000000002 R15: 000056205c3c8ad8 [ 28.801266] </TASK> [ 28.801851] ================================================================================ [ 28.858889] ================================================================================ [ 28.860019] UBSAN: array-index-out-of-bounds in /var/lib/dkms/zfs/2.2.0rc3/build/module/zfs/zap_leaf.c:661:24 [ 28.861023] index 18 is out of range for type 'uint16_t [1]' [ 28.861878] CPU: 3 PID: 1 Comm: run-init Tainted: P OE 6.5.0-rc4 #1 [ 28.862619] Hardware name: Apple Inc. Macmini7,1/Mac-35C5E08120C7EEAF, BIOS 249.0.0.0.0 06/11/2020 [ 28.863316] Call Trace: [ 28.863996] <TASK> [ 28.864671] dump_stack_lvl+0x48/0x60 [ 28.865314] dump_stack+0x10/0x20 [ 28.865914] __ubsan_handle_out_of_bounds+0xc6/0x100 [ 28.866514] zap_entry_normalization_conflict+0x188/0x1a0 [zfs] [ 28.867297] fzap_lookup+0x146/0x1c0 [zfs] [ 28.868045] zap_lookup_impl+0xae/0x390 [zfs] [ 28.868790] zap_lookup_norm+0xb0/0x110 [zfs] [ 28.869531] zfs_dirent_lock+0x375/0x6a0 [zfs] [ 28.870306] zfs_dirlook+0xaa/0x2e0 [zfs] [ 28.871057] ? zfs_zaccess+0x2a0/0x480 [zfs] [ 28.871792] zfs_lookup+0x258/0x410 [zfs] [ 28.872553] zpl_lookup+0xe0/0x210 [zfs] [ 28.873305] path_openat+0x639/0x1140 [ 28.873897] ? 
rrm_exit+0x59/0xc0 [zfs] [ 28.874653] do_filp_open+0xaf/0x160 [ 28.875141] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.875763] ? zpl_ioctl_fideduperange+0x20/0x20 [zfs] [ 28.876380] do_open_execat+0x5a/0xf0 [ 28.876871] open_exec+0x2b/0x50 [ 28.877358] load_elf_binary+0x210/0x1780 [ 28.877847] bprm_execve+0x28a/0x670 [ 28.878333] do_execveat_common.isra.0+0x1a9/0x250 [ 28.878845] __x64_sys_execve+0x37/0x50 [ 28.879330] do_syscall_64+0x55/0x80 [ 28.879816] ? do_syscall_64+0x61/0x80 [ 28.880300] ? do_syscall_64+0x61/0x80 [ 28.880781] ? do_syscall_64+0x61/0x80 [ 28.881257] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 28.881739] RIP: 0033:0x7f991965a52b [ 28.882220] Code: b5 11 00 5b 5d 41 5c 41 5d 41 5e 41 5f e9 8d 94 fa ff 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 f3 0f 1e fa b8 3b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd 38 11 00 f7 d8 64 89 01 48 [ 28.882788] RSP: 002b:00007ffdee85ac88 EFLAGS: 00000246 ORIG_RAX: 000000000000003b [ 28.883341] RAX: ffffffffffffffda RBX: 00007ffdee85afa0 RCX: 00007f991965a52b [ 28.883898] RDX: 00007ffdee85afb8 RSI: 00007ffdee85afa8 RDI: 00007ffdee85ce84 [ 28.884455] RBP: 00005621c0ee01a9 R08: 0000000000000000 R09: 0000000000000000 [ 28.885015] R10: 0000000000002000 R11: 0000000000000246 R12: 0000000000000000 [ 28.885576] R13: 00005621c0edb9a6 R14: 0000000000000002 R15: 00005621c0ee0ad8 [ 28.886139] </TASK> [ 28.887918] ================================================================================ ``` ### Describe how to reproduce the problem I have a packaged PPA for use with ubuntu 22.10 (Ubuntu Lunar) here: https://launchpad.net/~satadru-umich/+archive/ubuntu/zfs-experimental/ ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
compat ubsan now complains about flex array declarations with array see for more information on changes which were also made to xfs to fix such issues thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version lunar kernel version architecture openzfs version w patches command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing zfs loaded module zfs pool version zfs filesystem version ubsan array index out of bounds in var lib dkms zfs build module zfs zap leaf c index is out of range for type t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap leaf lookup spl kvmalloc fzap lookup zap lookup impl zap lookup spa ld trusted config dsl pool init spa ld mos with trusted config part spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap lockdir impl zap lockdir zap cursor retrieve spl kvmalloc spa features check spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc 
node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap lockdir impl zap lockdir zap cursor retrieve spl kvmalloc spa features check spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap cursor retrieve spa features check spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds mze find zap lookup impl zap lookup 
feature get refcount from disk spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm zpool tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap lookup impl zap lookup feature get refcount from disk spa load zpool get load policy nvt lookup name type isra spa tryimport zfs ioc pool tryimport zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall count memcg events constprop handle mm fault exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm txg sync tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap update feature sync spa feature incr space map alloc spa generate syncing log sm spa flush metaslabs dmu buf rele mutex lock spa sync spa txg history init io txg sync thread spl kmem free txg register callbacks txg register callbacks spl taskq fini thread generic wrapper kthread kthread complete and exit ret from fork kthread complete and exit ret from fork asm ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm txg sync tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump 
stack ubsan handle out of bounds zap remove impl zap remove zap remove int spa cleanup old sm logs metaslab unflushed bump spa flush metaslabs spa sync spa txg history init io txg sync thread spl kmem free txg register callbacks txg register callbacks spl taskq fini thread generic wrapper kthread kthread complete and exit ret from fork kthread complete and exit ret from fork asm ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm txg sync tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds mzap addent zap add impl zap add zap add int key spa generate syncing log sm spa flush metaslabs dmu buf rele mutex lock spa sync spa txg history init io txg sync thread txg register callbacks spl taskq fini thread generic wrapper kthread kthread complete and exit ret from fork kthread complete and exit ret from fork asm ubsan array index out of bounds in var lib dkms zfs build module zfs zap micro c index is out of range for type mzap ent phys t cpu pid comm txg sync tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds mzap addent zap add impl zap add zap add int key spa generate syncing log sm spa flush metaslabs dmu buf rele mutex lock spa sync spa txg history init io txg sync thread txg register callbacks spl taskq fini thread generic wrapper kthread kthread complete and exit ret from fork kthread complete and exit ret from fork asm ubsan array index out of bounds in var lib dkms zfs build module zfs zap leaf c index is out of range for type t cpu pid comm zfs tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap leaf lookup closest fzap cursor retrieve zap cursor retrieve dbuf cache multilist index func mutex lock dsl prop get all impl dmu buf rele spl kvmalloc kmalloc node spl kvmalloc spl kvmalloc strlcat dsl dir 
name strlcat dsl dir name strlcat dsl prop get all ds dsl prop get all zfs ioc objset stats impl zfs ioc objset stats zfsdev ioctl common kvmalloc node zfsdev ioctl sys ioctl do syscall exit to user mode prepare irqentry exit to user mode irqentry exit exc page fault entry syscall after hwframe rip code ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs sa c index is out of range for type t cpu pid comm run init tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds sa attr op sa lookup uio zfs readlink zpl get link common constprop zpl get link common constprop zpl get link step into walk component link path walk part constprop path init path lookupat filename lookup user path at empty do faccessat sys access do syscall exit to user mode prepare syscall exit to user mode do syscall exit to user mode prepare syscall exit to user mode sys statfs do syscall exc page fault entry syscall after hwframe rip code ff ff ff ff fa ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds in var lib dkms zfs build module zfs sa c index is out of range for type t cpu pid comm run init tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds sa find idx tab sa build index sa handle get from db zfs znode sa init zfs znode alloc aggsum add zfs zget zfs dirent lock zfs dirlook zfs zaccess zfs lookup zpl lookup lookup slow walk component path lookupat filename lookup zpl ioctl fideduperange user path at empty do faccessat sys access do syscall exit to user mode prepare syscall exit to user mode do syscall exit to user mode prepare syscall exit to user mode sys statfs do syscall exc page fault entry syscall after hwframe rip code ff ff ff ff fa ff ff rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp ubsan array index out of bounds 
in var lib dkms zfs build module zfs zap leaf c index is out of range for type t cpu pid comm run init tainted p oe hardware name apple inc mac bios call trace dump stack lvl dump stack ubsan handle out of bounds zap entry normalization conflict fzap lookup zap lookup impl zap lookup norm zfs dirent lock zfs dirlook zfs zaccess zfs lookup zpl lookup path openat rrm exit do filp open zpl ioctl fideduperange zpl ioctl fideduperange do open execat open exec load elf binary bprm execve do execveat common isra sys execve do syscall do syscall do syscall do syscall entry syscall after hwframe rip code fa ff fa ff ff bd rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp describe how to reproduce the problem i have a packaged ppa for use with ubuntu ubuntu lunar here include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
233,867
25,780,399,838
IssuesEvent
2022-12-09 15:26:45
smb-h/Estates-price-prediction
https://api.github.com/repos/smb-h/Estates-price-prediction
closed
CVE-2022-21797 (High) detected in joblib-1.1.0-py2.py3-none-any.whl
security vulnerability
## CVE-2022-21797 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>joblib-1.1.0-py2.py3-none-any.whl</b></p></summary> <p>Lightweight pipelining with Python functions</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/3e/d5/0163eb0cfa0b673aa4fe1cd3ea9d8a81ea0f32e50807b0c295871e4aab2e/joblib-1.1.0-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/3e/d5/0163eb0cfa0b673aa4fe1cd3ea9d8a81ea0f32e50807b0c295871e4aab2e/joblib-1.1.0-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/requirements.txt</p> <p> Dependency Hierarchy: - :x: **joblib-1.1.0-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package joblib from 0 and before 1.2.0 are vulnerable to Arbitrary Code Execution via the pre_dispatch flag in Parallel() class due to the eval() statement. <p>Publish Date: 2022-09-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21797>CVE-2022-21797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-09-26</p> <p>Fix Resolution: joblib - 1.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-21797 (High) detected in joblib-1.1.0-py2.py3-none-any.whl - ## CVE-2022-21797 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>joblib-1.1.0-py2.py3-none-any.whl</b></p></summary> <p>Lightweight pipelining with Python functions</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/3e/d5/0163eb0cfa0b673aa4fe1cd3ea9d8a81ea0f32e50807b0c295871e4aab2e/joblib-1.1.0-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/3e/d5/0163eb0cfa0b673aa4fe1cd3ea9d8a81ea0f32e50807b0c295871e4aab2e/joblib-1.1.0-py2.py3-none-any.whl</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/requirements.txt</p> <p> Dependency Hierarchy: - :x: **joblib-1.1.0-py2.py3-none-any.whl** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package joblib from 0 and before 1.2.0 are vulnerable to Arbitrary Code Execution via the pre_dispatch flag in Parallel() class due to the eval() statement. <p>Publish Date: 2022-09-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21797>CVE-2022-21797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-09-26</p> <p>Fix Resolution: joblib - 1.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in joblib none any whl cve high severity vulnerability vulnerable library joblib none any whl lightweight pipelining with python functions library home page a href path to dependency file requirements txt path to vulnerable library requirements txt requirements txt dependency hierarchy x joblib none any whl vulnerable library found in base branch main vulnerability details the package joblib from and before are vulnerable to arbitrary code execution via the pre dispatch flag in parallel class due to the eval statement publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution joblib step up your open source security game with mend
0
52,290
13,216,683,175
IssuesEvent
2020-08-17 04:39:41
hazelcast/hazelcast-jet
https://api.github.com/repos/hazelcast/hazelcast-jet
closed
When specifying --targets it overrides all config settings
cli defect
When specifying --targets with `jet` command it overrides all the other config settings specified in `hazelcast-client.yaml` rather than just updating the relevant options.
1.0
When specifying --targets it overrides all config settings - When specifying --targets with `jet` command it overrides all the other config settings specified in `hazelcast-client.yaml` rather than just updating the relevant options.
defect
when specifying targets it overrides all config settings when specifying targets with jet command it overrides all the other config settings specified in hazelcast client yaml rather than just updating the relevant options
1
17,319
2,998,407,865
IssuesEvent
2015-07-23 13:59:25
contao/core-bundle
https://api.github.com/repos/contao/core-bundle
closed
Rename String class
defect needs backport
In PHP 7 **int, float, string and bool** are becoming reserved words. see: https://wiki.php.net/rfc/scalar_type_hints It would be nice if you could rename the String class, to make class names PHP 7 compitable. https://github.com/contao/core-bundle/blob/master/src/Resources/contao/library/Contao/String.php Thank you.
1.0
Rename String class - In PHP 7 **int, float, string and bool** are becoming reserved words. see: https://wiki.php.net/rfc/scalar_type_hints It would be nice if you could rename the String class, to make class names PHP 7 compitable. https://github.com/contao/core-bundle/blob/master/src/Resources/contao/library/Contao/String.php Thank you.
defect
rename string class in php int float string and bool are becoming reserved words see it would be nice if you could rename the string class to make class names php compitable thank you
1
40,170
9,882,196,594
IssuesEvent
2019-06-24 16:15:53
openanthem/nimbus-core
https://api.github.com/repos/openanthem/nimbus-core
reopened
pick list not populating data
Closed Defect
When we try to open a picklist, we are seeing the following browser console error: {"stack":"TypeError: Cannot read property 'label' of undefined\n at t.getDesc (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2493943)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2493943)/n> {color:#d04437}at Object.updateRenderer {color}(http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2495768)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2495768)/n> at Object.updateRenderer (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:457051)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:457051)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448869)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448869)/n> at lv (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)/n> at ov (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)/n> at lv (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)/n> at ov (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)",{color:red}"{color:#d04437}message":"Cannot <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)> read property 'label' of undefined","name":"TypeError","logData":"Uncaught Exception"}{color}{color}
1.0
pick list not populating data - When we try to open a picklist, we are seeing the following browser console error: {"stack":"TypeError: Cannot read property 'label' of undefined\n at t.getDesc (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2493943)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2493943)/n> {color:#d04437}at Object.updateRenderer {color}(http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2495768)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:2495768)/n> at Object.updateRenderer (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:457051)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:457051)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448869)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448869)/n> at lv (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)/n> at ov (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)/n> at lv (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455689)/n> at ov (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)\n <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:455352)/n> at Zy (http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)",{color:red}"{color:#d04437}message":"Cannot <http://localhost:9095/example/main.ee8ebc9d9a8edd2f0b64.js:1:448767)> read property 'label' of undefined","name":"TypeError","logData":"Uncaught Exception"}{color}{color}
defect
pick list not populating data when we try to open a picklist we are seeing the following browser console error stack typeerror cannot read property label of undefined n at t getdesc color at object updaterenderer color at object updaterenderer at zy at lv at ov at zy at lv at ov at zy read property label of undefined name typeerror logdata uncaught exception color color
1
172,576
27,300,031,595
IssuesEvent
2023-02-24 00:36:43
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[REMOTO] [SÊNIOR] [PYTHON] [DJANGO] [CELERY] [JAVASCRIPT] [HTML] [CSS] Pessoa Desenvolvedora Python (Sênior) na [INVILLIA]
HOME OFFICE PYTHON JAVASCRIPT CSS3 HTML DJANGO REMOTO TDD DESIGN PATTERNS DDD CELERY HELP WANTED AUTOMAÇÃO DE TESTES Stale
<!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - Estamos buscando um profissional que antes de qualquer coisa ame programar e inovar e queira trabalhar de uma forma totalmente remota. - Trabalhamos em um ambiente com total autonomia para novas soluções disruptivas e valorizamos isso, por isso estamos procurando alguém que compartilhe desse mindset. - Devido a essa autonomia, precisamos estar sempre em sintonia, por isso uma boa comunicação é muito importante para nós. **Responsabilidades e atribuições:** - Desenvolver código através de definição de histórias; corrigir defeitos; executar testes; automatizar (unitários, TDD, integrados, funcionais); - Experiência com arquitetura em Cloud (AWS); familiaridade com CI/CD, Git, Desenvolvimento Ágil, Docker. Análise estática de código-fonte (métricas) ## Local - Home Office ## Benefícios - Informações diretamente com o responsável pela vaga/recrutador. ## Requisitos **Obrigatórios:** - Frameworks web: Django, - Messageria: Celery; - Frontend: Javascript, HTML5, CSS3; - Banco de dados relacionais e não relacionais - Conceitos: DDD (Domain Driven Design), TDD (Test Driven Development) e Design Pattern; - Automação de testes (Unitários, Integrados e Funcionais Automatizados). ## Contratação - a combinar ## Nossa empresa - A Invillia é uma empresa global que vem revolucionando a maneira como game-changers expandem o poder de inovar, implementar tecnologias de ponta e desenvolver novas estratégias, produtos e serviços digitais. - Nenhuma outra empresa no mundo atua como a Invillia. - E o que torna nosso Global Growth Framework tão único e poderoso? 
- Primeiro, dissolvemos os limites entre o físico e o virtual para ter em nosso time os melhores talentos do planeta. - Criamos infinitas práticas e metodologias para que que cada squad seja super customizado e engajado na cultura e desafios de cada cliente. - Adoramos usar ferramentas ágeis, métricas, inteligência de dados no dia-a-dia. Para que ideias e melhorias se multipliquem. - Mas acreditamos que é na educação contínua, na abordagem mais humana e colaborativa que a mágica acontece. - Novas oportunidades surgem. E a inovação nunca para. Infinite Digital Power. ## Como se candidatar - [Clique aqui para se candidatar](https://invillia.gupy.io/jobs/624745?jobBoardSource=gupy_public_page)
1.0
[REMOTO] [SÊNIOR] [PYTHON] [DJANGO] [CELERY] [JAVASCRIPT] [HTML] [CSS] Pessoa Desenvolvedora Python (Sênior) na [INVILLIA] - <!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Descrição da vaga - Estamos buscando um profissional que antes de qualquer coisa ame programar e inovar e queira trabalhar de uma forma totalmente remota. - Trabalhamos em um ambiente com total autonomia para novas soluções disruptivas e valorizamos isso, por isso estamos procurando alguém que compartilhe desse mindset. - Devido a essa autonomia, precisamos estar sempre em sintonia, por isso uma boa comunicação é muito importante para nós. **Responsabilidades e atribuições:** - Desenvolver código através de definição de histórias; corrigir defeitos; executar testes; automatizar (unitários, TDD, integrados, funcionais); - Experiência com arquitetura em Cloud (AWS); familiaridade com CI/CD, Git, Desenvolvimento Ágil, Docker. Análise estática de código-fonte (métricas) ## Local - Home Office ## Benefícios - Informações diretamente com o responsável pela vaga/recrutador. ## Requisitos **Obrigatórios:** - Frameworks web: Django, - Messageria: Celery; - Frontend: Javascript, HTML5, CSS3; - Banco de dados relacionais e não relacionais - Conceitos: DDD (Domain Driven Design), TDD (Test Driven Development) e Design Pattern; - Automação de testes (Unitários, Integrados e Funcionais Automatizados). ## Contratação - a combinar ## Nossa empresa - A Invillia é uma empresa global que vem revolucionando a maneira como game-changers expandem o poder de inovar, implementar tecnologias de ponta e desenvolver novas estratégias, produtos e serviços digitais. - Nenhuma outra empresa no mundo atua como a Invillia. - E o que torna nosso Global Growth Framework tão único e poderoso? - Primeiro, dissolvemos os limites entre o físico e o virtual para ter em nosso time os melhores talentos do planeta. - Criamos infinitas práticas e metodologias para que que cada squad seja super customizado e engajado na cultura e desafios de cada cliente. - Adoramos usar ferramentas ágeis, métricas, inteligência de dados no dia-a-dia. Para que ideias e melhorias se multipliquem. - Mas acreditamos que é na educação contínua, na abordagem mais humana e colaborativa que a mágica acontece. - Novas oportunidades surgem. E a inovação nunca para. Infinite Digital Power. ## Como se candidatar - [Clique aqui para se candidatar](https://invillia.gupy.io/jobs/624745?jobBoardSource=gupy_public_page)
non_defect
pessoa desenvolvedora python sênior na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga estamos buscando um profissional que antes de qualquer coisa ame programar e inovar e queira trabalhar de uma forma totalmente remota trabalhamos em um ambiente com total autonomia para novas soluções disruptivas e valorizamos isso por isso estamos procurando alguém que compartilhe desse mindset devido a essa autonomia precisamos estar sempre em sintonia por isso uma boa comunicação é muito importante para nós responsabilidades e atribuições desenvolver código através de definição de histórias corrigir defeitos executar testes automatizar unitários tdd integrados funcionais experiência com arquitetura em cloud aws familiaridade com ci cd git desenvolvimento ágil docker análise estática de código fonte métricas local home office benefícios informações diretamente com o responsável pela vaga recrutador requisitos obrigatórios frameworks web django messageria celery frontend javascript banco de dados relacionais e não relacionais conceitos ddd domain driven design tdd test driven development e design pattern automação de testes unitários integrados e funcionais automatizados contratação a combinar nossa empresa a invillia é uma empresa global que vem revolucionando a maneira como game changers expandem o poder de inovar implementar tecnologias de ponta e desenvolver novas estratégias produtos e serviços digitais nenhuma outra empresa no mundo atua como a invillia e o que torna nosso global growth framework tão único e poderoso primeiro dissolvemos os limites entre o físico e o virtual para ter em nosso time os melhores talentos do planeta criamos infinitas práticas e metodologias para que que cada squad seja super customizado e engajado na cultura e desafios de cada cliente adoramos usar ferramentas ágeis métricas inteligência de dados no dia a dia para que ideias e melhorias se multipliquem mas acreditamos que é na educação contínua na abordagem mais humana e colaborativa que a mágica acontece novas oportunidades surgem e a inovação nunca para infinite digital power como se candidatar
0
34,437
7,451,457,733
IssuesEvent
2018-03-29 03:04:51
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
KÜ ja nimistu FNSi halduse testimine
P: high R: fixed T: defect
**Reported by jelenag on 5 Apr 2013 07:52 UTC** 1. Näide [Test arhivaal](http://dev.raju.teepub/et/description_unit/view/117284/) Arhivaali muutmisevaates või lisamisel leidandmeid ei päranda ülemüksusest, antud näites HAMA.02.3.708. 2. Küsimus> Kuidas toimub uute säilikute lisamine juhul, kui Arhiivi all puuduvad nimistud? Kas nimistud peavad alati olema sisestatud enne säilikute lisamist?
1.0
KÜ ja nimistu FNSi halduse testimine - **Reported by jelenag on 5 Apr 2013 07:52 UTC** 1. Näide [Test arhivaal](http://dev.raju.teepub/et/description_unit/view/117284/) Arhivaali muutmisevaates või lisamisel leidandmeid ei päranda ülemüksusest, antud näites HAMA.02.3.708. 2. Küsimus> Kuidas toimub uute säilikute lisamine juhul, kui Arhiivi all puuduvad nimistud? Kas nimistud peavad alati olema sisestatud enne säilikute lisamist?
defect
kü ja nimistu fnsi halduse testimine reported by jelenag on apr utc näide arhivaali muutmisevaates või lisamisel leidandmeid ei päranda ülemüksusest antud näites hama küsimus kuidas toimub uute säilikute lisamine juhul kui arhiivi all puuduvad nimistud kas nimistud peavad alati olema sisestatud enne säilikute lisamist
1
15,211
2,850,318,298
IssuesEvent
2015-05-31 13:34:36
damonkohler/android-scripting
https://api.github.com/repos/damonkohler/android-scripting
closed
sl4a not working
auto-migrated Priority-Medium Type-Defect
``` What device(s) are you experiencing the problem on? eeepc What firmware version are you running on the device? android x86 4.04 eeepc What steps will reproduce the problem? 1.installing sl4a 2.installing Python for android 3.running the hello_world.py script What is the expected output? What do you see instead? in step of showin hello world, sl4a crashes What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `bennor.i...@gmail.com` on 29 Oct 2012 at 4:16
1.0
sl4a not working - ``` What device(s) are you experiencing the problem on? eeepc What firmware version are you running on the device? android x86 4.04 eeepc What steps will reproduce the problem? 1.installing sl4a 2.installing Python for android 3.running the hello_world.py script What is the expected output? What do you see instead? in step of showin hello world, sl4a crashes What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `bennor.i...@gmail.com` on 29 Oct 2012 at 4:16
defect
not working what device s are you experiencing the problem on eeepc what firmware version are you running on the device android eeepc what steps will reproduce the problem installing installing python for android running the hello world py script what is the expected output what do you see instead in step of showin hello world crashes what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by bennor i gmail com on oct at
1
56,049
8,047,888,893
IssuesEvent
2018-08-01 03:25:54
Daniel-Mietchen/ideas
https://api.github.com/repos/Daniel-Mietchen/ideas
opened
Document all SPARQL functions, commands etc. as Wikidata items
SPARQL Wikidata discovery documentation hackathon
and bind them together in a catalog for each version of SPARQL
1.0
Document all SPARQL functions, commands etc. as Wikidata items - and bind them together in a catalog for each version of SPARQL
non_defect
document all sparql functions commands etc as wikidata items and bind them together in a catalog for each version of sparql
0
788,628
27,759,201,092
IssuesEvent
2023-03-16 06:40:01
ballerina-platform/ballerina-distribution
https://api.github.com/repos/ballerina-platform/ballerina-distribution
closed
Getting a bunch of shell script errors when pulling 2201.3.0
Type/Bug Priority/High Team/DevTools
I was using 2201.2.3 and tried to pull 2201.3.0. It successfully pulled and set the distribution. But in the end, gave out some errors as well: ``` $ bal dist pull 2201.3.0 Checking for newer versions of the update tool... Downloading ballerina-command-1.3.11 100% [=====================================================================================================================] 1/1 MB (0:00:01 / 0:00:00) Updating environment variables Update tool version updated to the latest version: 1.3.11 Cleaning old files... Update successfully completed Fetching the '2201.3.0' distribution from the remote server... Downloading 2201.3.0 100% [=================================================================================================================================] 183/183 MB (0:01:36 / 0:00:00) Fetching the dependencies for '2201.3.0' from the remote server... Downloading jdk-11.0.15+10-jre 100% [=========================================================================================================================] 51/51 MB (0:00:33 / 0:00:00) '2201.3.0' successfully set as the active distribution /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 99: TH/../ballerina-command-tmp: No such file or directory /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 101: syntax error near unexpected token `fi' /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 101: ` fi' ``` OS: Arch Linux
1.0
Getting a bunch of shell script errors when pulling 2201.3.0 - I was using 2201.2.3 and tried to pull 2201.3.0. It successfully pulled and set the distribution. But in the end, gave out some errors as well: ``` $ bal dist pull 2201.3.0 Checking for newer versions of the update tool... Downloading ballerina-command-1.3.11 100% [=====================================================================================================================] 1/1 MB (0:00:01 / 0:00:00) Updating environment variables Update tool version updated to the latest version: 1.3.11 Cleaning old files... Update successfully completed Fetching the '2201.3.0' distribution from the remote server... Downloading 2201.3.0 100% [=================================================================================================================================] 183/183 MB (0:01:36 / 0:00:00) Fetching the dependencies for '2201.3.0' from the remote server... Downloading jdk-11.0.15+10-jre 100% [=========================================================================================================================] 51/51 MB (0:00:33 / 0:00:00) '2201.3.0' successfully set as the active distribution /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 99: TH/../ballerina-command-tmp: No such file or directory /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 101: syntax error near unexpected token `fi' /home/pubudu/software/ballerina-2201.0.0-swan-lake/bin/bal: line 101: ` fi' ``` OS: Arch Linux
non_defect
getting a bunch of shell script errors when pulling i was using and tried to pull it successfully pulled and set the distribution but in the end gave out some errors as well bal dist pull checking for newer versions of the update tool downloading ballerina command mb updating environment variables update tool version updated to the latest version cleaning old files update successfully completed fetching the distribution from the remote server downloading mb fetching the dependencies for from the remote server downloading jdk jre mb successfully set as the active distribution home pubudu software ballerina swan lake bin bal line th ballerina command tmp no such file or directory home pubudu software ballerina swan lake bin bal line syntax error near unexpected token fi home pubudu software ballerina swan lake bin bal line fi os arch linux
0
29,651
5,785,140,379
IssuesEvent
2017-05-01 01:21:23
DemiVis/SE420_Final
https://api.github.com/repos/DemiVis/SE420_Final
closed
logic to create copy filename no longer functioning
defect
![weird_extensions](https://cloud.githubusercontent.com/assets/16615393/24664705/31f0fd00-1910-11e7-8f18-cbca04e12f5b.png) As seen above the extension gets muddled for some reason. The image gets written out properly but with an incorrect extension Bottom line in image above should say "../Test_images/mount_m_original.pgm"
1.0
logic to create copy filename no longer functioning - ![weird_extensions](https://cloud.githubusercontent.com/assets/16615393/24664705/31f0fd00-1910-11e7-8f18-cbca04e12f5b.png) As seen above the extension gets muddled for some reason. The image gets written out properly but with an incorrect extension Bottom line in image above should say "../Test_images/mount_m_original.pgm"
defect
logic to create copy filename no longer functioning as seen above the extension gets muddled for some reason the image gets written out properly but with an incorrect extension bottom line in image above should say test images mount m original pgm
1
81,541
30,929,736,359
IssuesEvent
2023-08-06 23:22:31
dkfans/keeperfx
https://api.github.com/repos/dkfans/keeperfx
opened
Query page 3 crashes on non-existing creature model
Priority-Medium Type-Defect
To reproduce: 1) Add an additional creature type to rules.cfg 2) Start Unearth so that it picks up the new unit. Place it on a map. 3) Delete the creature from rules.cfg again 4) Start the map, find the creature -> notice it's a hell_hound animation (correct for a missing unit) 5) query the unit, and click the 'next page' button twice -> crash ``` Error: get_creature_model_graphics: Invalid model 37 graphics sequence 20 === Crash ===Error: Attempt of integer division by zero. in keeperfx.exe at 0023:00a83c35, base 00970000 in keeperfx.exe at 0023:00a780d9, base 00970000 in keeperfx.exe at 0023:00a784ce, base 00970000 ```
1.0
Query page 3 crashes on non-existing creature model - To reproduce: 1) Add an additional creature type to rules.cfg 2) Start Unearth so that it picks up the new unit. Place it on a map. 3) Delete the creature from rules.cfg again 4) Start the map, find the creature -> notice it's a hell_hound animation (correct for a missing unit) 5) query the unit, and click the 'next page' button twice -> crash ``` Error: get_creature_model_graphics: Invalid model 37 graphics sequence 20 === Crash ===Error: Attempt of integer division by zero. in keeperfx.exe at 0023:00a83c35, base 00970000 in keeperfx.exe at 0023:00a780d9, base 00970000 in keeperfx.exe at 0023:00a784ce, base 00970000 ```
defect
query page crashes on non existing creature model to reproduce add an additional creature type to rules cfg start unearth so that it picks up the new unit place it on a map delete the creature from rules cfg again start the map find the creature notice it s a hell hound animation correct for a missing unit query the unit and click the next page button twice crash error get creature model graphics invalid model graphics sequence crash error attempt of integer division by zero in keeperfx exe at base in keeperfx exe at base in keeperfx exe at base
1
394,397
27,028,609,431
IssuesEvent
2023-02-11 23:01:34
qwikifiers/qwik-ui
https://api.github.com/repos/qwikifiers/qwik-ui
closed
docs: save website settings
documentation good first issue
We need to store the settings for a good user experience during navigation
1.0
docs: save website settings - We need to store the settings for a good user experience during navigation
non_defect
docs save website settings we need to store the settings for a good user experience during navigation
0
38,638
8,950,635,220
IssuesEvent
2019-01-25 11:23:02
svigerske/ipopt-donotuse
https://api.github.com/repos/svigerske/ipopt-donotuse
closed
number of nonzeros in inequality Jacobian constraints
Ipopt defect
Issue created by migration from Trac. Original creator: andanh Original creation time: 2006-08-03 13:32:51 Assignee: ipopt-team Version: Hi, I tested successfully many problems by using IPOPT linking PARDISO. I can now find the optimal solution after few steps. It is very great! I tested another bigger problem. I met the following difficulty: The numbers of nonzeros in Jacobian constraints, which were output by IPOPT, is not equal to NZ which I tranfered to the subroutine EV_JAC_G. Although in the previous examples, these values are the same. And IPOPT is terminated and produced the following, see the output 8eb352ceb5 I tried to solve with ma27, the toal number of nonzeros is not equal to mine. But IPOPT can go further (see the output 29817e9645) What should I do to use Pardiso? Thank you so much. Cheers. Danh. 8eb352ceb5 ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Common Public License (CPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** Number of nonzeros in equality constraint Jacobian...: 1107728 Number of nonzeros in inequality constraint Jacobian.: 767633 Number of nonzeros in Lagrangian Hessian.............: 486001 Assertion failed: new_ia[dst->m] == (pint) nnz, file /scratch/oschenk/pardiso/src/smat.c, line 597 IOT/Abort trap (core dumped) 29817e9645 [andanh`@`janus] /work/janus/andanh/Ipopt_2006Jul26/Ipopt/examples/Johnson1> ./pave INTYPE: 4 - Done! LM= 5647 Number of nonzeros in [C]: NZ_CMAT= 1155360 Successful SIZE_PROB: NSHAKE= 108001 NSLACK= 72000 N= 108001 M= 77647 NZ_MAX= 1659360 NZ= 1659360 NNZH_MAX= 486001 NNZH= 486001 SIZE_PROB Subroutine RHS_CONST M= 77647 RHS_CONST START_POINT Here LM= 5647 ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Common Public License (CPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** Number of nonzeros in equality constraint Jacobian...: 1107728 Number of nonzeros in inequality constraint Jacobian.: 767633 Number of nonzeros in Lagrangian Hessian.............: 486001 Total number of variables............................: 108001 variables with only lower bounds: 1 variables with lower and upper bounds: 0 variables with only upper bounds: 0 Total number of equality constraints.................: 5647 Total number of inequality constraints...............: 72000 inequality constraints with only lower bounds: 0 inequality constraints with lower and upper bounds: 0 inequality constraints with only upper bounds: 72000 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 0 -6.8916711e-02 2.74e+01 2.00e+00 0.0 0.00e+00 - 0.00e+00 0.00e+000
1.0
number of nonzeros in inequality Jacobian constraints - Issue created by migration from Trac. Original creator: andanh Original creation time: 2006-08-03 13:32:51 Assignee: ipopt-team Version: Hi, I tested successfully many problems by using IPOPT linking PARDISO. I can now find the optimal solution after few steps. It is very great! I tested another bigger problem. I met the following difficulty: The numbers of nonzeros in Jacobian constraints, which were output by IPOPT, is not equal to NZ which I tranfered to the subroutine EV_JAC_G. Although in the previous examples, these values are the same. And IPOPT is terminated and produced the following, see the output 8eb352ceb5 I tried to solve with ma27, the toal number of nonzeros is not equal to mine. But IPOPT can go further (see the output 29817e9645) What should I do to use Pardiso? Thank you so much. Cheers. Danh. 8eb352ceb5 ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Common Public License (CPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** Number of nonzeros in equality constraint Jacobian...: 1107728 Number of nonzeros in inequality constraint Jacobian.: 767633 Number of nonzeros in Lagrangian Hessian.............: 486001 Assertion failed: new_ia[dst->m] == (pint) nnz, file /scratch/oschenk/pardiso/src/smat.c, line 597 IOT/Abort trap (core dumped) 29817e9645 [andanh`@`janus] /work/janus/andanh/Ipopt_2006Jul26/Ipopt/examples/Johnson1> ./pave INTYPE: 4 - Done! LM= 5647 Number of nonzeros in [C]: NZ_CMAT= 1155360 Successful SIZE_PROB: NSHAKE= 108001 NSLACK= 72000 N= 108001 M= 77647 NZ_MAX= 1659360 NZ= 1659360 NNZH_MAX= 486001 NNZH= 486001 SIZE_PROB Subroutine RHS_CONST M= 77647 RHS_CONST START_POINT Here LM= 5647 ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Common Public License (CPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** Number of nonzeros in equality constraint Jacobian...: 1107728 Number of nonzeros in inequality constraint Jacobian.: 767633 Number of nonzeros in Lagrangian Hessian.............: 486001 Total number of variables............................: 108001 variables with only lower bounds: 1 variables with lower and upper bounds: 0 variables with only upper bounds: 0 Total number of equality constraints.................: 5647 Total number of inequality constraints...............: 72000 inequality constraints with only lower bounds: 0 inequality constraints with lower and upper bounds: 0 inequality constraints with only upper bounds: 72000 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 0 -6.8916711e-02 2.74e+01 2.00e+00 0.0 0.00e+00 - 0.00e+00 0.00e+000
defect
number of nonzeros in inequality jacobian constraints issue created by migration from trac original creator andanh original creation time assignee ipopt team version hi i tested successfully many problems by using ipopt linking pardiso i can now find the optimal solution after few steps it is very great i tested another bigger problem i met the following difficulty the numbers of nonzeros in jacobian constraints which were output by ipopt is not equal to nz which i tranfered to the subroutine ev jac g although in the previous examples these values are the same and ipopt is terminated and produced the following see the output i tried to solve with the toal number of nonzeros is not equal to mine but ipopt can go further see the output what should i do to use pardiso thank you so much cheers danh this program contains ipopt a library for large scale nonlinear optimization ipopt is released as open source code under the common public license cpl for more information visit number of nonzeros in equality constraint jacobian number of nonzeros in inequality constraint jacobian number of nonzeros in lagrangian hessian assertion failed new ia pint nnz file scratch oschenk pardiso src smat c line iot abort trap core dumped work janus andanh ipopt ipopt examples pave intype done lm number of nonzeros in nz cmat successful size prob nshake nslack n m nz max nz nnzh max nnzh size prob subroutine rhs const m rhs const start point here lm this program contains ipopt a library for large scale nonlinear optimization ipopt is released as open source code under the common public license cpl for more information visit number of nonzeros in equality constraint jacobian number of nonzeros in inequality constraint jacobian number of nonzeros in lagrangian hessian total number of variables variables with only lower bounds variables with lower and upper bounds variables with only upper bounds total number of equality constraints total number of inequality constraints inequality constraints with only lower bounds inequality constraints with lower and upper bounds inequality constraints with only upper bounds iter objective inf pr inf du lg mu d lg rg alpha du alpha pr ls
1
140,114
5,397,252,041
IssuesEvent
2017-02-27 14:12:56
g8os/fs
https://api.github.com/repos/g8os/fs
closed
New Flist implementation
priority_major state_verification type_feature
fs should support the new flist format. This new flist format is based on capnp and distributed in the form of a rocksdb DB files. Jumpscale is still used to generate the flist. You can find reference implementation at https://github.com/Jumpscale/jumpscale_core8/tree/8.2.0_dev_flist/lib/JumpScale/tools/flist
1.0
New Flist implementation - fs should support the new flist format. This new flist format is based on capnp and distributed in the form of a rocksdb DB files. Jumpscale is still used to generate the flist. You can find reference implementation at https://github.com/Jumpscale/jumpscale_core8/tree/8.2.0_dev_flist/lib/JumpScale/tools/flist
non_defect
new flist implementation fs should support the new flist format this new flist format is based on capnp and distributed in the form of a rocksdb db files jumpscale is still used to generate the flist you can find reference implementation at
0
445,087
31,164,457,030
IssuesEvent
2023-08-16 18:30:54
afarago/esphome_component_bthome
https://api.github.com/repos/afarago/esphome_component_bthome
closed
Matching problem when property has multiple object ids
documentation
I tried to transmit voltage. ``` sensor: - platform: beethowen_receiver mac_address: 11:22:33:44:55:55 sensors: - measurement_type: voltage name: Temperature ``` But the receiver does not recognize it (unmatched), because there are multiple object ids for this property. The `count` property even has three object ids. I tried to use the numeric value `measurement_type: 0x0C` but also no success. <html><body> <!--StartFragment--> Object id | Property | Data type | Factor | Example | Result | Unit -- | -- | -- | -- | -- | -- | -- 0x0C | voltage | uint16 (2 bytes) | 0.001 | 0C020C | 3.074 | V 0x4A | voltage | uint16 (2 bytes) | 0.1 | 4A020C | 307.4 | V <!--EndFragment--> </body> </html>
1.0
Matching problem when property has multiple object ids - I tried to transmit voltage. ``` sensor: - platform: beethowen_receiver mac_address: 11:22:33:44:55:55 sensors: - measurement_type: voltage name: Temperature ``` But the receiver does not recognize it (unmatched), because there are multiple object ids for this property. The `count` property even has three object ids. I tried to use the numeric value `measurement_type: 0x0C` but also no success. <html><body> <!--StartFragment--> Object id | Property | Data type | Factor | Example | Result | Unit -- | -- | -- | -- | -- | -- | -- 0x0C | voltage | uint16 (2 bytes) | 0.001 | 0C020C | 3.074 | V 0x4A | voltage | uint16 (2 bytes) | 0.1 | 4A020C | 307.4 | V <!--EndFragment--> </body> </html>
non_defect
matching problem when property has multiple object ids i tried to transmit voltage sensor platform beethowen receiver mac address sensors measurement type voltage name temperature but the receiver does not recognize it unmatched because there are multiple object ids for this property the count property even has three object ids i tried to use the numeric value measurement type but also no success object id property data type factor example result unit voltage  bytes v voltage  bytes v
0
237,238
18,154,551,995
IssuesEvent
2021-09-26 21:04:37
serilog-contrib/serilog-enrichers-globallogcontext
https://api.github.com/repos/serilog-contrib/serilog-enrichers-globallogcontext
closed
Include XML docs in the NuGet package
documentation
The XML docs are currently not being included in the NuGet package (`GenerateDocumentationFile` is `false`) ![image](https://user-images.githubusercontent.com/177608/134817462-6cbae353-7819-484f-a04e-e931154f0159.png)
1.0
Include XML docs in the NuGet package - The XML docs are currently not being included in the NuGet package (`GenerateDocumentationFile` is `false`) ![image](https://user-images.githubusercontent.com/177608/134817462-6cbae353-7819-484f-a04e-e931154f0159.png)
non_defect
include xml docs in the nuget package the xml docs are currently not being included in the nuget package generatedocumentationfile is false
0