code stringlengths 25 201k | docstring stringlengths 19 96.2k | func_name stringlengths 0 235 | language stringclasses 1 value | repo stringlengths 8 51 | path stringlengths 11 314 | url stringlengths 62 377 | license stringclasses 7 values |
|---|---|---|---|---|---|---|---|
public RocksDBMemoryConfiguration getMemoryConfiguration() {
return memoryConfiguration;
} | Gets the memory configuration object, which offers settings to control RocksDB's memory
usage. | getMemoryConfiguration | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setDbStoragePath(String path) {
setDbStoragePaths(path == null ? null : new String[] {path});
} | Sets the path where the RocksDB local database files should be stored on the local file
system. Setting this path overrides the default behavior, where the files are stored across
the configured temp directories.
<p>Passing {@code null} to this function restores the default behavior, where the configured
temp directories will be used.
@param path The path where the local RocksDB database files are stored. | setDbStoragePath | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setDbStoragePaths(String... paths) {
if (paths == null) {
localRocksDbDirectories = null;
} else if (paths.length == 0) {
throw new IllegalArgumentException("empty paths");
} else {
File[] pp = new File[paths.length];
for (int i = 0; i < paths.length; i++) {
final String rawPath = paths[i];
final String path;
if (rawPath == null) {
throw new IllegalArgumentException("null path");
} else {
// we need this for backwards compatibility, to allow URIs like 'file:///'...
URI uri = null;
try {
uri = new Path(rawPath).toUri();
} catch (Exception e) {
// cannot parse as a path
}
if (uri != null && uri.getScheme() != null) {
if ("file".equalsIgnoreCase(uri.getScheme())) {
path = uri.getPath();
} else {
throw new IllegalArgumentException(
"Path " + rawPath + " has a non-local scheme");
}
} else {
path = rawPath;
}
}
pp[i] = new File(path);
if (!pp[i].isAbsolute()) {
throw new IllegalArgumentException("Relative paths are not supported");
}
}
localRocksDbDirectories = pp;
}
} | Sets the directories in which the local RocksDB database puts its files (like SST and
metadata files). These directories do not need to be persistent; they can be ephemeral,
meaning that they are lost on a machine failure, because state in RocksDB is persisted in
checkpoints.
<p>If nothing is configured, these directories default to the TaskManager's local temporary
file directories.
<p>Each distinct state will be stored in one path, but when the state backend creates
multiple states, they will store their files on different paths.
<p>Passing {@code null} to this function restores the default behavior, where the configured
temp directories will be used.
@param paths The paths across which the local RocksDB database files will be spread. | setDbStoragePaths | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
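The scheme handling in setDbStoragePaths above can be sketched without Flink's Path class. The following is a minimal standalone version, under the assumption that java.net.URI parsing is close enough for illustration (Flink's own Path parsing differs in some edge cases): plain local paths and 'file://' URIs are accepted, other schemes and relative paths are rejected.

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class LocalPathValidator {

    /** Resolves a raw path or file:// URI to an absolute local File, mirroring the checks above. */
    static File toLocalFile(String rawPath) {
        String path = rawPath;
        try {
            URI uri = new URI(rawPath);
            if (uri.getScheme() != null) {
                if ("file".equalsIgnoreCase(uri.getScheme())) {
                    // strip the scheme, keep only the local filesystem path
                    path = uri.getPath();
                } else {
                    throw new IllegalArgumentException(
                            "Path " + rawPath + " has a non-local scheme");
                }
            }
        } catch (URISyntaxException e) {
            // cannot parse as a URI: treat as a plain local path
        }
        File file = new File(path);
        if (!file.isAbsolute()) {
            throw new IllegalArgumentException("Relative paths are not supported");
        }
        return file;
    }
}
```

Usage: `toLocalFile("file:///tmp/rocksdb")` and `toLocalFile("/tmp/rocksdb")` resolve to the same local file, while `toLocalFile("hdfs://host/path")` throws.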
public String[] getDbStoragePaths() {
if (localRocksDbDirectories == null) {
return null;
} else {
String[] paths = new String[localRocksDbDirectories.length];
for (int i = 0; i < paths.length; i++) {
paths[i] = localRocksDbDirectories[i].toString();
}
return paths;
}
} | Gets the configured local DB storage paths, or null, if none were configured.
<p>Under these directories on the TaskManager, RocksDB stores its SST files and metadata
files. These directories do not need to be persistent; they can be ephemeral, meaning that
they are lost on a machine failure, because state in RocksDB is persisted in checkpoints.
<p>If nothing is configured, these directories default to the TaskManager's local temporary
file directories. | getDbStoragePaths | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public PriorityQueueStateType getPriorityQueueStateType() {
return priorityQueueConfig.getPriorityQueueStateType();
} | Gets the type of the priority queue state. It falls back to the default value if it is
not explicitly set.
@return The type of the priority queue state. | getPriorityQueueStateType | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setPredefinedOptions(@Nonnull PredefinedOptions options) {
predefinedOptions = checkNotNull(options);
} | Sets the predefined options for RocksDB.
<p>If user-configured options within {@link RocksDBConfigurableOptions} are set (through
config.yaml) or a user-defined options factory is set (via {@link
#setRocksDBOptions(RocksDBOptionsFactory)}), then the options from the factory are applied on
top of the predefined options specified here and the customized options.
@param options The options to set (must not be null). | setPredefinedOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
@VisibleForTesting
public PredefinedOptions getPredefinedOptions() {
if (predefinedOptions == null) {
predefinedOptions = PredefinedOptions.DEFAULT;
}
return predefinedOptions;
} | Gets the currently set predefined options for RocksDB. The default options (if nothing was
set via {@link #setPredefinedOptions(PredefinedOptions)}) are {@link
PredefinedOptions#DEFAULT}.
<p>If user-configured options within {@link RocksDBConfigurableOptions} are set (through
config.yaml) or a user-defined options factory is set (via {@link
#setRocksDBOptions(RocksDBOptionsFactory)}), then the options from the factory are applied on
top of the predefined and customized options.
@return The currently set predefined options for RocksDB. | getPredefinedOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setRocksDBOptions(RocksDBOptionsFactory optionsFactory) {
this.rocksDbOptionsFactory = optionsFactory;
} | Sets {@link org.rocksdb.Options} for the RocksDB instances. Because the options are not
serializable and hold native code references, they must be specified through a factory.
<p>The options created by the factory here are applied on top of the pre-defined options
profile selected via {@link #setPredefinedOptions(PredefinedOptions)} and user-configured
options from configuration set by {@link #configure(ReadableConfig, ClassLoader)} with keys
in {@link RocksDBConfigurableOptions}.
@param optionsFactory The options factory that lazily creates the RocksDB options. | setRocksDBOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
@Nullable
public RocksDBOptionsFactory getRocksDBOptions() {
return rocksDbOptionsFactory;
} | Gets {@link org.rocksdb.Options} for the RocksDB instances.
<p>The options created by the factory here are applied on top of the pre-defined options
profile selected via {@link #setPredefinedOptions(PredefinedOptions)}. If the pre-defined
options profile is the default ({@link PredefinedOptions#DEFAULT}), then the factory fully
controls the RocksDB options. | getRocksDBOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public int getNumberOfTransferThreads() {
return numberOfTransferThreads == UNDEFINED_NUMBER_OF_TRANSFER_THREADS
? CHECKPOINT_TRANSFER_THREAD_NUM.defaultValue()
: numberOfTransferThreads;
} | Gets the number of threads used to transfer files while snapshotting/restoring. | getNumberOfTransferThreads | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setNumberOfTransferThreads(int numberOfTransferThreads) {
Preconditions.checkArgument(
numberOfTransferThreads > 0,
"The number of threads used to transfer files in EmbeddedRocksDBStateBackend should be greater than zero.");
this.numberOfTransferThreads = numberOfTransferThreads;
} | Sets the number of threads used to transfer files while snapshotting/restoring.
@param numberOfTransferThreads The number of threads used to transfer files while
snapshotting/restoring. | setNumberOfTransferThreads | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public long getWriteBatchSize() {
return writeBatchSize == UNDEFINED_WRITE_BATCH_SIZE
? WRITE_BATCH_SIZE.defaultValue().getBytes()
: writeBatchSize;
} | Gets the max batch size will be used in {@link RocksDBWriteBatchWrapper}. | getWriteBatchSize | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
public void setWriteBatchSize(long writeBatchSize) {
checkArgument(writeBatchSize >= 0, "Write batch size has to be non-negative.");
this.writeBatchSize = writeBatchSize;
} | Sets the max batch size to be used in {@link RocksDBWriteBatchWrapper}; a value of 0
disables the memory size controller, so only the item count controller is used.
@param writeBatchSize The size to be used in {@link RocksDBWriteBatchWrapper}. | setWriteBatchSize | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackend.java | Apache-2.0 |
@Override
public EmbeddedRocksDBStateBackend createFromConfig(
ReadableConfig config, ClassLoader classLoader) throws IllegalConfigurationException {
return new EmbeddedRocksDBStateBackend().configure(config, classLoader);
} | A factory that creates an {@link EmbeddedRocksDBStateBackend} from a configuration. | createFromConfig | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackendFactory.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/EmbeddedRocksDBStateBackendFactory.java | Apache-2.0 |
@Nullable
@SuppressWarnings("unchecked")
<T> T getValue(ConfigOption<T> option) {
Object value = options.get(option.key());
if (value == null) {
value = option.defaultValue();
}
if (value == null) {
return null;
}
return (T) value;
} | Gets an option value according to the pre-defined values. If not defined, returns the default
value.
@param option the option.
@param <T> the option value type.
@return the value if defined, otherwise the default value. | getValue | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/PredefinedOptions.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/PredefinedOptions.java | Apache-2.0 |
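The lookup-with-default pattern in getValue above can be shown standalone. This sketch uses a hypothetical OptionWithDefault type (not Flink's ConfigOption) to demonstrate the same fallback order: an explicitly stored value wins, otherwise the option's default is used, and null is returned only if neither exists.

```java
import java.util.HashMap;
import java.util.Map;

public class OptionLookup {

    /** Minimal stand-in for a config option carrying a key and a default value. */
    record OptionWithDefault<T>(String key, T defaultValue) {}

    private final Map<String, Object> options = new HashMap<>();

    void set(String key, Object value) {
        options.put(key, value);
    }

    @SuppressWarnings("unchecked")
    <T> T getValue(OptionWithDefault<T> option) {
        Object value = options.get(option.key());
        if (value == null) {
            // fall back to the option's own default
            value = option.defaultValue();
        }
        return (T) value; // may be null if neither a stored value nor a default exists
    }
}
```

The unchecked cast mirrors the original: the map is untyped, so the caller's option type parameter carries the expected type.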
public static RunnableWithException createAsyncRangeCompactionTask(
RocksDB db,
Collection<ColumnFamilyHandle> columnFamilyHandles,
int keyGroupPrefixBytes,
KeyGroupRange dbExpectedKeyGroupRange,
ResourceGuard rocksDBResourceGuard,
CloseableRegistry closeableRegistry) {
return () -> {
logger.debug(
"Starting range check for async compaction targeting key-groups range {}.",
dbExpectedKeyGroupRange.prettyPrintInterval());
final RangeCheckResult rangeCheckResult;
try (ResourceGuard.Lease ignored = rocksDBResourceGuard.acquireResource()) {
rangeCheckResult =
checkSstDataAgainstKeyGroupRange(
db, keyGroupPrefixBytes, dbExpectedKeyGroupRange);
}
if (rangeCheckResult.allInRange()) {
logger.debug(
"Nothing to compact in async compaction targeting key-groups range {}.",
dbExpectedKeyGroupRange.prettyPrintInterval());
// No keys exceed the proclaimed range of the backend, so we don't need a compaction
// from this point of view.
return;
}
try (CompactRangeOptions compactionOptions =
new CompactRangeOptions()
.setBottommostLevelCompaction(
CompactRangeOptions.BottommostLevelCompaction
.kForceOptimized)) {
// To cancel an ongoing compaction asap, we register cancelling through the options
// with the registry
final Closeable cancelCompactionCloseable =
() -> {
logger.info(
"Cancel request for async compaction targeting key-groups range {}.",
dbExpectedKeyGroupRange.prettyPrintInterval(),
new Exception("StackTrace"));
compactionOptions.setCanceled(true);
};
try {
closeableRegistry.registerCloseable(cancelCompactionCloseable);
if (!rangeCheckResult.leftInRange) {
logger.debug(
"Compacting left interval in async compaction targeting key-groups range {}.",
dbExpectedKeyGroupRange.prettyPrintInterval());
// Compact all keys before from the expected key-groups range
for (ColumnFamilyHandle columnFamilyHandle : columnFamilyHandles) {
try (ResourceGuard.Lease ignored =
rocksDBResourceGuard.acquireResource()) {
db.compactRange(
columnFamilyHandle,
// TODO: change to null once this API is fixed
new byte[] {},
rangeCheckResult.getProclaimedMinKey(),
compactionOptions);
}
}
}
if (!rangeCheckResult.rightInRange) {
logger.debug(
"Compacting right interval in async compaction targeting key-groups range {}.",
dbExpectedKeyGroupRange.prettyPrintInterval());
// Compact all keys after the expected key-groups range
for (ColumnFamilyHandle columnFamilyHandle : columnFamilyHandles) {
try (ResourceGuard.Lease ignored =
rocksDBResourceGuard.acquireResource()) {
db.compactRange(
columnFamilyHandle,
rangeCheckResult.getProclaimedMaxKey(),
// TODO: change to null once this API is fixed
new byte[] {
(byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff
},
compactionOptions);
}
}
}
} finally {
closeableRegistry.unregisterCloseable(cancelCompactionCloseable);
}
}
};
} | Returns a range compaction task as runnable if any data in the SST files of the given DB
exceeds the proclaimed key-group range.
@param db the DB to check and compact if needed.
@param columnFamilyHandles list of column families to check.
@param keyGroupPrefixBytes the number of bytes used to serialize the key-group prefix of keys
in the DB.
@param dbExpectedKeyGroupRange the expected key-groups range of the DB.
@param rocksDBResourceGuard the resource guard for the given db instance.
@param closeableRegistry the registry with which a hook to cancel an ongoing compaction is
registered.
@return runnable that performs compaction upon execution if the key-groups range is exceeded;
otherwise the runnable performs no compaction. | createAsyncRangeCompactionTask | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBIncrementalCheckpointUtils.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBIncrementalCheckpointUtils.java | Apache-2.0 |
@Nullable
public static <T extends KeyedStateHandle> T chooseTheBestStateHandleForInitial(
@Nonnull List<T> restoreStateHandles,
@Nonnull KeyGroupRange targetKeyGroupRange,
double overlapFractionThreshold) {
int pos =
findTheBestStateHandleForInitial(
restoreStateHandles, targetKeyGroupRange, overlapFractionThreshold);
return pos >= 0 ? restoreStateHandles.get(pos) : null;
} | Chooses the best state handle according to the {@link #stateHandleEvaluator(KeyedStateHandle,
KeyGroupRange, double)} to initialize the base DB.
@param restoreStateHandles The candidate state handles.
@param targetKeyGroupRange The target key group range.
@param overlapFractionThreshold configured threshold for overlap.
@return The best candidate or null if no candidate was a good fit.
@param <T> the generic parameter type of the state handles. | chooseTheBestStateHandleForInitial | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBIncrementalCheckpointUtils.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBIncrementalCheckpointUtils.java | Apache-2.0 |
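The "best handle" choice above scores each candidate by how much its key-group range overlaps the target range and compares that score against the configured threshold. The following is an illustrative sketch of that overlap-fraction idea with inclusive integer ranges; the names and the strict greater-than threshold semantics are assumptions, not Flink's actual stateHandleEvaluator.

```java
public class KeyGroupOverlap {

    /** Fraction of the target range [targetStart, targetEnd] covered by [start, end], bounds inclusive. */
    static double overlapFraction(int start, int end, int targetStart, int targetEnd) {
        int overlap = Math.min(end, targetEnd) - Math.max(start, targetStart) + 1;
        int targetSize = targetEnd - targetStart + 1;
        return overlap <= 0 ? 0.0 : (double) overlap / targetSize;
    }

    /** Returns the index of the range best covering the target, or -1 if none beats the threshold. */
    static int pickBest(int[][] ranges, int targetStart, int targetEnd, double threshold) {
        int best = -1;
        double bestScore = threshold;
        for (int i = 0; i < ranges.length; i++) {
            double score = overlapFraction(ranges[i][0], ranges[i][1], targetStart, targetEnd);
            if (score > bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}
```

For example, a handle covering key-groups 0-9 overlaps a target of 5-14 by 5 of 10 groups, a fraction of 0.5.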
public boolean isUsingManagedMemory() {
return useManagedMemory != null
? useManagedMemory
: RocksDBOptions.USE_MANAGED_MEMORY.defaultValue();
} | Gets whether the state backend is configured to use the managed memory of a slot for RocksDB.
See {@link RocksDBOptions#USE_MANAGED_MEMORY} for details. | isUsingManagedMemory | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | Apache-2.0 |
public double getWriteBufferRatio() {
return writeBufferRatio != null
? writeBufferRatio
: RocksDBOptions.WRITE_BUFFER_RATIO.defaultValue();
} | Gets the fraction of the total memory to be used for write buffers. This only has an effect
if either {@link #setUseManagedMemory(boolean)} or {@link #setFixedMemoryPerSlot(MemorySize)}
is set.
<p>See {@link RocksDBOptions#WRITE_BUFFER_RATIO} for details. | getWriteBufferRatio | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | Apache-2.0 |
public double getHighPriorityPoolRatio() {
return highPriorityPoolRatio != null
? highPriorityPoolRatio
: RocksDBOptions.HIGH_PRIORITY_POOL_RATIO.defaultValue();
} | Gets the fraction of the total memory to be used for high priority blocks like indexes,
dictionaries, etc. This only has an effect if either {@link #setUseManagedMemory(boolean)} or
{@link #setFixedMemoryPerSlot(MemorySize)} is set.
<p>See {@link RocksDBOptions#HIGH_PRIORITY_POOL_RATIO} for details. | getHighPriorityPoolRatio | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | Apache-2.0 |
public Boolean isUsingPartitionedIndexFilters() {
return usePartitionedIndexFilters != null
? usePartitionedIndexFilters
: RocksDBOptions.USE_PARTITIONED_INDEX_FILTERS.defaultValue();
} | Gets whether the state backend is configured to use partitioned index/filters for RocksDB.
<p>See {@link RocksDBOptions#USE_PARTITIONED_INDEX_FILTERS} for details. | isUsingPartitionedIndexFilters | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryConfiguration.java | Apache-2.0 |
static long calculateRocksDBDefaultArenaBlockSize(long writeBufferSize) {
long arenaBlockSize = writeBufferSize / 8;
// Align up to 4k
final long align = 4 * 1024;
return ((arenaBlockSize + align - 1) / align) * align;
} | Calculate the default arena block size as RocksDB calculates it in <a
href="https://github.com/dataArtisans/frocksdb/blob/49bc897d5d768026f1eb816d960c1f2383396ef4/db/column_family.cc#L196-L201">
here</a>.
@return the default arena block size
@param writeBufferSize the write buffer size (bytes) | calculateRocksDBDefaultArenaBlockSize | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryControllerUtils.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBMemoryControllerUtils.java | Apache-2.0 |
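To make the rounding in calculateRocksDBDefaultArenaBlockSize concrete, here is a standalone copy with worked inputs: a 64 MiB write buffer yields an 8 MiB arena block (already 4 KiB-aligned), while 1000 bytes gives 1000 / 8 = 125, which rounds up to 4096.

```java
public class ArenaBlockSize {

    /** Default arena block size: writeBufferSize / 8, rounded up to a 4 KiB boundary. */
    static long calculateRocksDBDefaultArenaBlockSize(long writeBufferSize) {
        long arenaBlockSize = writeBufferSize / 8;
        // Align up to 4k: integer trick for ceil(arenaBlockSize / align) * align
        final long align = 4 * 1024;
        return ((arenaBlockSize + align - 1) / align) * align;
    }
}
```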
public Collection<RocksDBProperty> getProperties() {
return Collections.unmodifiableCollection(properties);
} | @return the enabled RocksDB property-based metrics | getProperties | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBNativeMetricOptions.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBNativeMetricOptions.java | Apache-2.0 |
default RocksDBNativeMetricOptions createNativeMetricsOptions(
RocksDBNativeMetricOptions nativeMetricOptions) {
return nativeMetricOptions;
} | This method should enable certain RocksDB metrics to be forwarded to Flink's metrics
reporter.
<p>Enabling these monitoring options may degrade RocksDB performance and should be set with
care.
@param nativeMetricOptions The options object with the pre-defined options.
@return The options object on which the additional options are set. | createNativeMetricsOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBOptionsFactory.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBOptionsFactory.java | Apache-2.0 |
public DBOptions getDbOptions() {
// initial options from common profile
DBOptions opt = createBaseCommonDBOptions();
handlesToClose.add(opt);
// load configurable options on top of pre-defined profile
setDBOptionsFromConfigurableOptions(opt);
// add user-defined options factory, if specified
if (optionsFactory != null) {
opt = optionsFactory.createDBOptions(opt, handlesToClose);
}
// add necessary default options
opt = opt.setCreateIfMissing(true).setAvoidFlushDuringShutdown(true);
// if sharedResources is non-null, use the write buffer manager from it.
if (sharedResources != null) {
opt.setWriteBufferManager(sharedResources.getResourceHandle().getWriteBufferManager());
}
if (enableStatistics) {
Statistics statistics = new Statistics();
opt.setStatistics(statistics);
handlesToClose.add(statistics);
}
return opt;
} | Gets the RocksDB {@link DBOptions} to be used for RocksDB instances. | getDbOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | Apache-2.0 |
public ColumnFamilyOptions getColumnOptions() {
// initial options from common profile
ColumnFamilyOptions opt = createBaseCommonColumnOptions();
handlesToClose.add(opt);
// load configurable options on top of pre-defined profile
setColumnFamilyOptionsFromConfigurableOptions(opt, handlesToClose);
// add user-defined options, if specified
if (optionsFactory != null) {
opt = optionsFactory.createColumnOptions(opt, handlesToClose);
}
// if sharedResources is non-null, use the block cache from it and
// set necessary options for performance consideration with memory control
if (sharedResources != null) {
final RocksDBSharedResources rocksResources = sharedResources.getResourceHandle();
final Cache blockCache = rocksResources.getCache();
TableFormatConfig tableFormatConfig = opt.tableFormatConfig();
BlockBasedTableConfig blockBasedTableConfig;
if (tableFormatConfig == null) {
blockBasedTableConfig = new BlockBasedTableConfig();
} else {
Preconditions.checkArgument(
tableFormatConfig instanceof BlockBasedTableConfig,
"We currently only support BlockBasedTableConfig When bounding total memory.");
blockBasedTableConfig = (BlockBasedTableConfig) tableFormatConfig;
}
if (rocksResources.isUsingPartitionedIndexFilters()
&& overwriteFilterIfExist(blockBasedTableConfig)) {
blockBasedTableConfig.setIndexType(IndexType.kTwoLevelIndexSearch);
blockBasedTableConfig.setPartitionFilters(true);
blockBasedTableConfig.setPinTopLevelIndexAndFilter(true);
}
blockBasedTableConfig.setBlockCache(blockCache);
blockBasedTableConfig.setCacheIndexAndFilterBlocks(true);
blockBasedTableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);
blockBasedTableConfig.setPinL0FilterAndIndexBlocksInCache(true);
opt.setTableFormatConfig(blockBasedTableConfig);
}
return opt;
} | Gets the RocksDB {@link ColumnFamilyOptions} to be used for all RocksDB instances. | getColumnOptions | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | Apache-2.0 |
private void relocateDefaultDbLogDir(DBOptions dbOptions) {
String logFilePath = System.getProperty("log.file");
if (logFilePath != null) {
File logFile = resolveFileLocation(logFilePath);
if (logFile != null && resolveFileLocation(logFile.getParent()) != null) {
String relocatedDbLogDir = logFile.getParent();
this.relocatedDbLogBaseDir = new File(relocatedDbLogDir).toPath();
dbOptions.setDbLogDir(relocatedDbLogDir);
}
}
} | Relocates the default log directory of RocksDB to the Flink log directory. Finds the Flink
log directory using the 'log.file' Java system property that is set during startup.
@param dbOptions The RocksDB {@link DBOptions}. | relocateDefaultDbLogDir | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | Apache-2.0 |
private String resolveRelocatedDbLogPrefix(String instanceRocksDBAbsolutePath) {
if (!instanceRocksDBAbsolutePath.isEmpty()
&& !instanceRocksDBAbsolutePath.matches("^[a-zA-Z0-9\\-._].*")) {
instanceRocksDBAbsolutePath = instanceRocksDBAbsolutePath.substring(1);
}
return instanceRocksDBAbsolutePath.replaceAll("[^a-zA-Z0-9\\-._]", "_")
+ ROCKSDB_RELOCATE_LOG_SUFFIX;
} | Resolves the prefix of RocksDB's log file name according to RocksDB's log file naming rules. See
https://github.com/ververica/frocksdb/blob/FRocksDB-6.20.3/file/filename.cc#L30.
@param instanceRocksDBAbsolutePath The path where the rocksdb directory is located.
@return Resolved rocksdb log name prefix. | resolveRelocatedDbLogPrefix | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/RocksDBResourceContainer.java | Apache-2.0 |
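The sanitization in resolveRelocatedDbLogPrefix drops a leading path separator (so the prefix does not start with an underscore) and replaces every character outside [a-zA-Z0-9-._] with an underscore. A standalone sketch of the same logic; the suffix constant here is illustrative, the real value is defined elsewhere in Flink.

```java
public class DbLogPrefix {

    // Illustrative placeholder; the actual suffix constant is defined in Flink's sources.
    static final String ROCKSDB_RELOCATE_LOG_SUFFIX = "_LOG";

    static String resolveRelocatedDbLogPrefix(String instanceRocksDBAbsolutePath) {
        if (!instanceRocksDBAbsolutePath.isEmpty()
                && !instanceRocksDBAbsolutePath.matches("^[a-zA-Z0-9\\-._].*")) {
            // first character is not in the allowed set (e.g. a leading '/'): drop it
            instanceRocksDBAbsolutePath = instanceRocksDBAbsolutePath.substring(1);
        }
        // replace every remaining disallowed character with '_'
        return instanceRocksDBAbsolutePath.replaceAll("[^a-zA-Z0-9\\-._]", "_")
                + ROCKSDB_RELOCATE_LOG_SUFFIX;
    }
}
```

For instance, a path like "/tmp/flink/db" becomes "tmp_flink_db" plus the suffix.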
private void makeAllStateHandlesLocal(
Path absolutInstanceBasePath,
List<IncrementalLocalKeyedStateHandle> localKeyedStateHandlesOut,
List<StateHandleDownloadSpec> allDownloadSpecsOut)
throws Exception {
// Prepare and collect all the download request to pull remote state to a local directory
for (IncrementalKeyedStateHandle stateHandle : restoreStateHandles) {
if (stateHandle instanceof IncrementalRemoteKeyedStateHandle) {
StateHandleDownloadSpec downloadRequest =
new StateHandleDownloadSpec(
(IncrementalRemoteKeyedStateHandle) stateHandle,
absolutInstanceBasePath.resolve(UUID.randomUUID().toString()));
allDownloadSpecsOut.add(downloadRequest);
} else if (stateHandle instanceof IncrementalLocalKeyedStateHandle) {
localKeyedStateHandlesOut.add((IncrementalLocalKeyedStateHandle) stateHandle);
} else {
throw unexpectedStateHandleException(
EXPECTED_STATE_HANDLE_CLASSES, stateHandle.getClass());
}
}
allDownloadSpecsOut.stream()
.map(StateHandleDownloadSpec::createLocalStateHandleForDownloadedState)
.forEach(localKeyedStateHandlesOut::add);
transferRemoteStateToLocalDirectory(allDownloadSpecsOut);
} | Downloads and converts all {@link IncrementalRemoteKeyedStateHandle}s to {@link
IncrementalLocalKeyedStateHandle}s.
@param absolutInstanceBasePath the base path of the restoring DB instance as absolute path.
@param localKeyedStateHandlesOut the output parameter for the created {@link
IncrementalLocalKeyedStateHandle}s.
@param allDownloadSpecsOut output parameter for the created download specs.
@throws Exception if an unexpected state handle type is passed as argument. | makeAllStateHandlesLocal | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
private void initBaseDBFromSingleStateHandle(IncrementalLocalKeyedStateHandle stateHandle)
throws Exception {
logger.info(
"Starting opening base RocksDB instance in operator {} with target key-group range {} from state handle {}.",
operatorIdentifier,
keyGroupRange.prettyPrintInterval(),
stateHandle);
// Restore base DB from selected initial handle
restoreBaseDBFromLocalState(stateHandle);
KeyGroupRange stateHandleKeyGroupRange = stateHandle.getKeyGroupRange();
// Check if the key-groups range has changed.
if (Objects.equals(stateHandleKeyGroupRange, keyGroupRange)) {
// This is the case if we didn't rescale, so we can restore all the info from the
// previous backend instance (backend id and incremental checkpoint history).
restorePreviousIncrementalFilesStatus(stateHandle);
} else {
// If the key-groups don't match, this was a scale out, and we need to clip the
// key-groups range of the db to the target range for this backend.
try {
RocksDBIncrementalCheckpointUtils.clipDBWithKeyGroupRange(
this.rocksHandle.getDb(),
this.rocksHandle.getColumnFamilyHandles(),
keyGroupRange,
stateHandleKeyGroupRange,
keyGroupPrefixBytes,
useDeleteFilesInRange);
} catch (RocksDBException e) {
String errMsg = "Failed to clip DB after initialization.";
logger.error(errMsg, e);
throw new BackendBuildingException(errMsg, e);
}
}
logger.info(
"Finished opening base RocksDB instance in operator {} with target key-group range {}.",
operatorIdentifier,
keyGroupRange.prettyPrintInterval());
} | Initializes the base DB that we restore from a single local state handle.
@param stateHandle the state handle to restore the base DB from.
@throws Exception on any error during restore. | initBaseDBFromSingleStateHandle | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
private void restoreFromMultipleStateHandles(
List<IncrementalLocalKeyedStateHandle> localKeyedStateHandles) throws Exception {
logger.info(
"Starting to restore backend with range {} in operator {} from multiple state handles {} with useIngestDbRestoreMode = {}.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier,
localKeyedStateHandles,
useIngestDbRestoreMode);
byte[] startKeyGroupPrefixBytes = new byte[keyGroupPrefixBytes];
CompositeKeySerializationUtils.serializeKeyGroup(
keyGroupRange.getStartKeyGroup(), startKeyGroupPrefixBytes);
byte[] stopKeyGroupPrefixBytes = new byte[keyGroupPrefixBytes];
CompositeKeySerializationUtils.serializeKeyGroup(
keyGroupRange.getEndKeyGroup() + 1, stopKeyGroupPrefixBytes);
if (useIngestDbRestoreMode) {
// Optimized path for merging multiple handles with Ingest/Clip
mergeStateHandlesWithClipAndIngest(
localKeyedStateHandles, startKeyGroupPrefixBytes, stopKeyGroupPrefixBytes);
} else {
// Optimized path for single handle and legacy path for merging multiple handles.
mergeStateHandlesWithCopyFromTemporaryInstance(
localKeyedStateHandles, startKeyGroupPrefixBytes, stopKeyGroupPrefixBytes);
}
logger.info(
"Completed restoring backend with range {} in operator {} from multiple state handles with useIngestDbRestoreMode = {}.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier,
useIngestDbRestoreMode);
} | Initializes the base DB that we restore from a list of multiple local state handles.
@param localKeyedStateHandles the list of state handles to restore the base DB from.
@throws Exception on any error during restore. | restoreFromMultipleStateHandles | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
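The start/stop prefixes computed above delimit the backend's key space as a half-open byte range: the start prefix encodes getStartKeyGroup() and the stop prefix encodes getEndKeyGroup() + 1. A sketch of the assumed big-endian key-group encoding (Flink's actual CompositeKeySerializationUtils may differ in detail):

```java
public class KeyGroupPrefixDemo {
    /**
     * Writes a key-group id big-endian into a prefix of the given length.
     * This mirrors, as an assumption, what serializeKeyGroup does in the code above.
     */
    static byte[] prefixBytes(int keyGroupId, int prefixLength) {
        byte[] out = new byte[prefixLength];
        for (int i = prefixLength - 1; i >= 0; i--) {
            out[i] = (byte) (keyGroupId & 0xFF); // low byte goes last (big-endian)
            keyGroupId >>>= 8;
        }
        return out;
    }
}
```

With a 2-byte prefix, a backend owning key groups [5, 257] would cover the byte range [{0,5}, {1,2}) — the exclusive stop bound is why the code serializes getEndKeyGroup() + 1.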
private void mergeStateHandlesWithClipAndIngest(
List<IncrementalLocalKeyedStateHandle> localKeyedStateHandles,
byte[] startKeyGroupPrefixBytes,
byte[] stopKeyGroupPrefixBytes)
throws Exception {
final Path absolutInstanceBasePath = instanceBasePath.getAbsoluteFile().toPath();
final Path exportCfBasePath = absolutInstanceBasePath.resolve("export-cfs");
Files.createDirectories(exportCfBasePath);
final Map<RegisteredStateMetaInfoBase.Key, List<ExportImportFilesMetaData>>
exportedColumnFamilyMetaData = new HashMap<>(localKeyedStateHandles.size());
final List<IncrementalLocalKeyedStateHandle> notImportableHandles =
new ArrayList<>(localKeyedStateHandles.size());
try {
KeyGroupRange exportedSstKeyGroupsRange =
exportColumnFamiliesWithSstDataInKeyGroupsRange(
exportCfBasePath,
localKeyedStateHandles,
exportedColumnFamilyMetaData,
notImportableHandles);
if (exportedColumnFamilyMetaData.isEmpty()) {
                // Nothing could be exported, so we fall back to
// #mergeStateHandlesWithCopyFromTemporaryInstance
mergeStateHandlesWithCopyFromTemporaryInstance(
notImportableHandles, startKeyGroupPrefixBytes, stopKeyGroupPrefixBytes);
} else {
// We initialize the base DB by importing all the exported data.
initBaseDBFromColumnFamilyImports(
exportedColumnFamilyMetaData, exportedSstKeyGroupsRange);
// Copy data from handles that we couldn't directly import using temporary
// instances.
copyToBaseDBUsingTempDBs(
notImportableHandles, startKeyGroupPrefixBytes, stopKeyGroupPrefixBytes);
}
} finally {
// Close native RocksDB objects
exportedColumnFamilyMetaData.values().forEach(IOUtils::closeAllQuietly);
// Cleanup export base directory
cleanUpPathQuietly(exportCfBasePath);
}
} | Restores the base DB by merging multiple state handles into one. This method first checks
whether all the data to import is within the expected key-groups range; if so, it uses
export/import. Otherwise, it falls back to copying the data through a temporary DB.
@param localKeyedStateHandles the list of state handles to restore the base DB from.
@param startKeyGroupPrefixBytes the min/start key of the key groups range as bytes.
@param stopKeyGroupPrefixBytes the max+1/end key of the key groups range as bytes.
@throws Exception on any restore error. | mergeStateHandlesWithClipAndIngest | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
private KeyGroupRange exportColumnFamiliesWithSstDataInKeyGroupsRange(
Path exportCfBasePath,
List<IncrementalLocalKeyedStateHandle> localKeyedStateHandles,
Map<RegisteredStateMetaInfoBase.Key, List<ExportImportFilesMetaData>>
exportedColumnFamiliesOut,
List<IncrementalLocalKeyedStateHandle> skipped)
throws Exception {
logger.info(
"Starting restore export for backend with range {} in operator {}.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier);
int minExportKeyGroup = Integer.MAX_VALUE;
int maxExportKeyGroup = Integer.MIN_VALUE;
int index = 0;
for (IncrementalLocalKeyedStateHandle stateHandle : localKeyedStateHandles) {
final String logLineSuffix =
" for state handle at index "
+ index
+ " with proclaimed key-group range "
+ stateHandle.getKeyGroupRange().prettyPrintInterval()
+ " for backend with range "
+ keyGroupRange.prettyPrintInterval()
+ " in operator "
+ operatorIdentifier
+ ".";
logger.debug("Opening temporary database" + logLineSuffix);
try (RestoredDBInstance tmpRestoreDBInfo =
restoreTempDBInstanceFromLocalState(stateHandle)) {
List<ColumnFamilyHandle> tmpColumnFamilyHandles =
tmpRestoreDBInfo.columnFamilyHandles;
logger.debug("Checking actual keys of sst files" + logLineSuffix);
// Check if the data in all SST files referenced in the handle is within the
// proclaimed key-groups range of the handle.
RocksDBIncrementalCheckpointUtils.RangeCheckResult rangeCheckResult =
RocksDBIncrementalCheckpointUtils.checkSstDataAgainstKeyGroupRange(
tmpRestoreDBInfo.db,
keyGroupPrefixBytes,
stateHandle.getKeyGroupRange());
logger.info("{}" + logLineSuffix, rangeCheckResult);
if (rangeCheckResult.allInRange()) {
logger.debug("Start exporting" + logLineSuffix);
List<RegisteredStateMetaInfoBase> registeredStateMetaInfoBases =
tmpRestoreDBInfo.stateMetaInfoSnapshots.stream()
.map(RegisteredStateMetaInfoBase::fromMetaInfoSnapshot)
.collect(Collectors.toList());
// Export all the Column Families and store the result in
// exportedColumnFamiliesOut
RocksDBIncrementalCheckpointUtils.exportColumnFamilies(
tmpRestoreDBInfo.db,
tmpColumnFamilyHandles,
registeredStateMetaInfoBases,
exportCfBasePath,
exportedColumnFamiliesOut);
minExportKeyGroup =
Math.min(
minExportKeyGroup,
stateHandle.getKeyGroupRange().getStartKeyGroup());
maxExportKeyGroup =
Math.max(
maxExportKeyGroup,
stateHandle.getKeyGroupRange().getEndKeyGroup());
logger.debug("Done exporting" + logLineSuffix);
} else {
// Actual key range in files exceeds proclaimed range, cannot import. We
// will copy this handle using a temporary DB later.
skipped.add(stateHandle);
logger.debug("Skipped export" + logLineSuffix);
}
}
++index;
}
KeyGroupRange exportedKeyGroupsRange =
minExportKeyGroup <= maxExportKeyGroup
? new KeyGroupRange(minExportKeyGroup, maxExportKeyGroup)
: KeyGroupRange.EMPTY_KEY_GROUP_RANGE;
logger.info(
"Completed restore export for backend with range {} in operator {}. {} exported handles with overall exported range {}. {} Skipped handles: {}.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier,
localKeyedStateHandles.size() - skipped.size(),
exportedKeyGroupsRange.prettyPrintInterval(),
skipped.size(),
skipped);
return exportedKeyGroupsRange;
} | Prepares the data for importing by exporting from temporary RocksDB instances. We can only
import data that does not exceed the proclaimed key-groups range, so we skip state handles whose
data exceeds their range.
@param exportCfBasePath the base path for the export files.
@param localKeyedStateHandles the state handles to prepare for import.
@param exportedColumnFamiliesOut output parameter for the metadata of completed exports.
@param skipped output parameter for state handles that could not be exported because the data
exceeded the proclaimed range.
@return the total key-groups range of the exported data.
@throws Exception on any export error. | exportColumnFamiliesWithSstDataInKeyGroupsRange | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
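The overall exported range is accumulated as the min/max over all successfully exported handles, falling back to an empty range when nothing was exported (the minimum then stays above the maximum). A compact sketch of that accumulation:

```java
public class ExportRangeDemo {
    /** Returns {start, end} of the union of the given ranges, or null when nothing was exported. */
    static int[] overallRange(int[][] ranges) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int[] r : ranges) {
            min = Math.min(min, r[0]);
            max = Math.max(max, r[1]);
        }
        // Mirrors the "minExportKeyGroup <= maxExportKeyGroup" check in the code above,
        // which falls back to KeyGroupRange.EMPTY_KEY_GROUP_RANGE.
        return min <= max ? new int[] {min, max} : null;
    }
}
```

Note the union can span gaps between handles; the result is the covering interval, not an exact union of key groups.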
private void mergeStateHandlesWithCopyFromTemporaryInstance(
List<IncrementalLocalKeyedStateHandle> localKeyedStateHandles,
byte[] startKeyGroupPrefixBytes,
byte[] stopKeyGroupPrefixBytes)
throws Exception {
logger.info(
"Starting to merge state for backend with range {} in operator {} from multiple state handles using temporary instances.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier);
// Choose the best state handle for the initial DB and remove it from the list
final IncrementalLocalKeyedStateHandle selectedInitialHandle =
localKeyedStateHandles.remove(
RocksDBIncrementalCheckpointUtils.findTheBestStateHandleForInitial(
localKeyedStateHandles, keyGroupRange, overlapFractionThreshold));
Preconditions.checkNotNull(selectedInitialHandle);
// Init the base DB instance with the initial state
initBaseDBFromSingleStateHandle(selectedInitialHandle);
// Copy remaining handles using temporary RocksDB instances
copyToBaseDBUsingTempDBs(
localKeyedStateHandles, startKeyGroupPrefixBytes, stopKeyGroupPrefixBytes);
logger.info(
"Completed merging state for backend with range {} in operator {} from multiple state handles using temporary instances.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier);
} | Helper method that merges the data from multiple state handles into the restoring base DB with
the help of temporary RocksDB instances used for copying.
@param localKeyedStateHandles the state handles to merge into the base DB.
@param startKeyGroupPrefixBytes the min/start key of the key groups range as bytes.
@param stopKeyGroupPrefixBytes the max+1/end key of the key groups range as bytes.
@throws Exception on any merge error. | mergeStateHandlesWithCopyFromTemporaryInstance | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
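The initial handle is chosen via findTheBestStateHandleForInitial, which — judging by its name and the overlapFractionThreshold parameter — favors the handle whose key-group range overlaps the target range the most. A hypothetical scoring sketch, not Flink's exact heuristic:

```java
import java.util.List;

public class BestHandleDemo {
    /** Picks the index of the range {start, end} with the largest overlap with [targetStart, targetEnd]. */
    static int bestIndex(List<int[]> handleRanges, int targetStart, int targetEnd) {
        int best = -1;
        int bestOverlap = -1;
        for (int i = 0; i < handleRanges.size(); i++) {
            int[] r = handleRanges.get(i);
            // Number of key groups in the intersection (0 when disjoint).
            int overlap = Math.max(0, Math.min(r[1], targetEnd) - Math.max(r[0], targetStart) + 1);
            if (overlap > bestOverlap) {
                bestOverlap = overlap;
                best = i;
            }
        }
        return best;
    }
}
```

Starting from the best-overlapping handle minimizes how much data the subsequent clip and copy steps have to touch.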
private void restorePreviousIncrementalFilesStatus(
IncrementalKeyedStateHandle localKeyedStateHandle) {
backendUID = localKeyedStateHandle.getBackendIdentifier();
restoredSstFiles.put(
localKeyedStateHandle.getCheckpointId(),
localKeyedStateHandle.getSharedStateHandles());
lastCompletedCheckpointId = localKeyedStateHandle.getCheckpointId();
logger.info(
"Restored previous incremental files status in backend with range {} in operator {}: backend uuid {}, last checkpoint id {}.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier,
backendUID,
lastCompletedCheckpointId);
} | Restores the checkpointing status and state for this backend. This can only be done if the
backend was not rescaled and is therefore identical to the source backend in the previous
run.
@param localKeyedStateHandle the single state handle from which the backend is restored. | restorePreviousIncrementalFilesStatus | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
private void restoreBaseDBFromLocalState(IncrementalLocalKeyedStateHandle localKeyedStateHandle)
throws Exception {
KeyedBackendSerializationProxy<K> serializationProxy =
readMetaData(localKeyedStateHandle.getMetaDataStateHandle());
List<StateMetaInfoSnapshot> stateMetaInfoSnapshots =
serializationProxy.getStateMetaInfoSnapshots();
Path restoreSourcePath = localKeyedStateHandle.getDirectoryStateHandle().getDirectory();
this.rocksHandle.openDB(
createColumnFamilyDescriptors(stateMetaInfoSnapshots, true),
stateMetaInfoSnapshots,
restoreSourcePath,
cancelStreamRegistryForRestore);
} | Restores the base DB from local state of a single state handle.
@param localKeyedStateHandle the state handle to restore from.
@throws Exception on any restore error. | restoreBaseDBFromLocalState | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
private void transferRemoteStateToLocalDirectory(
Collection<StateHandleDownloadSpec> downloadSpecs) throws Exception {
logger.info(
"Start downloading remote state to local directory in operator {} for target key-group range {}.",
operatorIdentifier,
keyGroupRange.prettyPrintInterval());
try (RocksDBStateDownloader rocksDBStateDownloader =
new RocksDBStateDownloader(
RocksDBStateDataTransferHelper.forThreadNumIfSpecified(
numberOfTransferringThreads, ioExecutor))) {
rocksDBStateDownloader.transferAllStateDataToDirectory(
downloadSpecs, cancelStreamRegistryForRestore);
logger.info(
"Finished downloading remote state to local directory in operator {} for target key-group range {}.",
operatorIdentifier,
keyGroupRange.prettyPrintInterval());
}
} | Helper method to download files, as specified in the given download specs, to the local
directory.
@param downloadSpecs specifications of files to download.
@throws Exception On any download error. | transferRemoteStateToLocalDirectory | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
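Under the hood, transferring the downloads amounts to fanning the file transfers out over an executor and waiting for all of them. A hedged sketch of that pattern (plain local file copies standing in for Flink's remote-to-local state transfer):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.concurrent.*;

public class ParallelDownloadDemo {
    /** Copies each source file into the target directory using the given number of threads. */
    static void transferAll(List<Path> sources, Path targetDir, int threads)
            throws InterruptedException, ExecutionException, IOException {
        Files.createDirectories(targetDir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<?>> futures = new java.util.ArrayList<>();
            for (Path src : sources) {
                futures.add(
                        pool.submit(
                                () -> {
                                    Files.copy(
                                            src,
                                            targetDir.resolve(src.getFileName()),
                                            StandardCopyOption.REPLACE_EXISTING);
                                    return null;
                                }));
            }
            for (Future<?> f : futures) {
                f.get(); // surfaces any transfer failure as an ExecutionException
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

The real RocksDBStateDownloader additionally wires the transfers into a CloseableRegistry so an in-flight restore can be cancelled.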
private void copyToBaseDBUsingTempDBs(
List<IncrementalLocalKeyedStateHandle> toImport,
byte[] startKeyGroupPrefixBytes,
byte[] stopKeyGroupPrefixBytes)
throws Exception {
if (toImport.isEmpty()) {
return;
}
logger.info(
"Starting to copy state handles for backend with range {} in operator {} using temporary instances.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier);
try (RocksDBWriteBatchWrapper writeBatchWrapper =
new RocksDBWriteBatchWrapper(this.rocksHandle.getDb(), writeBatchSize);
Closeable ignored =
cancelStreamRegistryForRestore.registerCloseableTemporarily(
writeBatchWrapper.getCancelCloseable())) {
for (IncrementalLocalKeyedStateHandle handleToCopy : toImport) {
try (RestoredDBInstance restoredDBInstance =
restoreTempDBInstanceFromLocalState(handleToCopy)) {
copyTempDbIntoBaseDb(
restoredDBInstance,
writeBatchWrapper,
startKeyGroupPrefixBytes,
stopKeyGroupPrefixBytes);
}
}
}
logger.info(
                "Completed copying state handles for backend with range {} in operator {} using temporary instances.",
keyGroupRange.prettyPrintInterval(),
operatorIdentifier);
} | Helper method to copy all data from the given local state handles to the base DB by using
temporary DB instances.
@param toImport the state handles to import.
@param startKeyGroupPrefixBytes the min/start key of the key groups range as bytes.
@param stopKeyGroupPrefixBytes the max+1/end key of the key groups range as bytes.
@throws Exception on any copy error. | copyToBaseDBUsingTempDBs | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/restore/RocksDBIncrementalRestoreOperation.java | Apache-2.0 |
protected Optional<KeyedStateHandle> getLocalSnapshot(
@Nullable StreamStateHandle localStreamStateHandle,
List<HandleAndLocalPath> sharedState)
throws IOException {
final DirectoryStateHandle directoryStateHandle =
localBackupDirectory.completeSnapshotAndGetHandle();
if (directoryStateHandle != null && localStreamStateHandle != null) {
return Optional.of(
new IncrementalLocalKeyedStateHandle(
backendUID,
checkpointId,
directoryStateHandle,
keyGroupRange,
localStreamStateHandle,
sharedState));
} else {
return Optional.empty();
}
} | Local directory for the RocksDB native backup. | getLocalSnapshot | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksDBSnapshotStrategyBase.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksDBSnapshotStrategyBase.java | Apache-2.0 |
@Override
public void release() {
try {
if (snapshotDirectory.exists()) {
LOG.trace(
"Running cleanup for local RocksDB backup directory {}.",
snapshotDirectory);
boolean cleanupOk = snapshotDirectory.cleanup();
if (!cleanupOk) {
LOG.debug("Could not properly cleanup local RocksDB backup directory.");
}
}
} catch (IOException e) {
LOG.warn("Could not properly cleanup local RocksDB backup directory.", e);
}
} | A {@link SnapshotResources} for native rocksdb snapshot. | release | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksDBSnapshotStrategyBase.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksDBSnapshotStrategyBase.java | Apache-2.0 |
@Override
public SnapshotResult<KeyedStateHandle> get(CloseableRegistry snapshotCloseableRegistry)
throws Exception {
boolean completed = false;
// Handle to the meta data file
SnapshotResult<StreamStateHandle> metaStateHandle = null;
// Handles to new sst files since the last completed checkpoint will go here
final List<HandleAndLocalPath> sstFiles = new ArrayList<>();
// Handles to the misc files in the current snapshot will go here
final List<HandleAndLocalPath> miscFiles = new ArrayList<>();
final List<StreamStateHandle> reusedHandle = new ArrayList<>();
try {
metaStateHandle =
materializeMetaData(
snapshotCloseableRegistry,
tmpResourcesRegistry,
stateMetaInfoSnapshots,
checkpointId,
checkpointStreamFactory);
// Sanity checks - they should never fail
Preconditions.checkNotNull(metaStateHandle, "Metadata was not properly created.");
Preconditions.checkNotNull(
metaStateHandle.getJobManagerOwnedSnapshot(),
"Metadata for job manager was not properly created.");
long checkpointedSize = metaStateHandle.getStateSize();
checkpointedSize +=
uploadSnapshotFiles(
sstFiles,
miscFiles,
snapshotCloseableRegistry,
tmpResourcesRegistry,
reusedHandle);
            // We make the 'sstFiles' the 'sharedState' in IncrementalRemoteKeyedStateHandle,
            // whether they belong to the shared CheckpointedStateScope or the exclusive
            // CheckpointedStateScope.
// In this way, the first checkpoint after job recovery can be an incremental
// checkpoint in CLAIM mode, either restoring from checkpoint or restoring from
// native savepoint.
// And this has no effect on the registration of shareState currently, because the
// snapshot result of native savepoint would not be registered into
// 'SharedStateRegistry'.
final IncrementalRemoteKeyedStateHandle jmIncrementalKeyedStateHandle =
new IncrementalRemoteKeyedStateHandle(
backendUID,
keyGroupRange,
checkpointId,
sstFiles,
miscFiles,
metaStateHandle.getJobManagerOwnedSnapshot(),
checkpointedSize);
Optional<KeyedStateHandle> localSnapshot =
getLocalSnapshot(metaStateHandle.getTaskLocalSnapshot(), sstFiles);
final SnapshotResult<KeyedStateHandle> snapshotResult =
localSnapshot
.map(
keyedStateHandle ->
SnapshotResult.withLocalState(
jmIncrementalKeyedStateHandle,
keyedStateHandle))
.orElseGet(() -> SnapshotResult.of(jmIncrementalKeyedStateHandle));
completed = true;
return snapshotResult;
} finally {
if (!completed) {
cleanupIncompleteSnapshot(tmpResourcesRegistry, localBackupDirectory);
} else {
// Report the reuse of state handle to stream factory, which is essential for
// file merging mechanism.
checkpointStreamFactory.reusePreviousStateHandle(reusedHandle);
}
}
} | All sst files that were part of the last previously completed checkpoint. | get | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksIncrementalSnapshotStrategy.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksIncrementalSnapshotStrategy.java | Apache-2.0 |
@Override
public SnapshotResult<KeyedStateHandle> get(CloseableRegistry snapshotCloseableRegistry)
throws Exception {
boolean completed = false;
// Handle to the meta data file
SnapshotResult<StreamStateHandle> metaStateHandle = null;
// Handles to all the files in the current snapshot will go here
final List<HandleAndLocalPath> privateFiles = new ArrayList<>();
try {
metaStateHandle =
materializeMetaData(
snapshotCloseableRegistry,
tmpResourcesRegistry,
stateMetaInfoSnapshots,
checkpointId,
checkpointStreamFactory);
// Sanity checks - they should never fail
Preconditions.checkNotNull(metaStateHandle, "Metadata was not properly created.");
Preconditions.checkNotNull(
metaStateHandle.getJobManagerOwnedSnapshot(),
"Metadata for job manager was not properly created.");
long checkpointedSize = metaStateHandle.getStateSize();
checkpointedSize +=
uploadSnapshotFiles(
privateFiles, snapshotCloseableRegistry, tmpResourcesRegistry);
final IncrementalRemoteKeyedStateHandle jmIncrementalKeyedStateHandle =
new IncrementalRemoteKeyedStateHandle(
backendUID,
keyGroupRange,
checkpointId,
Collections.emptyList(),
privateFiles,
metaStateHandle.getJobManagerOwnedSnapshot(),
checkpointedSize);
Optional<KeyedStateHandle> localSnapshot =
getLocalSnapshot(
metaStateHandle.getTaskLocalSnapshot(), Collections.emptyList());
final SnapshotResult<KeyedStateHandle> snapshotResult =
localSnapshot
.map(
keyedStateHandle ->
SnapshotResult.withLocalState(
jmIncrementalKeyedStateHandle,
keyedStateHandle))
.orElseGet(() -> SnapshotResult.of(jmIncrementalKeyedStateHandle));
completed = true;
return snapshotResult;
} finally {
if (!completed) {
cleanupIncompleteSnapshot(tmpResourcesRegistry, localBackupDirectory);
}
}
} | Encapsulates the process to perform a full snapshot of a RocksDBKeyedStateBackend. | get | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksNativeFullSnapshotStrategy.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/snapshot/RocksNativeFullSnapshotStrategy.java | Apache-2.0 |
void compact(ColumnFamilyHandle cfName, int level, List<String> files) throws RocksDBException {
int outputLevel = Math.min(level + 1, cfName.getDescriptor().getOptions().numLevels() - 1);
LOG.debug(
"Manually compacting {} files from level {} to {}: {}",
files.size(),
level,
outputLevel,
files);
try (CompactionOptions options =
new CompactionOptions().setOutputFileSizeLimit(targetOutputFileSize);
CompactionJobInfo compactionJobInfo = new CompactionJobInfo()) {
db.compactFiles(options, cfName, files, outputLevel, OUTPUT_PATH_ID, compactionJobInfo);
}
} | Compacts multiple RocksDB SST files using {@link RocksDB#compactFiles(CompactionOptions,
ColumnFamilyHandle, List, int, int, CompactionJobInfo) RocksDB#compactFiles} into the next level
(capped at the last configured level). Usually this results in a single SST file if it doesn't
exceed the RocksDB target output file size for that level. | compact | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/sstmerge/Compactor.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/state/rocksdb/sstmerge/Compactor.java | Apache-2.0 |
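The output-level computation simply targets the next level down while clamping to the deepest configured level, so files already at the bottom are compacted in place:

```java
public class OutputLevelDemo {
    /** Mirrors outputLevel = Math.min(level + 1, numLevels - 1) from the compactor above. */
    static int outputLevel(int inputLevel, int numLevels) {
        return Math.min(inputLevel + 1, numLevels - 1);
    }
}
```

For example, with RocksDB's default of 7 levels, input files at level 2 are written to level 3, while files already at level 6 stay at level 6.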
@Test
public void testTwoSeparateClassLoaders() throws Exception {
// collect the libraries / class folders with RocksDB related code: the state backend and
// RocksDB itself
final URL codePath1 =
EmbeddedRocksDBStateBackend.class
.getProtectionDomain()
.getCodeSource()
.getLocation();
final URL codePath2 = RocksDB.class.getProtectionDomain().getCodeSource().getLocation();
final ClassLoader parent = getClass().getClassLoader();
final ClassLoader loader1 =
FlinkUserCodeClassLoaders.childFirst(
new URL[] {codePath1, codePath2},
parent,
new String[0],
NOOP_EXCEPTION_HANDLER,
true);
final ClassLoader loader2 =
FlinkUserCodeClassLoaders.childFirst(
new URL[] {codePath1, codePath2},
parent,
new String[0],
NOOP_EXCEPTION_HANDLER,
true);
final String className = EmbeddedRocksDBStateBackend.class.getName();
final Class<?> clazz1 = Class.forName(className, false, loader1);
final Class<?> clazz2 = Class.forName(className, false, loader2);
assertNotEquals(
"Test broken - the two reflectively loaded classes are equal", clazz1, clazz2);
final Object instance1 = clazz1.getConstructor().newInstance();
final Object instance2 = clazz2.getConstructor().newInstance();
final String tempDir = tmp.newFolder().getAbsolutePath();
final Method meth1 = clazz1.getDeclaredMethod("ensureRocksDBIsLoaded", String.class);
final Method meth2 = clazz2.getDeclaredMethod("ensureRocksDBIsLoaded", String.class);
meth1.setAccessible(true);
meth2.setAccessible(true);
// if all is well, these methods can both complete successfully
meth1.invoke(instance1, tempDir);
meth2.invoke(instance2, tempDir);
} | This test validates that the RocksDB JNI library loading works properly in the presence of the
RocksDB code being loaded dynamically via reflection. That can happen when RocksDB is in the user
code JAR, or in certain test setups. | testTwoSeparateClassLoaders | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDbMultiClassLoaderTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDbMultiClassLoaderTest.java | Apache-2.0 |
@Test
public void testSharedResourcesAfterClose() throws Exception {
OpaqueMemoryResource<RocksDBSharedResources> sharedResources = getSharedResources();
RocksDBResourceContainer container =
new RocksDBResourceContainer(PredefinedOptions.DEFAULT, null, sharedResources);
container.close();
RocksDBSharedResources rocksDBSharedResources = sharedResources.getResourceHandle();
assertThat(rocksDBSharedResources.getCache().isOwningHandle(), is(false));
assertThat(rocksDBSharedResources.getWriteBufferManager().isOwningHandle(), is(false));
} | Guards that the shared resources are released after {@link RocksDBResourceContainer#close()}
when the {@link RocksDBResourceContainer} instance is initialized with an {@link
OpaqueMemoryResource}.
@throws Exception if unexpected error happened. | testSharedResourcesAfterClose | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBResourceContainerTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBResourceContainerTest.java | Apache-2.0 |
@Test
public void testGetDbOptionsWithSharedResources() throws Exception {
final int optionNumber = 20;
OpaqueMemoryResource<RocksDBSharedResources> sharedResources = getSharedResources();
RocksDBResourceContainer container =
new RocksDBResourceContainer(PredefinedOptions.DEFAULT, null, sharedResources);
HashSet<WriteBufferManager> writeBufferManagers = new HashSet<>();
for (int i = 0; i < optionNumber; i++) {
DBOptions dbOptions = container.getDbOptions();
WriteBufferManager writeBufferManager = getWriteBufferManager(dbOptions);
writeBufferManagers.add(writeBufferManager);
}
assertThat(writeBufferManagers.size(), is(1));
assertThat(
writeBufferManagers.iterator().next(),
is(sharedResources.getResourceHandle().getWriteBufferManager()));
container.close();
} | Guards that {@link RocksDBResourceContainer#getDbOptions()} shares the same {@link
WriteBufferManager} instance if the {@link RocksDBResourceContainer} instance is initialized
with {@link OpaqueMemoryResource}.
@throws Exception if unexpected error happened. | testGetDbOptionsWithSharedResources | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBResourceContainerTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBResourceContainerTest.java | Apache-2.0 |
@Test
public void testMultiThreadRestoreCorrectly() throws Exception {
int numRemoteHandles = 3;
int numSubHandles = 6;
byte[][][] contents = createContents(numRemoteHandles, numSubHandles);
List<StateHandleDownloadSpec> downloadRequests = new ArrayList<>(numRemoteHandles);
for (int i = 0; i < numRemoteHandles; ++i) {
downloadRequests.add(
createDownloadRequestForContent(
temporaryFolder.newFolder().toPath(), contents[i], i));
}
try (RocksDBStateDownloader rocksDBStateDownloader = new RocksDBStateDownloader(4)) {
rocksDBStateDownloader.transferAllStateDataToDirectory(
downloadRequests, new CloseableRegistry());
}
for (int i = 0; i < numRemoteHandles; ++i) {
StateHandleDownloadSpec downloadRequest = downloadRequests.get(i);
Path dstPath = downloadRequest.getDownloadDestination();
Assert.assertTrue(dstPath.toFile().exists());
for (int j = 0; j < numSubHandles; ++j) {
assertStateContentEqual(
contents[i][j], dstPath.resolve(String.format("sharedState-%d-%d", i, j)));
}
}
} | Tests that downloading files with multiple threads works correctly. | testMultiThreadRestoreCorrectly | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBStateDownloaderTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBStateDownloaderTest.java | Apache-2.0 |
@Test
public void testUseOptimizePointLookupWithMapState() throws Exception {
EmbeddedRocksDBStateBackend rocksDBStateBackend =
createStateBackendWithOptimizePointLookup();
RocksDBKeyedStateBackend<Integer> keyedStateBackend =
createKeyedStateBackend(
rocksDBStateBackend,
new MockEnvironmentBuilder().build(),
IntSerializer.INSTANCE);
try {
MapStateDescriptor<Integer, Long> stateDescriptor =
new MapStateDescriptor<>(
"map", IntSerializer.INSTANCE, LongSerializer.INSTANCE);
MapState<Integer, Long> mapState =
keyedStateBackend.getPartitionedState(
VoidNamespace.INSTANCE,
VoidNamespaceSerializer.INSTANCE,
stateDescriptor);
keyedStateBackend.setCurrentKey(1);
Map<Integer, Long> expectedResult = new HashMap<>();
for (int i = 0; i < 100; i++) {
long uv = ThreadLocalRandom.current().nextLong();
mapState.put(i, uv);
expectedResult.put(i, uv);
}
Iterator<Map.Entry<Integer, Long>> iterator = mapState.entries().iterator();
while (iterator.hasNext()) {
Map.Entry<Integer, Long> entry = iterator.next();
assertEquals(entry.getValue(), expectedResult.remove(entry.getKey()));
iterator.remove();
}
assertTrue(expectedResult.isEmpty());
assertTrue(mapState.isEmpty());
} finally {
keyedStateBackend.dispose();
}
} | Tests the case where the user chooses optimizeForPointLookup together with the iterator
interfaces on map state. | testUseOptimizePointLookupWithMapState | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBStateOptionTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBStateOptionTest.java | Apache-2.0 |
@Test
public void testWriteBatchWrapperFlushAfterMemorySizeExceed() throws Exception {
try (RocksDB db = RocksDB.open(folder.newFolder().getAbsolutePath());
WriteOptions options = new WriteOptions().setDisableWAL(true);
ColumnFamilyHandle handle =
db.createColumnFamily(new ColumnFamilyDescriptor("test".getBytes()));
RocksDBWriteBatchWrapper writeBatchWrapper =
new RocksDBWriteBatchWrapper(db, options, 200, 50)) {
long initBatchSize = writeBatchWrapper.getDataSize();
byte[] dummy = new byte[6];
ThreadLocalRandom.current().nextBytes(dummy);
// will add 1 + 1 + 1 + 6 + 1 + 6 = 16 bytes for each KV
// format is [handleType|kvType|keyLen|key|valueLen|value]
// more information please ref write_batch.cc in RocksDB
writeBatchWrapper.put(handle, dummy, dummy);
assertEquals(initBatchSize + 16, writeBatchWrapper.getDataSize());
writeBatchWrapper.put(handle, dummy, dummy);
assertEquals(initBatchSize + 32, writeBatchWrapper.getDataSize());
writeBatchWrapper.put(handle, dummy, dummy);
// will flush all, then an empty write batch
assertEquals(initBatchSize, writeBatchWrapper.getDataSize());
}
} | Tests that {@link RocksDBWriteBatchWrapper} flushes after the memory consumed exceeds the
preconfigured value. | testWriteBatchWrapperFlushAfterMemorySizeExceed | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | Apache-2.0 |
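The flush-on-threshold behavior that the two tests around here verify can be illustrated without RocksDB. The sketch below is a hypothetical stand-in for `RocksDBWriteBatchWrapper` (not its real implementation): it buffers key/value pairs and flushes when either a record-count or a byte-size threshold is reached. The 16-bytes-per-KV accounting in the real wrapper is RocksDB-specific; here a simple key-plus-value length stands in:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a write-batch wrapper: buffers key/value pairs and
// flushes when either the record-count or the byte-size threshold is reached.
public class BatchFlushSketch {
    private final int maxCount;
    private final long maxBytes;
    private final List<byte[][]> batch = new ArrayList<>();
    private long dataSize;
    private int flushes;

    public BatchFlushSketch(int maxCount, long maxBytes) {
        this.maxCount = maxCount;
        this.maxBytes = maxBytes;
    }

    public void put(byte[] key, byte[] value) {
        batch.add(new byte[][] {key, value});
        dataSize += key.length + value.length; // simplified size accounting
        if (batch.size() >= maxCount || dataSize >= maxBytes) {
            flush();
        }
    }

    public void flush() {
        // A real wrapper would hand the buffered batch to the store here.
        batch.clear();
        dataSize = 0;
        flushes++;
    }

    public long getDataSize() {
        return dataSize;
    }

    public int getFlushCount() {
        return flushes;
    }
}
```

With a 50-byte limit and 12 bytes per put, the fifth put crosses the threshold and empties the batch, matching the shape of the assertions in the tests above.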
@Test
public void testWriteBatchWrapperFlushAfterCountExceed() throws Exception {
try (RocksDB db = RocksDB.open(folder.newFolder().getAbsolutePath());
WriteOptions options = new WriteOptions().setDisableWAL(true);
ColumnFamilyHandle handle =
db.createColumnFamily(new ColumnFamilyDescriptor("test".getBytes()));
RocksDBWriteBatchWrapper writeBatchWrapper =
new RocksDBWriteBatchWrapper(db, options, 100, 50000)) {
long initBatchSize = writeBatchWrapper.getDataSize();
byte[] dummy = new byte[2];
ThreadLocalRandom.current().nextBytes(dummy);
for (int i = 1; i < 100; ++i) {
writeBatchWrapper.put(handle, dummy, dummy);
// each kv consumes 8 bytes
assertEquals(initBatchSize + 8 * i, writeBatchWrapper.getDataSize());
}
writeBatchWrapper.put(handle, dummy, dummy);
assertEquals(initBatchSize, writeBatchWrapper.getDataSize());
}
} | Tests that {@link RocksDBWriteBatchWrapper} flushes after the kv count exceeds the
preconfigured value. | testWriteBatchWrapperFlushAfterCountExceed | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | Apache-2.0 |
@Test
public void testDefaultWriteOptionsHaveDisabledWAL() throws Exception {
WriteOptions options;
try (RocksDB db = RocksDB.open(folder.newFolder().getAbsolutePath());
RocksDBWriteBatchWrapper writeBatchWrapper =
new RocksDBWriteBatchWrapper(db, null, 200, 50)) {
options = writeBatchWrapper.getOptions();
assertTrue(options.isOwningHandle());
assertTrue(options.disableWAL());
}
assertFalse(options.isOwningHandle());
} | Test that {@link RocksDBWriteBatchWrapper} creates default {@link WriteOptions} with disabled
WAL and closes them correctly. | testDefaultWriteOptionsHaveDisabledWAL | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | Apache-2.0 |
@Test
public void testNotClosingPassedInWriteOption() throws Exception {
try (WriteOptions passInOption = new WriteOptions().setDisableWAL(false)) {
try (RocksDB db = RocksDB.open(folder.newFolder().getAbsolutePath());
RocksDBWriteBatchWrapper writeBatchWrapper =
new RocksDBWriteBatchWrapper(db, passInOption, 200, 50)) {
WriteOptions options = writeBatchWrapper.getOptions();
assertTrue(options.isOwningHandle());
assertFalse(options.disableWAL());
}
assertTrue(passInOption.isOwningHandle());
}
} | Test that {@link RocksDBWriteBatchWrapper} respects passed in {@link WriteOptions} and does
not close them. | testNotClosingPassedInWriteOption | java | apache/flink | flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/state/rocksdb/RocksDBWriteBatchWrapperTest.java | Apache-2.0 |
private static <IN, OUT> SingleOutputStreamOperator<OUT> addOperator(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
int bufSize,
OutputMode mode,
AsyncRetryStrategy<OUT> asyncRetryStrategy) {
if (asyncRetryStrategy != NO_RETRY_STRATEGY) {
Preconditions.checkArgument(
timeout > 0, "Timeout should be configured when do async with retry.");
}
TypeInformation<OUT> outTypeInfo =
TypeExtractor.getUnaryOperatorReturnType(
func,
AsyncFunction.class,
0,
1,
new int[] {1, 0},
in.getType(),
Utils.getCallLocationName(),
true);
// create transform
AsyncWaitOperatorFactory<IN, OUT> operatorFactory =
new AsyncWaitOperatorFactory<>(
in.getExecutionEnvironment().clean(func),
timeout,
bufSize,
mode,
asyncRetryStrategy);
return in.transform("async wait operator", outTypeInfo, operatorFactory);
} | Add an AsyncWaitOperator.
@param in The {@link DataStream} where the {@link AsyncWaitOperator} will be added.
@param func {@link AsyncFunction} wrapped inside {@link AsyncWaitOperator}.
@param timeout for the asynchronous operation to complete
@param bufSize The max number of inputs the {@link AsyncWaitOperator} can hold inside.
@param mode Processing mode for {@link AsyncWaitOperator}.
@param asyncRetryStrategy AsyncRetryStrategy for {@link AsyncFunction}.
@param <IN> Input type.
@param <OUT> Output type.
@return A new {@link SingleOutputStreamOperator} | addOperator | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> unorderedWait(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
int capacity) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
capacity,
OutputMode.UNORDERED,
NO_RETRY_STRATEGY);
} | Adds an AsyncWaitOperator. The order of output stream records may be reordered.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout for the asynchronous operation to complete
@param timeUnit of the given timeout
@param capacity The max number of async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | unorderedWait | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
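The ordered/unordered distinction these operators make can be illustrated without Flink. The sketch below (hypothetical helper, not Flink's API) shows the "ordered" contract with plain `CompletableFuture`s: all async calls are issued first, then joined in submission order, so results come out in input order even when completions arrive in a different order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class AsyncOrderSketch {

    // Applies an async function to every input and returns results in input
    // order, regardless of the order in which the futures complete -- the
    // contract of an "ordered" async wait.
    public static <I, O> List<O> orderedWait(
            List<I> inputs, Function<I, CompletableFuture<O>> fn) {
        List<CompletableFuture<O>> futures = new ArrayList<>();
        for (I in : inputs) {
            futures.add(fn.apply(in)); // fire all requests up front
        }
        List<O> out = new ArrayList<>();
        for (CompletableFuture<O> f : futures) {
            out.add(f.join()); // joining in submission order restores input order
        }
        return out;
    }
}
```

An "unordered" variant would instead emit each result as its future completes, trading output order for lower latency on fast completions.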
public static <IN, OUT> SingleOutputStreamOperator<OUT> unorderedWait(
DataStream<IN> in, AsyncFunction<IN, OUT> func, long timeout, TimeUnit timeUnit) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
DEFAULT_QUEUE_CAPACITY,
OutputMode.UNORDERED,
NO_RETRY_STRATEGY);
} | Adds an AsyncWaitOperator. The order of output stream records may be reordered.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout for the asynchronous operation to complete
@param timeUnit of the given timeout
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | unorderedWait | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> orderedWait(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
int capacity) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
capacity,
OutputMode.ORDERED,
NO_RETRY_STRATEGY);
} | Adds an AsyncWaitOperator. The order to process input records is guaranteed to be the same as
the input ones.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout for the asynchronous operation to complete
@param timeUnit of the given timeout
@param capacity The max number of async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | orderedWait | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> orderedWait(
DataStream<IN> in, AsyncFunction<IN, OUT> func, long timeout, TimeUnit timeUnit) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
DEFAULT_QUEUE_CAPACITY,
OutputMode.ORDERED,
NO_RETRY_STRATEGY);
} | Adds an AsyncWaitOperator. The order to process input records is guaranteed to be the same as
the input ones.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout for the asynchronous operation to complete
@param timeUnit of the given timeout
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | orderedWait | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> unorderedWaitWithRetry(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
AsyncRetryStrategy<OUT> asyncRetryStrategy) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
DEFAULT_QUEUE_CAPACITY,
OutputMode.UNORDERED,
asyncRetryStrategy);
} | Adds an AsyncWaitOperator with an AsyncRetryStrategy to support retry of AsyncFunction. The
order of output stream records may be reordered.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout from first invoke to final completion of asynchronous operation, may include
multiple retries, and will be reset in case of restart
@param timeUnit of the given timeout
@param asyncRetryStrategy The strategy of reattempt async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | unorderedWaitWithRetry | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> unorderedWaitWithRetry(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
int capacity,
AsyncRetryStrategy<OUT> asyncRetryStrategy) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
capacity,
OutputMode.UNORDERED,
asyncRetryStrategy);
} | Adds an AsyncWaitOperator with an AsyncRetryStrategy to support retry of AsyncFunction. The
order of output stream records may be reordered.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout from first invoke to final completion of asynchronous operation, may include
multiple retries, and will be reset in case of restart
@param timeUnit of the given timeout
@param capacity The max number of async i/o operation that can be triggered
@param asyncRetryStrategy The strategy of reattempt async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | unorderedWaitWithRetry | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> orderedWaitWithRetry(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
AsyncRetryStrategy<OUT> asyncRetryStrategy) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
DEFAULT_QUEUE_CAPACITY,
OutputMode.ORDERED,
asyncRetryStrategy);
} | Adds an AsyncWaitOperator with an AsyncRetryStrategy to support retry of AsyncFunction. The
order to process input records is guaranteed to be the same as the input ones.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout from first invoke to final completion of asynchronous operation, may include
multiple retries, and will be reset in case of restart
@param timeUnit of the given timeout
@param asyncRetryStrategy The strategy of reattempt async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | orderedWaitWithRetry | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <IN, OUT> SingleOutputStreamOperator<OUT> orderedWaitWithRetry(
DataStream<IN> in,
AsyncFunction<IN, OUT> func,
long timeout,
TimeUnit timeUnit,
int capacity,
AsyncRetryStrategy<OUT> asyncRetryStrategy) {
return addOperator(
in,
func,
timeUnit.toMillis(timeout),
capacity,
OutputMode.ORDERED,
asyncRetryStrategy);
} | Adds an AsyncWaitOperator with an AsyncRetryStrategy to support retry of AsyncFunction. The
order to process input records is guaranteed to be the same as the input ones.
@param in Input {@link DataStream}
@param func {@link AsyncFunction}
@param timeout from first invoke to final completion of asynchronous operation, may include
multiple retries, and will be reset in case of restart
@param timeUnit of the given timeout
@param capacity The max number of async i/o operation that can be triggered
@param asyncRetryStrategy The strategy of reattempt async i/o operation that can be triggered
@param <IN> Type of input record
@param <OUT> Type of output record
@return A new {@link SingleOutputStreamOperator}. | orderedWaitWithRetry | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/AsyncDataStream.java | Apache-2.0 |
public static <T, K> KeyedStream<T, K> reinterpretAsKeyedStream(
DataStream<T> stream, KeySelector<T, K> keySelector) {
return reinterpretAsKeyedStream(
stream,
keySelector,
TypeExtractor.getKeySelectorTypes(keySelector, stream.getType()));
} | Reinterprets the given {@link DataStream} as a {@link KeyedStream}, which extracts keys with
the given {@link KeySelector}.
<p>IMPORTANT: For every partition of the base stream, the keys of events in the base stream
must be partitioned exactly in the same way as if it was created through a {@link
DataStream#keyBy(KeySelector)}.
@param stream The data stream to reinterpret. For every partition, this stream must be
partitioned exactly in the same way as if it was created through a {@link
DataStream#keyBy(KeySelector)}.
@param keySelector Function that defines how keys are extracted from the data stream.
@param <T> Type of events in the data stream.
@param <K> Type of the extracted keys.
@return The reinterpretation of the {@link DataStream} as a {@link KeyedStream}. | reinterpretAsKeyedStream | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamUtils.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamUtils.java | Apache-2.0 |
public static <T, K> KeyedStream<T, K> reinterpretAsKeyedStream(
DataStream<T> stream, KeySelector<T, K> keySelector, TypeInformation<K> typeInfo) {
PartitionTransformation<T> partitionTransformation =
new PartitionTransformation<>(
stream.getTransformation(), new ForwardPartitioner<>());
return new KeyedStream<>(stream, partitionTransformation, keySelector, typeInfo);
} | Reinterprets the given {@link DataStream} as a {@link KeyedStream}, which extracts keys with
the given {@link KeySelector}.
<p>IMPORTANT: For every partition of the base stream, the keys of events in the base stream
must be partitioned exactly in the same way as if it was created through a {@link
DataStream#keyBy(KeySelector)}.
@param stream The data stream to reinterpret. For every partition, this stream must be
partitioned exactly in the same way as if it was created through a {@link
DataStream#keyBy(KeySelector)}.
@param keySelector Function that defines how keys are extracted from the data stream.
@param typeInfo Explicit type information about the key type.
@param <T> Type of events in the data stream.
@param <K> Type of the extracted keys.
@return The reinterpretation of the {@link DataStream} as a {@link KeyedStream}. | reinterpretAsKeyedStream | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamUtils.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamUtils.java | Apache-2.0 |
default void timeout(IN input, ResultFuture<OUT> resultFuture) throws Exception {
resultFuture.completeExceptionally(
new TimeoutException("Async function call has timed out."));
} | Called when an {@link AsyncFunction#asyncInvoke} call times out. By default, the result future is
exceptionally completed with a timeout exception.
@param input element coming from an upstream task
@param resultFuture to be completed with the result data | timeout | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/async/AsyncFunction.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/async/AsyncFunction.java | Apache-2.0 |
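The default timeout behavior above, exceptionally completing the pending result, maps naturally onto `CompletableFuture.completeExceptionally`. A minimal sketch (hypothetical helper, not Flink's `ResultFuture`; assumes a caller-provided timer thread):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutSketch {

    // Completes the pending result exceptionally if it has not been completed
    // within the given timeout, mirroring AsyncFunction's default behavior.
    // completeExceptionally is a no-op on an already-completed future, so a
    // result that arrives in time simply wins.
    public static <T> CompletableFuture<T> withTimeout(
            CompletableFuture<T> result, long timeoutMs, ScheduledExecutorService timer) {
        timer.schedule(
                () ->
                        result.completeExceptionally(
                                new TimeoutException("Async function call has timed out.")),
                timeoutMs,
                TimeUnit.MILLISECONDS);
        return result;
    }
}
```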
@Override
public void setRuntimeContext(RuntimeContext runtimeContext) {
Preconditions.checkNotNull(runtimeContext);
if (runtimeContext instanceof IterationRuntimeContext) {
super.setRuntimeContext(
new RichAsyncFunctionIterationRuntimeContext(
(IterationRuntimeContext) runtimeContext));
} else {
super.setRuntimeContext(new RichAsyncFunctionRuntimeContext(runtimeContext));
}
} | Rich variant of the {@link AsyncFunction}. As a {@link RichFunction}, it gives access to the
{@link RuntimeContext} and provides setup and teardown methods: {@link
RichFunction#open(OpenContext)} and {@link RichFunction#close()}.
<p>State-related APIs in {@link RuntimeContext} are not supported yet because the key may get
changed while accessing states in the working thread.
<p>{@link IterationRuntimeContext#getIterationAggregator(String)} is not supported since the
aggregator may be modified by multiple threads.
@param <IN> The type of the input elements.
@param <OUT> The type of the returned elements. | setRuntimeContext | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/async/RichAsyncFunction.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/async/RichAsyncFunction.java | Apache-2.0 |
static <IN, BucketID> Bucket<IN, BucketID> getNew(
final int subtaskIndex,
final BucketID bucketId,
final Path bucketPath,
final long initialPartCounter,
final BucketWriter<IN, BucketID> bucketWriter,
final RollingPolicy<IN, BucketID> rollingPolicy,
@Nullable final FileLifeCycleListener<BucketID> fileListener,
final OutputFileConfig outputFileConfig) {
return new Bucket<>(
subtaskIndex,
bucketId,
bucketPath,
initialPartCounter,
bucketWriter,
rollingPolicy,
fileListener,
outputFileConfig);
} | Creates a new empty {@code Bucket}.
@param subtaskIndex the index of the subtask creating the bucket.
@param bucketId the identifier of the bucket, as returned by the {@link BucketAssigner}.
@param bucketPath the path to where the part files for the bucket will be written to.
@param initialPartCounter the initial counter for the part files of the bucket.
@param bucketWriter the {@link BucketWriter} used to write part files in the bucket.
@param rollingPolicy the policy based on which a bucket rolls its currently open part file
and opens a new one.
@param fileListener the listener about the status of file.
@param <IN> the type of input elements to the sink.
@param <BucketID> the type of the identifier of the bucket, as returned by the {@link
BucketAssigner}
@param outputFileConfig the part file configuration.
@return The new Bucket. | getNew | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java | Apache-2.0 |
static <IN, BucketID> Bucket<IN, BucketID> restore(
final int subtaskIndex,
final long initialPartCounter,
final BucketWriter<IN, BucketID> bucketWriter,
final RollingPolicy<IN, BucketID> rollingPolicy,
final BucketState<BucketID> bucketState,
@Nullable final FileLifeCycleListener<BucketID> fileListener,
final OutputFileConfig outputFileConfig)
throws IOException {
return new Bucket<>(
subtaskIndex,
initialPartCounter,
bucketWriter,
rollingPolicy,
bucketState,
fileListener,
outputFileConfig);
} | Restores a {@code Bucket} from the state included in the provided {@link BucketState}.
@param subtaskIndex the index of the subtask creating the bucket.
@param initialPartCounter the initial counter for the part files of the bucket.
@param bucketWriter the {@link BucketWriter} used to write part files in the bucket.
@param rollingPolicy the policy based on which a bucket rolls its currently open part file
and opens a new one.
@param bucketState the initial state of the restored bucket.
@param fileListener the listener about the status of file.
@param <IN> the type of input elements to the sink.
@param <BucketID> the type of the identifier of the bucket, as returned by the {@link
BucketAssigner}
@param outputFileConfig the part file configuration.
@return The restored Bucket. | restore | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/Bucket.java | Apache-2.0 |
@Internal
public static <IN> StreamingFileSink.DefaultRowFormatBuilder<IN> forRowFormat(
final Path basePath, final Encoder<IN> encoder) {
return new DefaultRowFormatBuilder<>(basePath, encoder, new DateTimeBucketAssigner<>());
} | Creates the builder for a {@link StreamingFileSink} with row-encoding format.
@param basePath the base path where all the buckets are going to be created as
sub-directories.
@param encoder the {@link Encoder} to be used when writing elements in the buckets.
@param <IN> the type of incoming elements
@return The builder where the remainder of the configuration parameters for the sink can be
configured. In order to instantiate the sink, call {@link RowFormatBuilder#build()} after
specifying the desired parameters. | forRowFormat | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/legacy/StreamingFileSink.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/legacy/StreamingFileSink.java | Apache-2.0 |
@Internal
public static <IN> StreamingFileSink.DefaultBulkFormatBuilder<IN> forBulkFormat(
final Path basePath, final BulkWriter.Factory<IN> writerFactory) {
return new StreamingFileSink.DefaultBulkFormatBuilder<>(
basePath, writerFactory, new DateTimeBucketAssigner<>());
} | Creates the builder for a {@link StreamingFileSink} with bulk-encoding format.
@param basePath the base path where all the buckets are going to be created as
sub-directories.
@param writerFactory the {@link BulkWriter.Factory} to be used when writing elements in the
buckets.
@param <IN> the type of incoming elements
@return The builder where the remainder of the configuration parameters for the sink can be
configured. In order to instantiate the sink, call {@link BulkFormatBuilder#build()}
after specifying the desired parameters. | forBulkFormat | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/legacy/StreamingFileSink.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/legacy/StreamingFileSink.java | Apache-2.0 |
private TransactionHolder<TXN> beginTransactionInternal() throws Exception {
return new TransactionHolder<>(beginTransaction(), clock.millis());
} | This method must be the only place to call {@link #beginTransaction()} to ensure that the
{@link TransactionHolder} is created at the same time. | beginTransactionInternal | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/legacy/TwoPhaseCommitSinkFunction.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/legacy/TwoPhaseCommitSinkFunction.java | Apache-2.0 |
protected TwoPhaseCommitSinkFunction<IN, TXN, CONTEXT> setTransactionTimeout(
long transactionTimeout) {
checkArgument(transactionTimeout >= 0, "transactionTimeout must not be negative");
this.transactionTimeout = transactionTimeout;
return this;
} | Sets the transaction timeout. Setting only the transaction timeout has no effect in itself.
@param transactionTimeout The transaction timeout in ms.
@see #ignoreFailuresAfterTransactionTimeout()
@see #enableTransactionTimeoutWarnings(double) | setTransactionTimeout | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/legacy/TwoPhaseCommitSinkFunction.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/legacy/TwoPhaseCommitSinkFunction.java | Apache-2.0 |
public long getCheckpointId() {
return checkpointId;
} | Gets the checkpointId of the checkpoint.
@return The checkpointId of the checkpoint. | getCheckpointId | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
public byte[] getSerializedData() {
return serializedData;
} | Gets the binary data for the serialized elements.
@return The binary data for the serialized elements. | getSerializedData | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
public int getNumIds() {
return numIds;
} | Gets the number of IDs in the checkpoint.
@return The number of IDs in the checkpoint. | getNumIds | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
public static <T> SerializedCheckpointData[] fromDeque(
ArrayDeque<Tuple2<Long, Set<T>>> checkpoints, TypeSerializer<T> serializer)
throws IOException {
return fromDeque(checkpoints, serializer, new DataOutputSerializer(128));
} | Converts a list of checkpoints with elements into an array of SerializedCheckpointData.
@param checkpoints The checkpoints to be converted into SerializedCheckpointData.
@param serializer The serializer to serialize the IDs.
@param <T> The type of the ID.
@return An array of serializable SerializedCheckpointData, one per entry in the queue.
@throws IOException Thrown, if the serialization fails. | fromDeque | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
public static <T> SerializedCheckpointData[] fromDeque(
ArrayDeque<Tuple2<Long, Set<T>>> checkpoints,
TypeSerializer<T> serializer,
DataOutputSerializer outputBuffer)
throws IOException {
SerializedCheckpointData[] serializedCheckpoints =
new SerializedCheckpointData[checkpoints.size()];
int pos = 0;
for (Tuple2<Long, Set<T>> checkpoint : checkpoints) {
outputBuffer.clear();
Set<T> checkpointIds = checkpoint.f1;
for (T id : checkpointIds) {
serializer.serialize(id, outputBuffer);
}
serializedCheckpoints[pos++] =
new SerializedCheckpointData(
checkpoint.f0, outputBuffer.getCopyOfBuffer(), checkpointIds.size());
}
return serializedCheckpoints;
} | Converts a list of checkpoints into an array of SerializedCheckpointData.
@param checkpoints The checkpoints to be converted into IdsCheckpointData.
@param serializer The serializer to serialize the IDs.
@param outputBuffer The reusable serialization buffer.
@param <T> The type of the ID.
@return An array of serializable SerializedCheckpointData, one per entry in the queue.
@throws IOException Thrown, if the serialization fails. | fromDeque | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
public static <T> ArrayDeque<Tuple2<Long, Set<T>>> toDeque(
SerializedCheckpointData[] data, TypeSerializer<T> serializer) throws IOException {
ArrayDeque<Tuple2<Long, Set<T>>> deque = new ArrayDeque<>(data.length);
DataInputDeserializer deser = null;
for (SerializedCheckpointData checkpoint : data) {
byte[] serializedData = checkpoint.getSerializedData();
if (deser == null) {
deser = new DataInputDeserializer(serializedData, 0, serializedData.length);
} else {
deser.setBuffer(serializedData);
}
final Set<T> ids = CollectionUtil.newHashSetWithExpectedSize(checkpoint.getNumIds());
final int numIds = checkpoint.getNumIds();
for (int i = 0; i < numIds; i++) {
ids.add(serializer.deserialize(deser));
}
deque.addLast(new Tuple2<Long, Set<T>>(checkpoint.checkpointId, ids));
}
return deque;
} | De-serializes an array of SerializedCheckpointData back into an ArrayDeque of element
checkpoints.
@param data The data to be deserialized.
@param serializer The serializer used to deserialize the data.
@param <T> The type of the elements.
@return An ArrayDeque of element checkpoints.
@throws IOException Thrown, if the serialization fails. | toDeque | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/SerializedCheckpointData.java | Apache-2.0 |
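The fromDeque/toDeque pair above serializes a checkpoint's IDs into a byte[] (with the count stored separately) and reads them back. A simplified round-trip of the same idea, using plain java.io streams in place of Flink's DataOutputSerializer/DataInputDeserializer (the class and method names here are hypothetical), could look like:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for fromDeque/toDeque above: the IDs of one checkpoint
// are written to a byte[] (count kept separately) and read back.
public class CheckpointRoundTrip {

    // Serialize each ID as a long, mirroring serializer.serialize(id, outputBuffer).
    public static byte[] serialize(Set<Long> ids) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            for (long id : ids) {
                out.writeLong(id);
            }
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Read back exactly numIds longs, mirroring the deserialization loop in toDeque.
    public static Set<Long> deserialize(byte[] data, int numIds) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            Set<Long> ids = new HashSet<>();
            for (int i = 0; i < numIds; i++) {
                ids.add(in.readLong());
            }
            return ids;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

As in the original, the element count must travel alongside the bytes, since the serialized form itself carries no length prefix.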
@Override
public void handleViolation(long elementTimestamp, long lastTimestamp) {} | Handler that does nothing when timestamp monotony is violated. | handleViolation | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | Apache-2.0 |
@Override
public void handleViolation(long elementTimestamp, long lastTimestamp) {
throw new RuntimeException(
"Ascending timestamps condition violated. Element timestamp "
+ elementTimestamp
+ " is smaller than last timestamp "
+ lastTimestamp);
} | Handler that fails the program when timestamp monotony is violated. | handleViolation | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | Apache-2.0 |
@Override
public void handleViolation(long elementTimestamp, long lastTimestamp) {
LOG.warn("Timestamp monotony violated: {} < {}", elementTimestamp, lastTimestamp);
} | Handler that only logs violations of timestamp monotony, on WARN log level. | handleViolation | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/timestamps/AscendingTimestampExtractor.java | Apache-2.0 |
@Override
public double getNestedDelta(double[] oldDataPoint, double[] newDataPoint) {
double result = 0;
for (int i = 0; i < oldDataPoint.length; i++) {
result += (oldDataPoint[i] - newDataPoint[i]) * (oldDataPoint[i] - newDataPoint[i]);
}
return Math.sqrt(result);
} | This delta function calculates the euclidean distance between two given points.
<p>Euclidean distance: http://en.wikipedia.org/wiki/Euclidean_distance
@param <DATA> The input data type. This delta function works with a double[], but can
extract/convert to it from any other given object in case the respective extractor has been
set. See {@link ExtractionAwareDeltaFunction} for more information. | getNestedDelta | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/EuclideanDistance.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/EuclideanDistance.java | Apache-2.0 |
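The EuclideanDistance row above computes the straight-line distance between two points as the square root of the sum of squared per-coordinate differences. A standalone sketch of the same computation (hypothetical class name, no Flink dependencies):

```java
// Standalone sketch of the Euclidean distance computed by getNestedDelta above.
public class EuclideanDistanceSketch {

    // sqrt of the sum of squared per-coordinate differences;
    // assumes both arrays have the same length, as in the original.
    public static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```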
@SuppressWarnings("unchecked")
@Override
public double getDelta(DATA oldDataPoint, DATA newDataPoint) {
if (converter == null) {
// In case no conversion/extraction is required, we can cast DATA to TO.
// Therefore, the "unchecked" warning is suppressed for this method.
return getNestedDelta((TO) oldDataPoint, (TO) newDataPoint);
} else {
return getNestedDelta(converter.extract(oldDataPoint), converter.extract(newDataPoint));
}
} | This method takes the two data points and runs the configured extractor on them. The delta function
implemented at {@link #getNestedDelta} is then called with the extracted data. In case no
extractor is set, the input data gets passed to {@link #getNestedDelta} as-is. The return
value is just forwarded from {@link #getNestedDelta}.
@param oldDataPoint the older data point as raw data (before extraction).
@param newDataPoint the new data point as raw data (before extraction).
@return the delta between the two points. | getDelta | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/ExtractionAwareDeltaFunction.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/ExtractionAwareDeltaFunction.java | Apache-2.0 |
@Override
public Object[] extract(Tuple in) {
Object[] output;
if (order == null) {
// copy the whole tuple
output = new Object[in.getArity()];
for (int i = 0; i < in.getArity(); i++) {
output[i] = in.getField(i);
}
} else {
// copy user specified order
output = new Object[order.length];
for (int i = 0; i < order.length; i++) {
output[i] = in.getField(order[i]);
}
}
return output;
} | Using this constructor, the extractor combines the fields specified in the indexes
parameter into an object array.
@param indexes the field ids (enumerated from 0) | extract | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/ArrayFromTuple.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/ArrayFromTuple.java | Apache-2.0 |
@SuppressWarnings("unchecked")
@Override
public OUT extract(Object in) {
return (OUT) Array.get(in, fieldId);
} | Extracts the field with the given id from the array.
@param fieldId The id of the field which will be extracted from the array. | extract | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldFromArray.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldFromArray.java | Apache-2.0 |
@Override
public OUT extract(Tuple in) {
return in.getField(fieldId);
} | Extracts the field with the given id from the tuple.
@param fieldId The id of the field which will be extracted from the tuple. | extract | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldFromTuple.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldFromTuple.java | Apache-2.0 |
@Override
public double[] extract(Tuple in) {
double[] out = new double[indexes.length];
for (int i = 0; i < indexes.length; i++) {
out[i] = (Double) in.getField(indexes[i]);
}
return out;
} | Extracts one or more fields of the type Double from a tuple and puts them into a new double[]
(in the specified order).
@param indexes The indexes of the fields to be extracted. | extract | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java | Apache-2.0 |
private void outputCompletedElement() {
if (queue.hasCompletedElements()) {
// emit only one element to not block the mailbox thread unnecessarily
queue.emitCompletedElement(timestampedCollector);
// if there are more completed elements, emit them with subsequent mails
if (queue.hasCompletedElements()) {
try {
mailboxExecutor.execute(
this::outputCompletedElement,
"AsyncWaitOperator#outputCompletedElement");
} catch (RejectedExecutionException mailboxClosedException) {
// This exception can only happen if the operator is cancelled which means all
// pending records can be safely ignored since they will be processed one more
// time after recovery.
LOG.debug(
"Attempt to complete element is ignored since the mailbox rejected the execution.",
mailboxClosedException);
}
}
}
} | Outputs one completed element. Watermarks are always completed if it's their turn to be
processed.
<p>This method will be called from {@link #processWatermark(Watermark)} and from a mail
processing the result of an async function call. | outputCompletedElement | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | Apache-2.0 |
private void tryOnce(RetryableResultHandlerDelegator resultHandlerDelegator) throws Exception {
// increment current attempt number
resultHandlerDelegator.currentAttempts++;
// fire a new attempt
userFunction.asyncInvoke(
resultHandlerDelegator.resultHandler.inputRecord.getValue(),
resultHandlerDelegator);
} | Increments number of attempts and fire the attempt. | tryOnce | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | Apache-2.0 |
private ScheduledFuture<?> registerTimer(
ProcessingTimeService processingTimeService,
long timeout,
ThrowingConsumer<Void, Exception> callback) {
final long timeoutTimestamp = timeout + processingTimeService.getCurrentProcessingTime();
return processingTimeService.registerTimer(
timeoutTimestamp, timestamp -> callback.accept(null));
} | Utility method to register timeout timer. | registerTimer | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperator.java | Apache-2.0 |
@Override
public void complete(Collection<OUT> result) {
super.complete(result);
segment.completed(this);
} | An entry that notifies the respective segment upon completion. | complete | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | Apache-2.0 |
void completed(StreamElementQueueEntry<OUT> elementQueueEntry) {
// Add to the completed queue only if the entry was not completed before:
// a real result may arrive after a timeout result; it updates the queue
// entry, but the entry is not re-added to the completed queue.
if (incompleteElements.remove(elementQueueEntry)) {
completedElements.add(elementQueueEntry);
}
} | Signals that an entry finished computation. | completed | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | Apache-2.0 |
boolean isEmpty() {
return incompleteElements.isEmpty() && completedElements.isEmpty();
} | True if there are no incomplete elements and all complete elements have been consumed. | isEmpty | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | Apache-2.0 |
int emitCompleted(TimestampedCollector<OUT> output) {
final StreamElementQueueEntry<OUT> completedEntry = completedElements.poll();
if (completedEntry == null) {
return 0;
}
completedEntry.emitResult(output);
return 1;
} | Pops one completed element into the given output. Because an input element may produce
an arbitrary number of output elements, there is no correlation between the size of the
collection and the popped elements.
@return the number of popped input elements. | emitCompleted | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | Apache-2.0 |
void add(StreamElementQueueEntry<OUT> queueEntry) {
if (queueEntry.isDone()) {
completedElements.add(queueEntry);
} else {
incompleteElements.add(queueEntry);
}
} | Adds the given entry to this segment. If the element is completed (watermark), it is
directly moved into the completed queue. | add | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueue.java | Apache-2.0 |
public static <T, W extends Window> DeltaEvictor<T, W> of(
double threshold, DeltaFunction<T> deltaFunction) {
return new DeltaEvictor<>(threshold, deltaFunction);
} | Creates a {@code DeltaEvictor} from the given threshold and {@code DeltaFunction}. Eviction
is done before the window function.
@param threshold The threshold
@param deltaFunction The {@code DeltaFunction} | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/DeltaEvictor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/DeltaEvictor.java | Apache-2.0 |
public static <T, W extends Window> DeltaEvictor<T, W> of(
double threshold, DeltaFunction<T> deltaFunction, boolean doEvictAfter) {
return new DeltaEvictor<>(threshold, deltaFunction, doEvictAfter);
} | Creates a {@code DeltaEvictor} from the given threshold and {@code DeltaFunction}. Eviction is
done before/after the window function based on the value of doEvictAfter.
@param threshold The threshold
@param deltaFunction The {@code DeltaFunction}
@param doEvictAfter Whether eviction should be done after window function | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/DeltaEvictor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/DeltaEvictor.java | Apache-2.0 |
private boolean hasTimestamp(Iterable<TimestampedValue<Object>> elements) {
Iterator<TimestampedValue<Object>> it = elements.iterator();
if (it.hasNext()) {
return it.next().hasTimestamp();
}
return false;
} | Returns true if the first element in the Iterable of {@link TimestampedValue} has a
timestamp. | hasTimestamp | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | Apache-2.0 |
public static <W extends Window> TimeEvictor<W> of(Duration windowSize) {
return new TimeEvictor<>(windowSize.toMillis());
} | Creates a {@code TimeEvictor} that keeps elements for the given amount of time. Eviction is done
before the window function.
@param windowSize The amount of time for which to keep elements. | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | Apache-2.0 |
public static <W extends Window> TimeEvictor<W> of(Duration windowSize, boolean doEvictAfter) {
return new TimeEvictor<>(windowSize.toMillis(), doEvictAfter);
} | Creates a {@code TimeEvictor} that keeps elements for the given amount of time. Eviction is done
before/after the window function based on the value of doEvictAfter.
@param windowSize The amount of time for which to keep elements.
@param doEvictAfter Whether eviction is done after window function. | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/evictors/TimeEvictor.java | Apache-2.0 |
public static <T, W extends Window> DeltaTrigger<T, W> of(
double threshold, DeltaFunction<T> deltaFunction, TypeSerializer<T> stateSerializer) {
return new DeltaTrigger<>(threshold, deltaFunction, stateSerializer);
} | Creates a delta trigger from the given threshold and {@code DeltaFunction}.
@param threshold The threshold at which to trigger.
@param deltaFunction The delta function to use
@param stateSerializer TypeSerializer for the data elements.
@param <T> The type of elements on which this trigger can operate.
@param <W> The type of {@link Window Windows} on which this trigger can operate. | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/DeltaTrigger.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/DeltaTrigger.java | Apache-2.0 |
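The DeltaTrigger row above fires when the delta between the last triggering data point and the new point exceeds a threshold. A standalone sketch of just that decision logic (hypothetical class, plain double points instead of Flink's partitioned state):

```java
// Standalone sketch of the delta-threshold decision made by DeltaTrigger above.
public class DeltaThreshold {

    public interface DeltaFunction<T> {
        double getDelta(T oldPoint, T newPoint);
    }

    private final double threshold;
    private final DeltaFunction<Double> deltaFunction;
    private Double lastPoint; // null until the first element arrives

    public DeltaThreshold(double threshold, DeltaFunction<Double> deltaFunction) {
        this.threshold = threshold;
        this.deltaFunction = deltaFunction;
    }

    /** Returns true if the new point should fire; the reference point then advances. */
    public boolean onElement(double newPoint) {
        if (lastPoint == null) {
            // First element becomes the reference point without firing.
            lastPoint = newPoint;
            return false;
        }
        if (deltaFunction.getDelta(lastPoint, newPoint) > threshold) {
            lastPoint = newPoint;
            return true;
        }
        return false;
    }
}
```

Note the reference point only advances on a firing, so small drifts accumulate against the same baseline until the threshold is crossed.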