code stringlengths 25 201k | docstring stringlengths 19 96.2k | func_name stringlengths 0 235 | language stringclasses 1 value | repo stringlengths 8 51 | path stringlengths 11 314 | url stringlengths 62 377 | license stringclasses 7 values |
|---|---|---|---|---|---|---|---|
@SuppressWarnings("unchecked")
public static <T, SD extends ShuffleDescriptor> T applyWithShuffleTypeCheck(
Class<SD> shuffleDescriptorClass,
ShuffleDescriptor shuffleDescriptor,
Function<UnknownShuffleDescriptor, T> functionOfUnknownDescriptor,
Function<SD, T> functionOfKnownDescriptor) {
if (shuffleDescriptor.isUnknown()) {
return functionOfUnknownDescriptor.apply((UnknownShuffleDescriptor) shuffleDescriptor);
} else if (shuffleDescriptorClass.equals(shuffleDescriptor.getClass())) {
return functionOfKnownDescriptor.apply((SD) shuffleDescriptor);
} else {
throw new IllegalArgumentException(
String.format(
"Unsupported ShuffleDescriptor type <%s>, only <%s> is supported",
shuffleDescriptor.getClass().getName(),
shuffleDescriptorClass.getName()));
}
} | Apply different functions to known and unknown {@link ShuffleDescriptor}s.
<p>Also casts known {@link ShuffleDescriptor}.
@param shuffleDescriptorClass concrete class of {@code shuffleDescriptor}
@param shuffleDescriptor concrete shuffle descriptor to check
@param functionOfUnknownDescriptor function to call in case {@code shuffleDescriptor} is
unknown
@param functionOfKnownDescriptor function to call in case {@code shuffleDescriptor} is known
@param <T> return type of called functions
@param <SD> concrete type of {@code shuffleDescriptor} to check
@return result of either function call | applyWithShuffleTypeCheck | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleUtils.java | Apache-2.0 |
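The row above shows a generic check-and-cast dispatch helper: unknown descriptors take one code path, an exact class match takes another, and anything else fails fast. A minimal, self-contained sketch of the same pattern follows; the descriptor classes here are hypothetical stand-ins, not Flink's actual `ShuffleDescriptor` types.

```java
import java.util.function.Function;

public class TypeCheckDemo {
    interface Descriptor {
        boolean isUnknown();
    }

    static class UnknownDescriptor implements Descriptor {
        public boolean isUnknown() { return true; }
    }

    static class NettyDescriptor implements Descriptor {
        public boolean isUnknown() { return false; }
    }

    @SuppressWarnings("unchecked")
    static <T, D extends Descriptor> T applyWithTypeCheck(
            Class<D> knownClass,
            Descriptor descriptor,
            Function<UnknownDescriptor, T> onUnknown,
            Function<D, T> onKnown) {
        if (descriptor.isUnknown()) {
            // Unknown descriptors get their own code path.
            return onUnknown.apply((UnknownDescriptor) descriptor);
        } else if (knownClass.equals(descriptor.getClass())) {
            // Exact class match: the unchecked cast to D is safe here.
            return onKnown.apply((D) descriptor);
        }
        throw new IllegalArgumentException(
                "Unsupported descriptor type " + descriptor.getClass().getName());
    }

    public static void main(String[] args) {
        String result = applyWithTypeCheck(
                NettyDescriptor.class, new NettyDescriptor(), u -> "unknown", k -> "known");
        System.out.println(result); // prints "known"
    }
}
```

Note that `getClass().equals(...)` demands an exact class match; a subclass of the known descriptor would hit the exception branch, which mirrors the strictness of the original helper.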
@Override
public Optional<ResourceProfile> match(
ResourceProfile resourceProfile,
ResourceCounter totalRequirements,
Function<ResourceProfile, Integer> numAssignedResourcesLookup) {
// Short-cut for fine-grained resource management. If there is already an exactly
// equal requirement, we can directly match with it.
if (totalRequirements.getResourceCount(resourceProfile)
> numAssignedResourcesLookup.apply(resourceProfile)) {
return Optional.of(resourceProfile);
}
for (Map.Entry<ResourceProfile, Integer> requirementCandidate :
totalRequirements.getResourcesWithCount()) {
ResourceProfile requirementProfile = requirementCandidate.getKey();
// Beware the order when matching resources to requirements, because
// ResourceProfile.UNKNOWN (which only occurs as a requirement) does not match any resource!
if (resourceProfile.isMatching(requirementProfile)
&& requirementCandidate.getValue()
> numAssignedResourcesLookup.apply(requirementProfile)) {
return Optional.of(requirementProfile);
}
}
return Optional.empty();
} | Default implementation of {@link RequirementMatcher}. This matcher finds the first requirement
that a) is still unfulfilled and b) matches the resource profile. | match | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/slots/DefaultRequirementMatcher.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/slots/DefaultRequirementMatcher.java | Apache-2.0 |
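The match implementation above walks the requirements and picks the first one that is both unfulfilled (required count exceeds assigned count) and matched by the offered profile. The "first unfulfilled requirement" part can be sketched in isolation as below; string keys stand in for `ResourceProfile`, and the names are illustrative, not Flink's.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class MatcherDemo {
    // Simplified requirement matching: return the first requirement whose required
    // count still exceeds the number of resources already assigned to it.
    static Optional<String> firstUnfulfilled(
            Map<String, Integer> totalRequirements,
            Function<String, Integer> numAssignedLookup) {
        for (Map.Entry<String, Integer> entry : totalRequirements.entrySet()) {
            if (entry.getValue() > numAssignedLookup.apply(entry.getKey())) {
                return Optional.of(entry.getKey());
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Map<String, Integer> requirements = new LinkedHashMap<>();
        requirements.put("small", 2); // 2 slots required
        requirements.put("large", 1); // 1 slot required
        Map<String, Integer> assigned = Map.of("small", 2, "large", 0);
        // "small" is fully assigned, so the first unfulfilled requirement is "large".
        System.out.println(firstUnfulfilled(requirements, assigned::get)); // prints Optional[large]
    }
}
```

The real matcher adds a second condition, `resourceProfile.isMatching(requirementProfile)`, before a requirement qualifies, which the sketch omits for brevity.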
public <T> void notifyReadyAsync(Callable<T> callable, BiConsumer<T, Throwable> handler) {
workerExecutor.execute(
() -> {
try {
T result = callable.call();
executorToNotify.execute(() -> handler.accept(result, null));
} catch (Throwable t) {
executorToNotify.execute(() -> handler.accept(null, t));
}
});
} | Call the given callable once. Notify the {@link #executorToNotify} to execute the handler.
<p>Note that when this method is invoked multiple times, it is possible that multiple
callables are executed concurrently, and so may the handlers. For example, assuming both the
workerExecutor and executorToNotify are single threaded, the following code may still throw a
<code>ConcurrentModificationException</code>.
<pre>{@code
final List<Integer> list = new ArrayList<>();
// The callable adds the integer 1 to the list. While this works at first glance,
// a ConcurrentModificationException may be thrown because the callable and
// handler may modify the list at the same time.
notifier.notifyReadyAsync(
() -> list.add(1),
(ignoredValue, ignoredThrowable) -> list.add(2));
}</pre>
<p>Instead, the above logic should be implemented as:
<pre>{@code
// Modify the state in the handler.
notifier.notifyReadyAsync(() -> 1, (v, ignoredThrowable) -> {
list.add(v);
list.add(2);
});
}</pre>
@param callable the callable to invoke before notifying the executor.
@param handler the handler to handle the result of the callable. | notifyReadyAsync | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/ExecutorNotifier.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/ExecutorNotifier.java | Apache-2.0 |
public <T> void notifyReadyAsync(
Callable<T> callable,
BiConsumer<T, Throwable> handler,
long initialDelayMs,
long periodMs) {
workerExecutor.scheduleAtFixedRate(
() -> {
try {
T result = callable.call();
executorToNotify.execute(() -> handler.accept(result, null));
} catch (Throwable t) {
executorToNotify.execute(() -> handler.accept(null, t));
}
},
initialDelayMs,
periodMs,
TimeUnit.MILLISECONDS);
} | Call the given callable once. Notify the {@link #executorToNotify} to execute the handler.
<p>Note that when this method is invoked multiple times, it is possible that multiple
callables are executed concurrently, and so may the handlers. For example, assuming both the
workerExecutor and executorToNotify are single threaded, the following code may still throw a
<code>ConcurrentModificationException</code>.
<pre>{@code
final List<Integer> list = new ArrayList<>();
// The callable adds the integer 1 to the list. While this works at first glance,
// a ConcurrentModificationException may be thrown because the callable and
// handler may modify the list at the same time.
notifier.notifyReadyAsync(
() -> list.add(1),
(ignoredValue, ignoredThrowable) -> list.add(2));
}</pre>
<p>Instead, the above logic should be implemented as:
<pre>{@code
// Modify the state in the handler.
notifier.notifyReadyAsync(() -> 1, (v, ignoredThrowable) -> {
list.add(v);
list.add(2);
});
}</pre>
@param callable the callable to execute before notifying the executor.
@param handler the handler that handles the result from the callable.
@param initialDelayMs the initial delay in ms before invoking the given callable.
@param periodMs the interval in ms to invoke the callable. | notifyReadyAsync | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/ExecutorNotifier.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/ExecutorNotifier.java | Apache-2.0 |
void handleUncaughtExceptionFromAsyncCall(Throwable t) {
if (closed) {
return;
}
ExceptionUtils.rethrowIfFatalErrorOrOOM(t);
LOG.error(
"Exception while handling result from async call in {}. Triggering job failover.",
coordinatorThreadName,
t);
failJob(t);
} | Handle an uncaught exception from an async call: fatal errors and OOMs are rethrown, while
any other throwable triggers a job failover.
@param t the throwable thrown from the async call. | handleUncaughtExceptionFromAsyncCall | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | Apache-2.0 |
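The handler above first calls `ExceptionUtils.rethrowIfFatalErrorOrOOM(t)` so that JVM-fatal errors propagate instead of being swallowed by the failover path. A simplified sketch of that filtering step follows; the semantics are assumed and reduced to `VirtualMachineError` (which includes `OutOfMemoryError`), whereas Flink's real utility covers more error categories.

```java
public class FatalErrorDemo {
    // Assumed, simplified semantics of rethrowIfFatalErrorOrOOM: JVM-fatal errors
    // (OutOfMemoryError is a VirtualMachineError) are rethrown rather than handled.
    static void rethrowIfFatalErrorOrOOM(Throwable t) {
        if (t instanceof VirtualMachineError) {
            throw (VirtualMachineError) t;
        }
    }

    public static void main(String[] args) {
        // A plain exception passes through; the caller would then fail the job.
        rethrowIfFatalErrorOrOOM(new RuntimeException("recoverable"));
        try {
            rethrowIfFatalErrorOrOOM(new OutOfMemoryError("fatal"));
        } catch (OutOfMemoryError expected) {
            System.out.println("fatal error rethrown"); // prints "fatal error rethrown"
        }
    }
}
```

The design point is ordering: fatal errors must escape *before* the method logs and fails the job, because a job failover cannot recover a broken JVM.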
void registerSourceReader(int subtaskId, int attemptNumber, String location) {
final Map<Integer, ReaderInfo> attemptReaders =
registeredReaders.computeIfAbsent(subtaskId, k -> new ConcurrentHashMap<>());
checkState(
!attemptReaders.containsKey(attemptNumber),
"ReaderInfo of subtask %s (#%s) already exists.",
subtaskId,
attemptNumber);
attemptReaders.put(attemptNumber, new ReaderInfo(subtaskId, location));
sendCachedSplitsToNewlyRegisteredReader(subtaskId, attemptNumber);
} | Register a source reader.
@param subtaskId the subtask id of the source reader.
@param attemptNumber the attempt number of the source reader.
@param location the location of the source reader. | registerSourceReader | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | Apache-2.0 |
void unregisterSourceReader(int subtaskId, int attemptNumber) {
final Map<Integer, ReaderInfo> attemptReaders = registeredReaders.get(subtaskId);
if (attemptReaders != null) {
attemptReaders.remove(attemptNumber);
if (attemptReaders.isEmpty()) {
registeredReaders.remove(subtaskId);
}
}
} | Unregister a source reader.
@param subtaskId the subtask id of the source reader.
@param attemptNumber the attempt number of the source reader. | unregisterSourceReader | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | Apache-2.0 |
List<SplitT> getAndRemoveUncheckpointedAssignment(int subtaskId, long restoredCheckpointId) {
return assignmentTracker.getAndRemoveUncheckpointedAssignment(
subtaskId, restoredCheckpointId);
} | Get the splits to put back. This only happens when a source reader subtask has failed.
@param subtaskId the failed subtask id.
@param restoredCheckpointId the checkpoint that the task is recovered to.
@return A list of splits that needs to be added back to the {@link SplitEnumerator}. | getAndRemoveUncheckpointedAssignment | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | Apache-2.0 |
public TernaryBoolean isBacklog() {
return backlog;
} | Returns whether the Source is processing backlog data. UNDEFINED is returned if it is not set
by the {@link #setIsProcessingBacklog} method. | isBacklog | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinatorContext.java | Apache-2.0 |
public void onCheckpoint(long checkpointId) throws Exception {
// Include the uncheckpointed assignments to the snapshot.
assignmentsByCheckpointId.put(checkpointId, uncheckpointedAssignments);
uncheckpointedAssignments = new HashMap<>();
} | Behavior of SplitAssignmentTracker on checkpoint. Tracker will mark uncheckpointed assignment
as checkpointed with current checkpoint ID.
@param checkpointId the id of the ongoing checkpoint | onCheckpoint | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | Apache-2.0 |
public byte[] snapshotState(SimpleVersionedSerializer<SplitT> splitSerializer)
throws Exception {
return SourceCoordinatorSerdeUtils.serializeAssignments(
uncheckpointedAssignments, splitSerializer);
} | Take a snapshot of the split assignments. | snapshotState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | Apache-2.0 |
public void restoreState(
SimpleVersionedSerializer<SplitT> splitSerializer, byte[] assignmentData)
throws Exception {
uncheckpointedAssignments =
SourceCoordinatorSerdeUtils.deserializeAssignments(assignmentData, splitSerializer);
} | Restore the state of the SplitAssignmentTracker.
@param splitSerializer The serializer of the splits.
@param assignmentData The state of the SplitAssignmentTracker.
@throws Exception when the state deserialization fails. | restoreState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | Apache-2.0 |
public void onCheckpointComplete(long checkpointId) {
assignmentsByCheckpointId.entrySet().removeIf(entry -> entry.getKey() <= checkpointId);
} | When a checkpoint has completed successfully, this method is invoked to clean up the
assignment history up to and including this checkpoint.
@param checkpointId the id of the successful checkpoint. | onCheckpointComplete | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | Apache-2.0 |
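The pruning in `onCheckpointComplete` above is a single `removeIf` over a map keyed by checkpoint id, dropping every entry at or below the completed checkpoint. A self-contained sketch of that pattern follows; the class and value types here are illustrative, not Flink's.

```java
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

public class CheckpointPruneDemo {
    // Assignment history keyed by checkpoint id (a String stands in for the
    // real per-checkpoint assignment state).
    private final NavigableMap<Long, String> assignmentsByCheckpointId = new TreeMap<>();

    public void record(long checkpointId, String assignment) {
        assignmentsByCheckpointId.put(checkpointId, assignment);
    }

    // Mirror of onCheckpointComplete: drop all history up to and including
    // the completed checkpoint.
    public void onCheckpointComplete(long checkpointId) {
        assignmentsByCheckpointId.entrySet().removeIf(e -> e.getKey() <= checkpointId);
    }

    public Set<Long> trackedCheckpointIds() {
        return assignmentsByCheckpointId.keySet();
    }

    public static void main(String[] args) {
        CheckpointPruneDemo demo = new CheckpointPruneDemo();
        demo.record(1L, "a");
        demo.record(2L, "b");
        demo.record(3L, "c");
        demo.onCheckpointComplete(2L);
        System.out.println(demo.trackedCheckpointIds()); // prints [3]
    }
}
```

Entries for checkpoints newer than the completed one must survive, since a later checkpoint may still be aborted and its assignments put back.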
public void recordSplitAssignment(SplitsAssignment<SplitT> splitsAssignment) {
addSplitAssignment(splitsAssignment, uncheckpointedAssignments);
} | Record a new split assignment.
@param splitsAssignment the new split assignment. | recordSplitAssignment | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SplitAssignmentTracker.java | Apache-2.0 |
@Override
public String toString() {
return "[NoMoreSplitEvent]";
} | A source event sent from the SplitEnumerator to the SourceReader to indicate that no more splits
will be assigned to the source reader. Once the SplitReader finishes reading its
currently assigned splits, it can exit. | toString | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/source/event/NoMoreSplitsEvent.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/source/event/NoMoreSplitsEvent.java | Apache-2.0 |
@Override
public void dispose() {
IOUtils.closeQuietly(cancelStreamRegistry);
if (kvStateRegistry != null) {
kvStateRegistry.unregisterAll();
}
lastName = null;
lastState = null;
keyValueStatesByName.clear();
} | Closes the state backend, releasing all internal resources, but does not delete any
persistent checkpoint data. | dispose | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java | Apache-2.0 |
@Override
public boolean useManagedMemory() {
return true;
} | Abstract base class for state backends that use managed memory. | useManagedMemory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractManagedMemoryStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractManagedMemoryStateBackend.java | Apache-2.0 |
public static StreamCompressionDecorator getCompressionDecorator(
ExecutionConfig executionConfig) {
if (executionConfig != null && executionConfig.isUseSnapshotCompression()) {
return SnappyStreamCompressionDecorator.INSTANCE;
} else {
return UncompressedStreamCompressionDecorator.INSTANCE;
}
} | An abstract base implementation of the {@link StateBackend} interface.
<p>This class currently has no contents and is only kept to avoid breaking the prior class hierarchy for
users. | getCompressionDecorator | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractStateBackend.java | Apache-2.0 |
@Override
default void switchContext(@Nullable RecordContext<K> context) {} | By default, a state backend does nothing when a key is switched in async processing. | switchContext | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | Apache-2.0 |
default boolean requiresLegacySynchronousTimerSnapshots(SnapshotType checkpointType) {
return true;
} | Whether the keyed state backend requires legacy synchronous timer snapshots.
@param checkpointType the type of the checkpoint
@return true as default in case of AsyncKeyedStateBackend | requiresLegacySynchronousTimerSnapshots | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | Apache-2.0 |
default boolean isSafeToReuseKVState() {
return false;
} | Whether it's safe to reuse key-values from the state backend, e.g., for the purpose of
optimization.
<p>NOTE: this method should not be used to check for {@link InternalPriorityQueue}, as the
priority queue could be stored in different locations, e.g., the ForSt state backend could store
it on the JVM heap if HEAP is configured as the timer-service factory.
@return true if it is safe to reuse the key-values from the state backend. | isSafeToReuseKVState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncKeyedStateBackend.java | Apache-2.0 |
public AsyncSnapshotTask toAsyncSnapshotFutureTask(@Nonnull CloseableRegistry taskRegistry)
throws IOException {
return new AsyncSnapshotTask(taskRegistry);
} | Creates a future task from this and registers it with the given {@link CloseableRegistry}.
The task is unregistered again in {@link FutureTask#done()}. | toAsyncSnapshotFutureTask | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncSnapshotCallable.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncSnapshotCallable.java | Apache-2.0 |
private void closeSnapshotIO() {
try {
snapshotCloseableRegistry.close();
} catch (IOException e) {
LOG.warn("Could not properly close incremental snapshot streams.", e);
}
} | This method is invoked after completion of the snapshot and can be overridden to output
logging about the duration of the async part. | closeSnapshotIO | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncSnapshotCallable.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/AsyncSnapshotCallable.java | Apache-2.0 |
public boolean isEmpty() {
for (T state : operatorStateHandles) {
if (state != null) {
return false;
}
}
return true;
} | Check if there are any state handles present. Notice that this can be true even if {@link
#getLength()} is greater than zero, because state handles can be null.
@return true if there are no state handles for any operator. | isEmpty | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/ChainedStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/ChainedStateHandle.java | Apache-2.0 |
public int getLength() {
return operatorStateHandles.size();
} | Returns the length of the operator chain. This can be different from the number of operator
state handles, because some operators in the chain can have no state and thus their state
handle can be null.
@return length of the operator chain | getLength | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/ChainedStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/ChainedStateHandle.java | Apache-2.0 |
public static Optional<CheckpointStorage> fromConfig(
ReadableConfig config, ClassLoader classLoader, @Nullable Logger logger)
throws IllegalStateException, DynamicCodeLoadingException {
Preconditions.checkNotNull(config, "config");
Preconditions.checkNotNull(classLoader, "classLoader");
final String storageName = config.get(CheckpointingOptions.CHECKPOINT_STORAGE);
if (storageName == null) {
if (logger != null) {
logger.debug(
"The configuration {} has not been set in the current"
+ " session's config.yaml. Falling back to a default CheckpointStorage"
+ " type. Users are strongly encouraged to explicitly set this configuration"
+ " so they understand how their applications are checkpointing"
+ " snapshots for fault-tolerance.",
CheckpointingOptions.CHECKPOINT_STORAGE.key());
}
return Optional.empty();
}
switch (storageName.toLowerCase()) {
case JOB_MANAGER_STORAGE_NAME:
return Optional.of(createJobManagerCheckpointStorage(config, classLoader, logger));
case FILE_SYSTEM_STORAGE_NAME:
return Optional.of(createFileSystemCheckpointStorage(config, classLoader, logger));
default:
if (logger != null) {
logger.info("Loading checkpoint storage via factory '{}'", storageName);
}
CheckpointStorageFactory<?> factory;
try {
@SuppressWarnings("rawtypes")
Class<? extends CheckpointStorageFactory> clazz =
Class.forName(storageName, false, classLoader)
.asSubclass(CheckpointStorageFactory.class);
factory = clazz.newInstance();
} catch (ClassNotFoundException e) {
throw new DynamicCodeLoadingException(
"Cannot find configured checkpoint storage factory class: " + storageName,
e);
} catch (ClassCastException | InstantiationException | IllegalAccessException e) {
throw new DynamicCodeLoadingException(
"The class configured under '"
+ CheckpointingOptions.CHECKPOINT_STORAGE.key()
+ "' is not a valid checkpoint storage factory ("
+ storageName
+ ')',
e);
}
return Optional.of(factory.createFromConfig(config, classLoader));
}
} | Loads the checkpoint storage from the configuration, from the parameter
'execution.checkpointing.storage', as defined in {@link
CheckpointingOptions#CHECKPOINT_STORAGE}.
<p>The implementation can be specified either via their shortcut name, or via the class name
of a {@link CheckpointStorageFactory}. If a CheckpointStorageFactory class name is specified,
the factory is instantiated (via its zero-argument constructor) and its {@link
CheckpointStorageFactory#createFromConfig(ReadableConfig, ClassLoader)} method is called.
<p>Recognized shortcut names are '{@value #JOB_MANAGER_STORAGE_NAME}', and '{@value
#FILE_SYSTEM_STORAGE_NAME}'.
@param config The configuration to load the checkpoint storage from
@param classLoader The class loader that should be used to load the checkpoint storage
@param logger Optionally, a logger to log actions to (may be null)
@return The instantiated checkpoint storage.
@throws DynamicCodeLoadingException Thrown if a checkpoint storage factory is configured and
the factory class was not found or the factory could not be instantiated
@throws IllegalConfigurationException May be thrown by the CheckpointStorageFactory when
creating / configuring the checkpoint storage in the factory | fromConfig | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java | Apache-2.0 |
public static CheckpointStorage load(
@Nullable CheckpointStorage fromApplication,
StateBackend configuredStateBackend,
Configuration jobConfig,
Configuration clusterConfig,
ClassLoader classLoader,
@Nullable Logger logger)
throws IllegalConfigurationException, DynamicCodeLoadingException {
Preconditions.checkNotNull(jobConfig, "jobConfig");
Preconditions.checkNotNull(clusterConfig, "clusterConfig");
Preconditions.checkNotNull(classLoader, "classLoader");
Preconditions.checkNotNull(configuredStateBackend, "statebackend");
// Job level config can override the cluster level config.
Configuration mergedConfig = new Configuration(clusterConfig);
mergedConfig.addAll(jobConfig);
// Legacy state backends always take precedence for backwards compatibility.
StateBackend rootStateBackend =
(configuredStateBackend instanceof DelegatingStateBackend)
? ((DelegatingStateBackend) configuredStateBackend)
.getDelegatedStateBackend()
: configuredStateBackend;
if (rootStateBackend instanceof CheckpointStorage) {
if (logger != null) {
logger.info(
"Using legacy state backend {} as Job checkpoint storage",
rootStateBackend);
if (fromApplication != null) {
logger.warn(
"Checkpoint storage passed via StreamExecutionEnvironment is ignored because legacy state backend '{}' is used. {}",
rootStateBackend.getClass().getName(),
LEGACY_PRECEDENCE_LOG_MESSAGE);
}
if (mergedConfig.get(CheckpointingOptions.CHECKPOINT_STORAGE) != null) {
logger.warn(
"Config option '{}' is ignored because legacy state backend '{}' is used. {}",
CheckpointingOptions.CHECKPOINT_STORAGE.key(),
rootStateBackend.getClass().getName(),
LEGACY_PRECEDENCE_LOG_MESSAGE);
}
}
return (CheckpointStorage) rootStateBackend;
}
// In Flink 2.0, checkpoint storage passed from the application will no longer be
// supported.
if (fromApplication != null) {
if (fromApplication instanceof ConfigurableCheckpointStorage) {
if (logger != null) {
logger.info(
"Using job/cluster config to configure application-defined checkpoint storage: {}",
fromApplication);
if (mergedConfig.get(CheckpointingOptions.CHECKPOINT_STORAGE) != null) {
logger.warn(
"Config option '{}' is ignored because the checkpoint storage passed via StreamExecutionEnvironment takes precedence.",
CheckpointingOptions.CHECKPOINT_STORAGE.key());
}
}
return ((ConfigurableCheckpointStorage) fromApplication)
// Use cluster config for backwards compatibility.
.configure(clusterConfig, classLoader);
}
if (logger != null) {
logger.info("Using application defined checkpoint storage: {}", fromApplication);
}
return fromApplication;
}
return fromConfig(mergedConfig, classLoader, logger)
.orElseGet(() -> createDefaultCheckpointStorage(mergedConfig, classLoader, logger));
} | Loads the configured {@link CheckpointStorage} for the job based on the following precedence
rules:
<p>1) If the jobs configured {@link StateBackend} implements {@code CheckpointStorage} it
will always be used. This is to maintain backwards compatibility with older versions of Flink
that intermixed these responsibilities.
<p>2) Use the {@link CheckpointStorage} instance configured via the {@code
StreamExecutionEnvironment}.
<p>3) Use the {@link CheckpointStorage} instance configured via the clusters
<b>config.yaml</b>.
<p>4) Load a default {@link CheckpointStorage} instance.
@param fromApplication The checkpoint storage instance passed to the jobs
StreamExecutionEnvironment. Or null if not was set.
@param configuredStateBackend The jobs configured state backend.
@param jobConfig The job level configuration to load the checkpoint storage from.
@param clusterConfig The cluster level configuration to load the checkpoint storage from.
@param classLoader The class loader that should be used to load the checkpoint storage.
@param logger Optionally, a logger to log actions to (may be null).
@return The configured checkpoint storage instance.
@throws DynamicCodeLoadingException Thrown if a checkpoint storage factory is configured and
the factory class was not found or the factory could not be instantiated
@throws IllegalConfigurationException May be thrown by the CheckpointStorageFactory when
creating / configuring the checkpoint storage in the factory | load | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLoader.java | Apache-2.0 |
public byte[] getReferenceBytes() {
// return a non null object always
return encodedReference != null ? encodedReference : new byte[0];
} | Gets the reference bytes.
<p><b>Important:</b> For efficiency, this method does not make a defensive copy, so the
caller must not modify the bytes in the array. | getReferenceBytes | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | Apache-2.0 |
protected final Object readResolve() throws ObjectStreamException {
return encodedReference == null ? DEFAULT : this;
} | readResolve() preserves the singleton property of the default value. | readResolve | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | Apache-2.0 |
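The `readResolve()` above keeps the `DEFAULT` reference a singleton across Java serialization: a deserialized copy of the default value is swapped back for the shared instance. A self-contained sketch of the pattern follows; the class name and the null-payload convention are illustrative, modeled on the surrounding code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamException;
import java.io.Serializable;

public class SingletonRefDemo implements Serializable {
    private static final long serialVersionUID = 1L;

    // Singleton default instance, marked by a null payload.
    static final SingletonRefDemo DEFAULT = new SingletonRefDemo(null);

    private final byte[] encodedReference;

    SingletonRefDemo(byte[] encodedReference) {
        this.encodedReference = encodedReference;
    }

    // Called by Java serialization after deserialization: map the default
    // value back to the singleton instead of keeping the fresh copy.
    private Object readResolve() throws ObjectStreamException {
        return encodedReference == null ? DEFAULT : this;
    }

    // Serialize and deserialize a value in memory.
    static SingletonRefDemo roundTrip(SingletonRefDemo value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(value);
            }
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (SingletonRefDemo) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // readResolve maps the deserialized copy back to the singleton.
        System.out.println(roundTrip(DEFAULT) == DEFAULT); // prints "true"
    }
}
```

Without `readResolve()`, every deserialization would mint a new "default" object, breaking `==` identity checks against `DEFAULT`.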
public static CheckpointStorageLocationReference getDefault() {
return DEFAULT;
} | The singleton object for the default reference. | getDefault | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStorageLocationReference.java | Apache-2.0 |
default void reusePreviousStateHandle(Collection<? extends StreamStateHandle> previousHandle) {
// Does nothing for normal stream factory
} | A callback method when some previous handle is reused. It is needed by the file merging
mechanism (FLIP-306) which will manage the life cycle of underlying files by file-reusing
information.
@param previousHandle the previous handles that will be reused. | reusePreviousStateHandle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamFactory.java | Apache-2.0 |
default boolean couldReuseStateHandle(StreamStateHandle stateHandle) {
// By default, the CheckpointStreamFactory doesn't support snapshot-file-merging, so the
// SegmentFileStateHandle type of stateHandle can not be reused.
return !FileMergingSnapshotManager.isFileMergingHandle(stateHandle);
} | A pre-check hook invoked before the checkpoint writer reuses a state handle. If this returns
false, it is not recommended for the writer to rewrite the state file, considering the space
amplification.
@param stateHandle the handle to be reused.
@return true if it can be reused. | couldReuseStateHandle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamFactory.java | Apache-2.0 |
@Nonnull
static SnapshotResult<KeyedStateHandle> toKeyedStateHandleSnapshotResult(
@Nonnull SnapshotResult<StreamStateHandle> snapshotResult,
@Nonnull KeyGroupRangeOffsets keyGroupRangeOffsets,
@Nonnull KeyedStateHandleFactory stateHandleFactory) {
StreamStateHandle jobManagerOwnedSnapshot = snapshotResult.getJobManagerOwnedSnapshot();
if (jobManagerOwnedSnapshot != null) {
KeyedStateHandle jmKeyedState =
stateHandleFactory.create(keyGroupRangeOffsets, jobManagerOwnedSnapshot);
StreamStateHandle taskLocalSnapshot = snapshotResult.getTaskLocalSnapshot();
if (taskLocalSnapshot != null) {
KeyedStateHandle localKeyedState =
stateHandleFactory.create(keyGroupRangeOffsets, taskLocalSnapshot);
return SnapshotResult.withLocalState(jmKeyedState, localKeyedState);
} else {
return SnapshotResult.of(jmKeyedState);
}
} else {
return SnapshotResult.empty();
}
} | Helper method that takes a {@link SnapshotResult<StreamStateHandle>} and a {@link
KeyGroupRangeOffsets} and creates a {@link SnapshotResult<KeyedStateHandle>} by combining the
key groups offsets with all the present stream state handles. | toKeyedStateHandleSnapshotResult | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProvider.java | Apache-2.0 |
public StreamStateHandle closeAndGetPrimaryHandle() throws IOException {
flushInternalBuffer();
return primaryOutputStream.closeAndGetHandle();
} | Returns the state handle from the {@link #primaryOutputStream}. | closeAndGetPrimaryHandle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/DuplicatingCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/DuplicatingCheckpointOutputStream.java | Apache-2.0 |
@Override
public void close() throws IOException {
// users should not be able to actually close the stream, it is closed by the system.
// TODO if we want to support async writes, this call could trigger a callback to the
// snapshot context that a handle is available.
} | Checkpoint output stream that allows writing raw keyed state in a partitioned way, split into
key-groups. | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | Apache-2.0 |
public KeyGroupsList getKeyGroupList() {
return keyGroupRangeOffsets.getKeyGroupRange();
} | Returns a list of all key-groups which can be written to this stream. | getKeyGroupList | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | Apache-2.0 |
public void startNewKeyGroup(int keyGroupId) throws IOException {
if (isKeyGroupAlreadyStarted(keyGroupId)) {
throw new IOException("Key group " + keyGroupId + " already registered!");
}
keyGroupRangeOffsets.setKeyGroupOffset(keyGroupId, delegate.getPos());
currentKeyGroup = keyGroupId;
} | User code can call this method to signal that it begins to write a new key group with the
given key group id. This id must be within the {@link KeyGroupsList} provided by the stream.
Each key-group can only be started once and is considered final/immutable as soon as this
method is called again. | startNewKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | Apache-2.0 |
public boolean isKeyGroupAlreadyStarted(int keyGroupId) {
return NO_OFFSET_SET != keyGroupRangeOffsets.getKeyGroupOffset(keyGroupId);
} | Returns true, if the key group with the given id was already started. The key group might not
yet be finished, if its id is equal to the return value of {@link #getCurrentKeyGroup()}. | isKeyGroupAlreadyStarted | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | Apache-2.0 |
public boolean isKeyGroupAlreadyFinished(int keyGroupId) {
return isKeyGroupAlreadyStarted(keyGroupId) && keyGroupId != getCurrentKeyGroup();
} | Returns true if the key group is already completely written and immutable. It was started and
since then another key group has been started. | isKeyGroupAlreadyFinished | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java | Apache-2.0 |
@Nonnull
default <N, SV, S extends State, IS extends S> IS createOrUpdateInternalState(
@Nonnull TypeSerializer<N> namespaceSerializer,
@Nonnull StateDescriptor<S, SV> stateDesc)
throws Exception {
return createOrUpdateInternalState(
namespaceSerializer, stateDesc, StateSnapshotTransformFactory.noTransform());
} | Creates or updates internal state and returns a new {@link InternalKvState}.
@param namespaceSerializer TypeSerializer for the state namespace.
@param stateDesc The {@code StateDescriptor} that contains the name of the state.
@param <N> The type of the namespace.
@param <SV> The type of the stored state value.
@param <S> The type of the public API state.
@param <IS> The type of internal state. | createOrUpdateInternalState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateFactory.java | Apache-2.0 |
@Nonnull
default <N, SV, SEV, S extends State, IS extends S> IS createOrUpdateInternalState(
@Nonnull TypeSerializer<N> namespaceSerializer,
@Nonnull StateDescriptor<S, SV> stateDesc,
@Nonnull StateSnapshotTransformFactory<SEV> snapshotTransformFactory,
boolean allowFutureMetadataUpdates)
throws Exception {
if (allowFutureMetadataUpdates) {
throw new UnsupportedOperationException(
                    this.getClass().getName() + " doesn't support allowing future metadata updates");
} else {
return createOrUpdateInternalState(
namespaceSerializer, stateDesc, snapshotTransformFactory);
}
} | Creates or updates internal state and returns a new {@link InternalKvState}.
@param namespaceSerializer TypeSerializer for the state namespace.
@param stateDesc The {@code StateDescriptor} that contains the name of the state.
@param snapshotTransformFactory factory of state snapshot transformer.
@param allowFutureMetadataUpdates whether to allow future metadata updates.
@param <N> The type of the namespace.
@param <SV> The type of the stored state value.
@param <SEV> The type of the stored state value or entry for collection types (list or map).
@param <S> The type of the public API state.
@param <IS> The type of internal state. | createOrUpdateInternalState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateFactory.java | Apache-2.0 |
@Nonnull
@Override
public Object extractKeyFromElement(@Nonnull Keyed<?> element) {
return element.getKey();
} | Function to extract a key from a given object.
@param <T> type of the element from which we extract the key. | extractKeyFromElement | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyExtractorFunction.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyExtractorFunction.java | Apache-2.0 |
@SuppressWarnings("unchecked")
static <T extends Keyed<?>> KeyExtractorFunction<T> forKeyedObjects() {
return (KeyExtractorFunction<T>) FOR_KEYED_OBJECTS;
} | Returns the key for the given element by which the key-group can be computed. | forKeyedObjects | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyExtractorFunction.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyExtractorFunction.java | Apache-2.0 |
public PartitioningResult<T> partitionByKeyGroup() {
if (computedResult == null) {
reportAllElementKeyGroups();
int outputNumberOfElements = buildHistogramByAccumulatingCounts();
executePartitioning(outputNumberOfElements);
}
return computedResult;
} | Partitions the data into key-groups and returns the result as a {@link PartitioningResult}. | partitionByKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | Apache-2.0 |
protected void reportAllElementKeyGroups() {
Preconditions.checkState(partitioningSource.length >= numberOfElements);
for (int i = 0; i < numberOfElements; ++i) {
int keyGroup =
KeyGroupRangeAssignment.assignToKeyGroup(
keyExtractorFunction.extractKeyFromElement(partitioningSource[i]),
totalKeyGroups);
reportKeyGroupOfElementAtIndex(i, keyGroup);
}
} | This method iterates over the input data and reports the key-group for each element. | reportAllElementKeyGroups | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | Apache-2.0 |
protected void reportKeyGroupOfElementAtIndex(int index, int keyGroup) {
final int keyGroupIndex = keyGroup - firstKeyGroup;
elementKeyGroups[index] = keyGroupIndex;
++counterHistogram[keyGroupIndex];
} | This method reports in the bookkeeping data that the element at the given index belongs to
the given key-group. | reportKeyGroupOfElementAtIndex | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | Apache-2.0 |
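The `KeyGroupPartitioner` rows above describe a two-pass scheme: report each element's key-group into a histogram, accumulate the counts into start offsets, then scatter elements into a partitioned output array. A minimal standalone sketch of that counting-sort idea (plain `int` key-groups stand in for Flink's key extractor; class and method names are illustrative):

```java
import java.util.Arrays;

// Sketch of two-pass key-group bucketing: count per group, accumulate
// counts into start offsets, then scatter elements into group order.
public class HistogramPartitioner {

    static int[] partition(int[] keyGroups, int totalKeyGroups, int[] outStarts) {
        int[] histogram = new int[totalKeyGroups];
        for (int kg : keyGroups) {
            histogram[kg]++; // pass 1: count elements per key-group
        }
        int running = 0;
        for (int g = 0; g < totalKeyGroups; g++) {
            outStarts[g] = running; // start offset of group g in the output
            running += histogram[g];
        }
        int[] cursor = Arrays.copyOf(outStarts, totalKeyGroups);
        int[] result = new int[keyGroups.length];
        for (int kg : keyGroups) {
            result[cursor[kg]++] = kg; // pass 2: scatter into group order
        }
        return result;
    }

    public static void main(String[] args) {
        int[] starts = new int[4];
        int[] out = partition(new int[] {3, 0, 2, 0, 3, 1}, 4, starts);
        System.out.println(Arrays.toString(out) + " starts=" + Arrays.toString(starts));
        // prints [0, 0, 1, 2, 3, 3] starts=[0, 2, 3, 4]
    }
}
```

The offsets array mirrors the role of `KeyGroupRangeOffsets` at read time: each group's elements occupy a contiguous slice starting at its recorded offset.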
@Override
public void readMappingsInKeyGroup(@Nonnull DataInputView in, @Nonnegative int keyGroupId)
throws IOException {
int numElements = in.readInt();
for (int i = 0; i < numElements; i++) {
T element = readerFunction.readElement(in);
elementConsumer.consume(element, keyGroupId);
}
} | General algorithm to read key-grouped state that was written from a {@link
PartitioningResultImpl}.
@param <T> type of the elements to read. | readMappingsInKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupPartitioner.java | Apache-2.0 |
@Override
public boolean contains(int keyGroup) {
return keyGroup >= startKeyGroup && keyGroup <= endKeyGroup;
} | Checks whether or not a single key-group is contained in the range.
@param keyGroup Key-group to check for inclusion.
@return True, only if the key-group is in the range. | contains | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | Apache-2.0 |
public KeyGroupRange getIntersection(KeyGroupRange other) {
int start = Math.max(startKeyGroup, other.startKeyGroup);
int end = Math.min(endKeyGroup, other.endKeyGroup);
return start <= end ? new KeyGroupRange(start, end) : EMPTY_KEY_GROUP_RANGE;
} | Create a range that represent the intersection between this range and the given range.
@param other A KeyGroupRange to intersect.
@return Key-group range that is the intersection between this and the given key-group range. | getIntersection | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | Apache-2.0 |
@Override
public int getNumberOfKeyGroups() {
return 1 + endKeyGroup - startKeyGroup;
} | @return The number of key-groups in the range | getNumberOfKeyGroups | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | Apache-2.0 |
public int getStartKeyGroup() {
return startKeyGroup;
} | @return The first key-group in the range. | getStartKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | Apache-2.0 |
public static KeyGroupRange of(int startKeyGroup, int endKeyGroup) {
return startKeyGroup <= endKeyGroup
? new KeyGroupRange(startKeyGroup, endKeyGroup)
: EMPTY_KEY_GROUP_RANGE;
} | Factory method that also handles creation of empty key-groups.
@param startKeyGroup start of the range (inclusive)
@param endKeyGroup end of the range (inclusive)
@return the key-group from start to end or an empty key-group range. | of | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRange.java | Apache-2.0 |
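The `KeyGroupRange` rows above (contains, intersection, size, factory with empty-range handling) can be condensed into a small self-contained sketch; field and class names here are assumptions for illustration, not Flink's actual implementation:

```java
// Minimal inclusive key-group range, mirroring the contains/getIntersection/of
// logic shown above. start > end encodes the empty range.
public class SimpleKeyGroupRange {
    final int start, end;

    SimpleKeyGroupRange(int start, int end) { this.start = start; this.end = end; }

    static SimpleKeyGroupRange of(int start, int end) {
        // factory that collapses inverted bounds into the empty range
        return start <= end ? new SimpleKeyGroupRange(start, end) : new SimpleKeyGroupRange(0, -1);
    }

    boolean contains(int keyGroup) { return keyGroup >= start && keyGroup <= end; }

    int numberOfKeyGroups() { return Math.max(0, 1 + end - start); }

    SimpleKeyGroupRange intersection(SimpleKeyGroupRange other) {
        return of(Math.max(start, other.start), Math.min(end, other.end));
    }

    public static void main(String[] args) {
        SimpleKeyGroupRange a = SimpleKeyGroupRange.of(0, 63);
        SimpleKeyGroupRange b = SimpleKeyGroupRange.of(32, 127);
        SimpleKeyGroupRange i = a.intersection(b);
        System.out.println(i.start + ".." + i.end + " (" + i.numberOfKeyGroups() + " groups)");
        // prints 32..63 (32 groups)
    }
}
```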
public static int assignKeyToParallelOperator(Object key, int maxParallelism, int parallelism) {
Preconditions.checkNotNull(key, "Assigned key must not be null!");
return computeOperatorIndexForKeyGroup(
maxParallelism, parallelism, assignToKeyGroup(key, maxParallelism));
} | Assigns the given key to a parallel operator index.
@param key the key to assign
@param maxParallelism the maximum supported parallelism, aka the number of key-groups.
@param parallelism the current parallelism of the operator
@return the index of the parallel operator to which the given key should be routed. | assignKeyToParallelOperator | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | Apache-2.0 |
public static int assignToKeyGroup(Object key, int maxParallelism) {
Preconditions.checkNotNull(key, "Assigned key must not be null!");
return computeKeyGroupForKeyHash(key.hashCode(), maxParallelism);
} | Assigns the given key to a key-group index.
@param key the key to assign
@param maxParallelism the maximum supported parallelism, aka the number of key-groups.
@return the key-group to which the given key is assigned | assignToKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | Apache-2.0 |
public static int computeKeyGroupForKeyHash(int keyHash, int maxParallelism) {
return MathUtils.murmurHash(keyHash) % maxParallelism;
} | Assigns the given key to a key-group index.
@param keyHash the hash of the key to assign
@param maxParallelism the maximum supported parallelism, aka the number of key-groups.
@return the key-group to which the given key is assigned | computeKeyGroupForKeyHash | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | Apache-2.0 |
public static KeyGroupRange computeKeyGroupRangeForOperatorIndex(
int maxParallelism, int parallelism, int operatorIndex) {
checkParallelismPreconditions(parallelism);
checkParallelismPreconditions(maxParallelism);
Preconditions.checkArgument(
maxParallelism >= parallelism,
"Maximum parallelism must not be smaller than parallelism.");
int start = ((operatorIndex * maxParallelism + parallelism - 1) / parallelism);
int end = ((operatorIndex + 1) * maxParallelism - 1) / parallelism;
return new KeyGroupRange(start, end);
} | Computes the range of key-groups that are assigned to a given operator under the given
parallelism and maximum parallelism.
<p>IMPORTANT: maxParallelism must be <= Short.MAX_VALUE + 1 to avoid rounding problems in
this method. If we ever want to go beyond this boundary, this method must perform arithmetic
on long values.
@param maxParallelism Maximal parallelism that the job was initially created with.
@param parallelism The current parallelism under which the job runs. Must be <=
maxParallelism.
@param operatorIndex index of the operator. 0 <= operatorIndex < parallelism.
@return the computed key-group range for the operator. | computeKeyGroupRangeForOperatorIndex | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | Apache-2.0 |
public static int computeOperatorIndexForKeyGroup(
int maxParallelism, int parallelism, int keyGroupId) {
return keyGroupId * parallelism / maxParallelism;
} | Computes the index of the operator to which a key-group belongs under the given parallelism
and maximum parallelism.
<p>IMPORTANT: maxParallelism must be <= Short.MAX_VALUE + 1 to avoid rounding problems in
this method. If we ever want to go beyond this boundary, this method must perform arithmetic
on long values.
@param maxParallelism Maximal parallelism that the job was initially created with. 0 <
parallelism <= maxParallelism <= Short.MAX_VALUE + 1 must hold.
@param parallelism The current parallelism under which the job runs. Must be <=
maxParallelism.
@param keyGroupId Id of a key-group. 0 <= keyGroupID < maxParallelism.
@return The index of the operator to which elements from the given key-group should be routed
under the given parallelism and maxParallelism. | computeOperatorIndexForKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeAssignment.java | Apache-2.0 |
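The two assignment methods above are meant to agree: every key-group in the range computed for an operator must route back to that operator's index. A sketch of the integer arithmetic, consistency-checked in `main` (class and method names are illustrative, not Flink's):

```java
// Sketch of the key-group-to-operator routing arithmetic described above.
public class KeyGroupMath {

    // Index of the operator responsible for a key-group.
    static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroupId) {
        return keyGroupId * parallelism / maxParallelism;
    }

    // Inclusive [start, end] range of key-groups owned by one operator.
    static int[] rangeForOperatorIndex(int maxParallelism, int parallelism, int operatorIndex) {
        int start = (operatorIndex * maxParallelism + parallelism - 1) / parallelism;
        int end = ((operatorIndex + 1) * maxParallelism - 1) / parallelism;
        return new int[] {start, end};
    }

    public static void main(String[] args) {
        int maxParallelism = 128, parallelism = 3;
        // Every key-group must route back to the operator whose range contains it.
        for (int op = 0; op < parallelism; op++) {
            int[] range = rangeForOperatorIndex(maxParallelism, parallelism, op);
            for (int kg = range[0]; kg <= range[1]; kg++) {
                if (operatorIndexForKeyGroup(maxParallelism, parallelism, kg) != op) {
                    throw new AssertionError("key-group " + kg + " mis-routed");
                }
            }
        }
        System.out.println("ranges and routing are consistent");
    }
}
```

The `int` arithmetic is exactly why the docstrings bound maxParallelism by `Short.MAX_VALUE + 1`: `keyGroupId * parallelism` must not overflow.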
public static IllegalArgumentException newIllegalKeyGroupException(
int keyGroup, KeyGroupRange keyGroupRange) {
return new IllegalArgumentException(
String.format(
"Key group %d is not in %s. Unless you're directly using low level state access APIs, this"
+ " is most likely caused by non-deterministic shuffle key (hashCode and equals implementation).",
keyGroup, keyGroupRange));
} | This class combines a key-group range with offsets that correspond to the key-groups in the
range. | newIllegalKeyGroupException | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | Apache-2.0 |
public long getKeyGroupOffset(int keyGroup) {
return offsets[computeKeyGroupIndex(keyGroup)];
} | Returns the offset for the given key-group. The key-group must be contained in the range.
@param keyGroup Key-group for which we query the offset. Key-group must be contained in the
range.
@return The offset for the given key-group which must be contained in the range. | getKeyGroupOffset | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | Apache-2.0 |
public void setKeyGroupOffset(int keyGroup, long offset) {
offsets[computeKeyGroupIndex(keyGroup)] = offset;
} | Sets the offset for the given key-group. The key-group must be contained in the range.
@param keyGroup Key-group for which we set the offset. Must be contained in the range.
@param offset Offset for the key-group. | setKeyGroupOffset | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupRangeOffsets.java | Apache-2.0 |
public static KeyGroupsStateHandle restore(
KeyGroupRangeOffsets groupRangeOffsets,
StreamStateHandle streamStateHandle,
StateHandleID stateHandleId) {
return new KeyGroupsStateHandle(groupRangeOffsets, streamStateHandle, stateHandleId);
} | @param groupRangeOffsets range of key-group ids that in the state of this handle
@param streamStateHandle handle to the actual state of the key-groups
@param stateHandleId unique id of this state handle | restore | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | Apache-2.0 |
public KeyGroupRangeOffsets getGroupRangeOffsets() {
return groupRangeOffsets;
} | @return the internal key-group range to offsets metadata | getGroupRangeOffsets | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | Apache-2.0 |
public StreamStateHandle getDelegateStateHandle() {
return stateHandle;
} | @return The handle to the actual states | getDelegateStateHandle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | Apache-2.0 |
public long getOffsetForKeyGroup(int keyGroupId) {
return groupRangeOffsets.getKeyGroupOffset(keyGroupId);
} | @param keyGroupId the id of a key-group. the id must be contained in the range of this
handle.
@return offset to the position of data for the provided key-group in the stream referenced by
this state handle | getOffsetForKeyGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupsStateHandle.java | Apache-2.0 |
public int getKeyGroupId() {
return keyGroupId;
} | Returns the key group that corresponds to the data in the provided stream. | getKeyGroupId | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupStatePartitionStreamProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyGroupStatePartitionStreamProvider.java | Apache-2.0 |
default boolean isRestored() {
return getRestoredCheckpointId().isPresent();
} | Returns true, if state was restored from the snapshot of a previous execution. | isRestored | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/ManagedInitializationContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/ManagedInitializationContext.java | Apache-2.0 |
StreamStateHandle closeAndGetHandleAfterLeasesReleased() throws IOException {
try {
resourceGuard.closeInterruptibly();
return delegate.closeAndGetHandle();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
delegate.closeAndGetHandle();
throw new IOException("Interrupted while awaiting handle.", e);
}
} | This method should not be public, so as not to expose internals to user code. Closes the
underlying stream and returns a state handle. | closeAndGetHandleAfterLeasesReleased | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/NonClosingCheckpointOutputStream.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/NonClosingCheckpointOutputStream.java | Apache-2.0 |
@SuppressWarnings("unchecked")
static <T extends PriorityComparable<?>> PriorityComparator<T> forPriorityComparableObjects() {
return (PriorityComparator<T>) FOR_PRIORITY_COMPARABLE_OBJECTS;
} | Compares two objects for priority. Returns a negative integer, zero, or a positive integer as
the first argument has lower, equal to, or higher priority than the second.
@param left left operand in the comparison by priority.
@param right right operand in the comparison by priority.
@return a negative integer, zero, or a positive integer as the first argument has lower,
equal to, or higher priority than the second. | forPriorityComparableObjects | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityComparator.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityComparator.java | Apache-2.0 |
default <T extends HeapPriorityQueueElement & PriorityComparable<? super T> & Keyed<?>>
KeyGroupedInternalPriorityQueue<T> create(
@Nonnull String stateName,
@Nonnull TypeSerializer<T> byteOrderedElementSerializer,
boolean allowFutureMetadataUpdates) {
if (allowFutureMetadataUpdates) {
throw new UnsupportedOperationException(
this.getClass().getName()
                            + " doesn't support allowing future metadata updates.");
} else {
return create(stateName, byteOrderedElementSerializer);
}
} | Creates a {@link KeyGroupedInternalPriorityQueue}.
@param stateName unique name for associated with this queue.
@param byteOrderedElementSerializer a serializer whose format is lexicographically
ordered in alignment with elementPriorityComparator.
@param allowFutureMetadataUpdates whether to allow future metadata updates.
@param <T> type of the stored elements.
@return the queue with the specified unique name. | create | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/PriorityQueueSetFactory.java | Apache-2.0 |
public static RegisteredStateMetaInfoBase fromMetaInfoSnapshot(
@Nonnull StateMetaInfoSnapshot snapshot) {
final StateMetaInfoSnapshot.BackendStateType backendStateType =
snapshot.getBackendStateType();
switch (backendStateType) {
case KEY_VALUE:
return new RegisteredKeyValueStateBackendMetaInfo<>(snapshot);
case OPERATOR:
return new RegisteredOperatorStateBackendMetaInfo<>(snapshot);
case BROADCAST:
return new RegisteredBroadcastStateBackendMetaInfo<>(snapshot);
case PRIORITY_QUEUE:
return new RegisteredPriorityQueueStateBackendMetaInfo<>(snapshot);
case KEY_VALUE_V2:
if (snapshot.getOption(
StateMetaInfoSnapshot.CommonOptionsKeys.KEYED_STATE_TYPE.toString())
.equals(StateDescriptor.Type.MAP.toString())) {
return new org.apache.flink.runtime.state.v2
.RegisteredKeyAndUserKeyValueStateBackendMetaInfo<>(snapshot);
} else {
return new org.apache.flink.runtime.state.v2
.RegisteredKeyValueStateBackendMetaInfo<>(snapshot);
}
default:
throw new IllegalArgumentException(
"Unknown backend state type: " + backendStateType);
}
} | Create a new metadata object with a lazy serializer provider, using the existing one as a snapshot.
Sometimes metadata was just created or updated, but its StateSerializerProvider will not
allow further updates. So this method could replace it with a new one that contains a fresh
LazilyRegisteredStateSerializerProvider. | fromMetaInfoSnapshot | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/RegisteredStateMetaInfoBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/RegisteredStateMetaInfoBase.java | Apache-2.0 |
public final Key asMapKey() {
return new Key(this);
} | Returns a wrapper that can be used as a key in {@link java.util.Map}. | asMapKey | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/RegisteredStateMetaInfoBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/RegisteredStateMetaInfoBase.java | Apache-2.0 |
@Nonnull
public <N> byte[] buildCompositeKeyNamespace(
@Nonnull N namespace, @Nonnull TypeSerializer<N> namespaceSerializer) {
try {
serializeNamespace(namespace, namespaceSerializer);
return keyOutView.getCopyOfBuffer();
} catch (IOException shouldNeverHappen) {
throw new FlinkRuntimeException(shouldNeverHappen);
}
} | Returns a serialized composite key, from the key and key-group provided in a previous call to
{@link #setKeyAndKeyGroup(Object, int)} and the given namespace.
@param namespace the namespace to concatenate for the serialized composite key bytes.
@param namespaceSerializer the serializer to obtain the serialized form of the namespace.
@param <N> the type of the namespace.
@return the bytes for the serialized composite key of key-group, key, namespace. | buildCompositeKeyNamespace | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | Apache-2.0 |
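The `SerializedCompositeKeyBuilder` rows above describe concatenating the serialized key-group, key, and namespace (optionally a user-key) into one byte array. A rough illustration of that layout; the 2-byte key-group prefix and the use of `writeUTF` are assumptions for this sketch, not necessarily Flink's actual encoding:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative composite key layout: key-group bytes, then key, then namespace.
public class CompositeKeySketch {
    static byte[] build(int keyGroup, String key, String namespace) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeShort(keyGroup);  // key-group prefix keeps entries clustered per group
        out.writeUTF(key);         // serialized key
        out.writeUTF(namespace);   // serialized namespace appended last
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] a = build(7, "user-42", "window-1");
        byte[] b = build(7, "user-42", "window-2");
        // same key-group and key => common prefix; only the namespace suffix differs
        System.out.println(a.length + " vs " + b.length);
    }
}
```

Writing the key-group first is what lets a backend scan all state of one key-group as a contiguous byte range, which the checkpointing rows above rely on.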
@Nonnull
public <N, UK> byte[] buildCompositeKeyNamesSpaceUserKey(
@Nonnull N namespace,
@Nonnull TypeSerializer<N> namespaceSerializer,
@Nonnull UK userKey,
@Nonnull TypeSerializer<UK> userKeySerializer)
throws IOException {
serializeNamespace(namespace, namespaceSerializer);
userKeySerializer.serialize(userKey, keyOutView);
return keyOutView.getCopyOfBuffer();
} | Returns a serialized composite key, from the key and key-group provided in a previous call to
{@link #setKeyAndKeyGroup(Object, int)} and the given namespace, followed by the given
user-key.
@param namespace the namespace to concatenate for the serialized composite key bytes.
@param namespaceSerializer the serializer to obtain the serialized form of the namespace.
@param userKey the user-key to concatenate for the serialized composite key, after the
namespace.
@param userKeySerializer the serializer to obtain the serialized form of the user-key.
@param <N> the type of the namespace.
@param <UK> the type of the user-key.
@return the bytes for the serialized composite key of key-group, key, namespace. | buildCompositeKeyNamesSpaceUserKey | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | Apache-2.0 |
@Nonnull
public <UK> byte[] buildCompositeKeyUserKey(
@Nonnull UK userKey, @Nonnull TypeSerializer<UK> userKeySerializer) throws IOException {
// this should only be called when there is already a namespace written.
assert isNamespaceWritten();
resetToNamespace();
userKeySerializer.serialize(userKey, keyOutView);
return keyOutView.getCopyOfBuffer();
} | Returns a serialized composite key, from the key and key-group provided in a previous call to
{@link #setKeyAndKeyGroup(Object, int)} and the namespace provided in {@link
#setNamespace(Object, TypeSerializer)}, followed by the given user-key.
@param userKey the user-key to concatenate for the serialized composite key, after the
namespace.
@param userKeySerializer the serializer to obtain the serialized form of the user-key.
@param <UK> the type of the user-key.
@return the bytes for the serialized composite key of key-group, key, namespace. | buildCompositeKeyUserKey | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SerializedCompositeKeyBuilder.java | Apache-2.0 |
default StreamStateHandle registerReference(
SharedStateRegistryKey registrationKey, StreamStateHandle state, long checkpointID) {
return registerReference(registrationKey, state, checkpointID, false);
} | Shortcut for {@link #registerReference(SharedStateRegistryKey, StreamStateHandle, long,
boolean)} with preventDiscardingCreatedCheckpoint = false. | registerReference | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistry.java | Apache-2.0 |
@Override
public void run() {
try {
toDispose.discardState();
} catch (Exception e) {
LOG.warn(
"A problem occurred during asynchronous disposal of a shared state object: {}",
toDispose,
e);
}
} | Encapsulates the operation to delete state handles asynchronously. | run | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistryImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistryImpl.java | Apache-2.0 |
public static SharedStateRegistryKey forStreamStateHandle(StreamStateHandle handle) {
String keyString = handle.getStreamStateHandleID().getKeyString();
// key strings tend to be longer, so we use the MD5 of the key string to save memory
return new SharedStateRegistryKey(
UUID.nameUUIDFromBytes(keyString.getBytes(StandardCharsets.UTF_8)).toString());
} | Create a unique key based on physical id. | forStreamStateHandle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistryKey.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistryKey.java | Apache-2.0 |
public Path[] listDirectory() throws IOException {
return FileUtils.listDirectory(directory);
} | List the files in the snapshot directory.
@return the files in the snapshot directory.
@throws IOException if there is a problem creating the file statuses. | listDirectory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | Apache-2.0 |
public boolean cleanup() throws IOException {
if (state.compareAndSet(State.ONGOING, State.DELETED)) {
FileUtils.deleteDirectory(directory.toFile());
}
return true;
} | Calling this method will attempt to delete the underlying snapshot directory recursively, if the
state is "ongoing". In this case, the state will be set to "deleted" as a result of this
call.
@return <code>true</code> if delete is successful, <code>false</code> otherwise.
@throws IOException if an exception happens during the delete. | cleanup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | Apache-2.0 |
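The ONGOING-to-DELETED transition above is guarded by an atomic compare-and-set, so even concurrent `cleanup()` calls delete the directory at most once. A self-contained sketch of that one-shot pattern (names are illustrative; the real class lives in `SnapshotDirectory`):

```java
import java.util.concurrent.atomic.AtomicReference;

public class OneShotCleanup {
    enum State { ONGOING, COMPLETED, DELETED }

    private final AtomicReference<State> state = new AtomicReference<>(State.ONGOING);
    int deletions = 0; // exposed for the demo only

    // Deletes at most once: only the caller that wins the CAS from
    // ONGOING to DELETED performs the (stand-in) delete action.
    boolean cleanup() {
        if (state.compareAndSet(State.ONGOING, State.DELETED)) {
            deletions++; // stand-in for FileUtils.deleteDirectory(directory.toFile())
        }
        return true;
    }
}
```

Repeated calls are harmless: the second `cleanup()` fails the CAS and skips the delete.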
public boolean isSnapshotCompleted() {
return State.COMPLETED == state.get();
} | Returns <code>true</code> if the snapshot is marked as completed. | isSnapshotCompleted | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | Apache-2.0 |
public static SnapshotDirectory temporary(@Nonnull File directory) throws IOException {
return new TemporarySnapshotDirectory(directory);
} | Creates a local temporary snapshot directory for the given path. This will always return
"null" as the result of {@link #completeSnapshotAndGetHandle()} and always attempts to delete the
underlying directory in {@link #cleanup()}. | temporary | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/SnapshotDirectory.java | Apache-2.0 |
@Experimental
default <K> AsyncKeyedStateBackend<K> createAsyncKeyedStateBackend(
KeyedStateBackendParameters<K> parameters) throws Exception {
throw new UnsupportedOperationException(
"Don't support createAsyncKeyedStateBackend by default");
} | Creates a new {@link AsyncKeyedStateBackend} which supports to access <b>keyed state</b>
asynchronously.
<p><i>Keyed State</i> is state where each value is bound to a key.
@param parameters The arguments bundle for creating {@link AsyncKeyedStateBackend}.
@param <K> The type of the keys by which the state is organized.
@return The Async Keyed State Backend for the given job and operator.
@throws Exception This method may forward all exceptions that occur while instantiating the
backend. | createAsyncKeyedStateBackend | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | Apache-2.0 |
@Experimental
default boolean supportsAsyncKeyedStateBackend() {
return false;
} | Tells if a state backend supports the {@link AsyncKeyedStateBackend}.
<p>If a state backend supports {@code AsyncKeyedStateBackend}, it could use {@link
#createAsyncKeyedStateBackend(KeyedStateBackendParameters)} to create an async keyed state
backend to access <b>keyed state</b> asynchronously.
@return If the state backend supports {@link AsyncKeyedStateBackend}. | supportsAsyncKeyedStateBackend | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | Apache-2.0 |
default boolean useManagedMemory() {
return false;
} | Whether the state backend uses Flink's managed memory. | useManagedMemory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackend.java | Apache-2.0 |
@Nonnull
public static StateBackend loadStateBackendFromConfig(
ReadableConfig config, ClassLoader classLoader, @Nullable Logger logger)
throws IllegalConfigurationException, DynamicCodeLoadingException, IOException {
checkNotNull(config, "config");
checkNotNull(classLoader, "classLoader");
final String backendName = config.get(StateBackendOptions.STATE_BACKEND);
// by default the factory class is the backend name
String factoryClassName = backendName;
switch (backendName.toLowerCase()) {
case HASHMAP_STATE_BACKEND_NAME:
HashMapStateBackend hashMapStateBackend =
new HashMapStateBackendFactory().createFromConfig(config, classLoader);
if (logger != null) {
logger.info("State backend is set to heap memory {}", hashMapStateBackend);
}
return hashMapStateBackend;
case ROCKSDB_STATE_BACKEND_NAME:
factoryClassName = ROCKSDB_STATE_BACKEND_FACTORY;
// fall through to the case that uses reflection to load the backend
// that way we can keep RocksDB in a separate module
break;
case FORST_STATE_BACKEND_NAME:
factoryClassName = FORST_STATE_BACKEND_FACTORY;
// fall through to the case that uses reflection to load the backend
// that way we can keep ForSt in a separate module
break;
}
// The reflection loading path
if (logger != null) {
logger.info("Loading state backend via factory {}", factoryClassName);
}
StateBackendFactory<?> factory;
try {
@SuppressWarnings("rawtypes")
Class<? extends StateBackendFactory> clazz =
Class.forName(factoryClassName, false, classLoader)
.asSubclass(StateBackendFactory.class);
factory = clazz.newInstance();
} catch (ClassNotFoundException e) {
throw new DynamicCodeLoadingException(
"Cannot find configured state backend factory class: " + backendName, e);
} catch (ClassCastException | InstantiationException | IllegalAccessException e) {
throw new DynamicCodeLoadingException(
"The class configured under '"
+ StateBackendOptions.STATE_BACKEND.key()
+ "' is not a valid state backend factory ("
+ backendName
+ ')',
e);
}
return factory.createFromConfig(config, classLoader);
} | Loads the unwrapped state backend from the configuration, from the parameter 'state.backend',
as defined in {@link StateBackendOptions#STATE_BACKEND}.
<p>The state backends can be specified either via their shortcut name, or via the class name
of a {@link StateBackendFactory}. If a StateBackendFactory class name is specified, the
factory is instantiated (via its zero-argument constructor) and its {@link
StateBackendFactory#createFromConfig(ReadableConfig, ClassLoader)} method is called.
<p>Recognized shortcut names are '{@value StateBackendLoader#HASHMAP_STATE_BACKEND_NAME}',
'{@value StateBackendLoader#ROCKSDB_STATE_BACKEND_NAME}'
@param config The configuration to load the state backend from
@param classLoader The class loader that should be used to load the state backend
@param logger Optionally, a logger to log actions to (may be null)
@return The instantiated state backend.
@throws DynamicCodeLoadingException Thrown if a state backend factory is configured and the
factory class was not found or the factory could not be instantiated
@throws IllegalConfigurationException May be thrown by the StateBackendFactory when creating
/ configuring the state backend in the factory
@throws IOException May be thrown by the StateBackendFactory when instantiating the state
backend | loadStateBackendFromConfig | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | Apache-2.0 |
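For the non-shortcut backends, the loader above resolves the factory class by name against a class loader, verifies it implements `StateBackendFactory`, and instantiates it via the zero-argument constructor. A self-contained sketch of the same reflective-loading shape, using a JDK class as a stand-in target (all names here are illustrative):

```java
import java.util.List;

public class ReflectiveLoadDemo {
    // Resolve a class by name, check it against the expected supertype,
    // and instantiate it via its no-arg constructor -- the same shape as
    // the factory loading in loadStateBackendFromConfig.
    static List<?> loadListImpl(String className, ClassLoader loader) throws Exception {
        Class<? extends List> clazz =
                Class.forName(className, false, loader).asSubclass(List.class);
        return clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        List<?> l = loadListImpl(
                "java.util.ArrayList", ReflectiveLoadDemo.class.getClassLoader());
        System.out.println(l.isEmpty());
    }
}
```

`asSubclass` turns a wrong class name into a `ClassCastException` at load time rather than a failure at first use, which is why the loader can report a precise "not a valid state backend factory" error.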
private static StateBackend loadFromApplicationOrConfigOrDefaultInternal(
@Nullable StateBackend fromApplication,
Configuration jobConfig,
Configuration clusterConfig,
ClassLoader classLoader,
@Nullable Logger logger)
throws IllegalConfigurationException, DynamicCodeLoadingException, IOException {
checkNotNull(jobConfig, "jobConfig");
checkNotNull(clusterConfig, "clusterConfig");
checkNotNull(classLoader, "classLoader");
// Job level config can override the cluster level config.
Configuration mergedConfig = new Configuration(clusterConfig);
mergedConfig.addAll(jobConfig);
final StateBackend backend;
// In Flink 2.0, the state backend from the application will not be supported anymore.
// (1) the application defined state backend has precedence
if (fromApplication != null) {
// see if this is supposed to pick up additional configuration parameters
if (fromApplication instanceof ConfigurableStateBackend) {
// needs to pick up configuration
if (logger != null) {
logger.info(
"Using job/cluster config to configure application-defined state backend: {}",
fromApplication);
}
backend =
((ConfigurableStateBackend) fromApplication)
// Use cluster config for backwards compatibility.
.configure(clusterConfig, classLoader);
} else {
// keep as is!
backend = fromApplication;
}
if (logger != null) {
logger.info("Using application-defined state backend: {}", backend);
}
} else {
// (2) check if the config defines a state backend
backend = loadStateBackendFromConfig(mergedConfig, classLoader, logger);
}
return backend;
} | Checks if an application-defined state backend is given, and if not, loads the state backend
from the configuration, from the parameter 'state.backend', as defined in {@link
StateBackendOptions#STATE_BACKEND}. If no state backend is configured, this instantiates the
default state backend (the {@link HashMapStateBackend}).
<p>If an application-defined state backend is found, and the state backend is a {@link
ConfigurableStateBackend}, this methods calls {@link
ConfigurableStateBackend#configure(ReadableConfig, ClassLoader)} on the state backend.
<p>Refer to {@link #loadStateBackendFromConfig(ReadableConfig, ClassLoader, Logger)} for
details on how the state backend is loaded from the configuration.
@param jobConfig The job configuration to load the state backend from
@param clusterConfig The cluster configuration to load the state backend from
@param classLoader The class loader that should be used to load the state backend
@param logger Optionally, a logger to log actions to (may be null)
@return The instantiated state backend.
@throws DynamicCodeLoadingException Thrown if a state backend factory is configured and the
factory class was not found or the factory could not be instantiated
@throws IllegalConfigurationException May be thrown by the StateBackendFactory when creating
/ configuring the state backend in the factory
@throws IOException May be thrown by the StateBackendFactory when instantiating the state
backend | loadFromApplicationOrConfigOrDefaultInternal | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | Apache-2.0 |
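The merge at the top of the method is "cluster config first, then job config on top", so job-level keys win. That precedence can be sketched with a plain map (the keys below are hypothetical examples, not authoritative option names):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigMergeDemo {
    // Job-level settings override cluster-level ones, mirroring
    // "new Configuration(clusterConfig); mergedConfig.addAll(jobConfig)".
    static Map<String, String> merge(Map<String, String> cluster, Map<String, String> job) {
        Map<String, String> merged = new HashMap<>(cluster); // start from cluster defaults
        merged.putAll(job);                                  // job entries take precedence
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> cluster = new HashMap<>();
        cluster.put("state.backend", "hashmap");
        cluster.put("some.other.key", "cluster-value");
        Map<String, String> job = new HashMap<>();
        job.put("state.backend", "rocksdb"); // the job overrides the cluster choice
        System.out.println(merge(cluster, job).get("state.backend"));
    }
}
```

Keys the job does not set fall through to the cluster defaults unchanged.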
public static StateBackend fromApplicationOrConfigOrDefault(
@Nullable StateBackend fromApplication,
Configuration jobConfig,
Configuration clusterConfig,
ClassLoader classLoader,
@Nullable Logger logger)
throws IllegalConfigurationException, DynamicCodeLoadingException, IOException {
StateBackend rootBackend =
loadFromApplicationOrConfigOrDefaultInternal(
fromApplication, jobConfig, clusterConfig, classLoader, logger);
boolean enableChangeLog =
jobConfig
.getOptional(StateChangelogOptions.ENABLE_STATE_CHANGE_LOG)
.orElse(clusterConfig.get(StateChangelogOptions.ENABLE_STATE_CHANGE_LOG));
StateBackend backend;
if (enableChangeLog) {
backend = wrapStateBackend(rootBackend, classLoader, CHANGELOG_STATE_BACKEND);
LOG.info(
"State backend loader loads {} to delegate {}",
backend.getClass().getSimpleName(),
rootBackend.getClass().getSimpleName());
} else {
backend = rootBackend;
LOG.info(
"State backend loader loads the state backend as {}",
backend.getClass().getSimpleName());
}
return backend;
} | This is the state backend loader that loads a {@link DelegatingStateBackend} wrapping the
state backend loaded from {@link
StateBackendLoader#loadFromApplicationOrConfigOrDefaultInternal} when delegation is enabled.
If delegation is not enabled, the underlying wrapped state backend is returned instead.
@param fromApplication StateBackend defined from application
@param jobConfig The job level configuration to load the state backend from
@param clusterConfig The cluster level configuration to load the state backend from
@param classLoader The class loader that should be used to load the state backend
@param logger Optionally, a logger to log actions to (may be null)
@return The instantiated state backend.
@throws DynamicCodeLoadingException Thrown if a state backend (factory) is configured and the
(factory) class was not found or could not be instantiated
@throws IllegalConfigurationException May be thrown by the StateBackendFactory when creating
/ configuring the state backend in the factory
@throws IOException May be thrown by the StateBackendFactory when instantiating the state
backend | fromApplicationOrConfigOrDefault | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | Apache-2.0 |
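The changelog flag above is read from the job config when explicitly set there, else it falls back to the cluster config. That `Optional`-based fallback can be sketched in isolation (method names here are illustrative):

```java
import java.util.Optional;

public class OptionalFallbackDemo {
    // A job-level setting wins when present; otherwise fall back to the
    // cluster-level value, as in the ENABLE_STATE_CHANGE_LOG lookup.
    static boolean resolveFlag(Optional<Boolean> jobSetting, boolean clusterValue) {
        return jobSetting.orElse(clusterValue);
    }

    public static void main(String[] args) {
        System.out.println(resolveFlag(Optional.of(true), false));  // job setting wins
        System.out.println(resolveFlag(Optional.empty(), false));   // cluster value used
    }
}
```

Note the asymmetry with the plain merge: using `getOptional(...).orElse(...)` distinguishes "job explicitly set the flag to false" from "job did not set the flag at all".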
public static boolean stateBackendFromApplicationOrConfigOrDefaultUseManagedMemory(
Configuration config,
Optional<Boolean> stateBackendFromApplicationUsesManagedMemory,
ClassLoader classLoader) {
checkNotNull(config, "config");
// (1) the application defined state backend has precedence
if (stateBackendFromApplicationUsesManagedMemory.isPresent()) {
return stateBackendFromApplicationUsesManagedMemory.get();
}
// (2) check if the config defines a state backend
try {
final StateBackend fromConfig = loadStateBackendFromConfig(config, classLoader, LOG);
return fromConfig.useManagedMemory();
} catch (IllegalConfigurationException | DynamicCodeLoadingException | IOException e) {
LOG.warn(
"Cannot decide whether state backend uses managed memory. Will reserve managed memory by default.",
e);
return true;
}
} | Checks whether state backend uses managed memory, without having to deserialize or load the
state backend.
@param config configuration to load the state backend from.
@param stateBackendFromApplicationUsesManagedMemory Whether the application-defined backend
uses Flink's managed memory. Empty if application has not defined a backend.
@param classLoader User code classloader.
@return Whether the state backend uses managed memory. | stateBackendFromApplicationOrConfigOrDefaultUseManagedMemory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateBackendLoader.java | Apache-2.0 |
public static StateHandleID randomStateHandleId() {
return new StateHandleID(UUID.randomUUID().toString());
} | Unique ID that allows for logical comparison between state handles.
<p>Two state handles that are considered as logically equal should always return the same ID
(whatever logically equal means is up to the implementation). For example, this could be based on
the string representation of the full filepath for a state that is based on a file. | randomStateHandleId | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateHandleID.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateHandleID.java | Apache-2.0 |
default void collectSizeStats(StateObjectSizeStatsCollector collector) {
collector.add(StateObjectLocation.UNKNOWN, getStateSize());
} | Collects statistics about state size and location from the state object.
@implNote default implementation reports {@link StateObject#getStateSize()} as size and
{@link StateObjectLocation#UNKNOWN} as location.
@param collector the statistics collector. | collectSizeStats | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java | Apache-2.0 |
public InputStream getStream() throws IOException {
if (creationException != null) {
throw new IOException(creationException);
}
return stream;
} | Returns a stream with the data of one state partition. | getStream | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StatePartitionStreamProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StatePartitionStreamProvider.java | Apache-2.0 |
public static <T> StateSerializerProvider<T> fromPreviousSerializerSnapshot(
TypeSerializerSnapshot<T> stateSerializerSnapshot) {
return new LazilyRegisteredStateSerializerProvider<>(stateSerializerSnapshot);
} | Creates a {@link StateSerializerProvider} for restored state from the previous serializer's
snapshot.
<p>Once a new serializer is registered for the state, it should be provided via the {@link
#registerNewSerializerForRestoredState(TypeSerializer)} method.
@param stateSerializerSnapshot the previous serializer's snapshot.
@param <T> the type of the state.
@return a new {@link StateSerializerProvider}. | fromPreviousSerializerSnapshot | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
public static <T> StateSerializerProvider<T> fromNewRegisteredSerializer(
TypeSerializer<T> registeredStateSerializer) {
return new EagerlyRegisteredStateSerializerProvider<>(registeredStateSerializer);
} | Creates a {@link StateSerializerProvider} from the registered state serializer.
<p>If the state is a restored one, and the previous serializer's snapshot is obtained later
on, it should be supplied via the {@link
#setPreviousSerializerSnapshotForRestoredState(TypeSerializerSnapshot)} method.
@param registeredStateSerializer the new state's registered serializer.
@param <T> the type of the state.
@return a new {@link StateSerializerProvider}. | fromNewRegisteredSerializer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
@Nonnull
public final TypeSerializer<T> currentSchemaSerializer() {
if (registeredSerializer != null) {
checkState(
!isRegisteredWithIncompatibleSerializer,
"Unable to provide a serializer with the current schema, because the restored state was "
+ "registered with a new serializer that has incompatible schema.");
return registeredSerializer;
}
// if we are not yet registered with a new serializer,
// we can just use the restore serializer to read / write the state.
return previousSchemaSerializer();
} | Gets the serializer that recognizes the current serialization schema of the state. This is
the serializer that should be used for regular state serialization and deserialization after
state has been restored.
<p>If this provider was created from a restored state's serializer snapshot, while a new
serializer (with a new schema) was not registered for the state (i.e., because the state was
never accessed after it was restored), then the schema of state remains identical. Therefore,
in this case, it is guaranteed that the serializer returned by this method is the same as the
one returned by {@link #previousSchemaSerializer()}.
<p>If this provider was created from a serializer instance, then this always returns that
same serializer instance. If later on a snapshot of the previous serializer is supplied via
{@link #setPreviousSerializerSnapshotForRestoredState(TypeSerializerSnapshot)}, then the
initially supplied serializer instance will be checked for compatibility.
@return a serializer that reads and writes in the current schema of the state. | currentSchemaSerializer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
@Nonnull
public final TypeSerializer<T> previousSchemaSerializer() {
if (cachedRestoredSerializer != null) {
return cachedRestoredSerializer;
}
if (previousSerializerSnapshot == null) {
throw new UnsupportedOperationException(
"This provider does not contain the state's previous serializer's snapshot. Cannot provide a serializer for previous schema.");
}
this.cachedRestoredSerializer = previousSerializerSnapshot.restoreSerializer();
return cachedRestoredSerializer;
} | Gets the serializer that recognizes the previous serialization schema of the state. This is
the serializer that should be used for restoring the state, i.e. when the state is still in
the previous serialization schema.
<p>This method only returns a serializer if this provider has the previous serializer's
snapshot. Otherwise, trying to access the previous schema serializer will fail with an
exception.
@return a serializer that reads and writes in the previous schema of the state. | previousSchemaSerializer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
@Nullable
public final TypeSerializerSnapshot<T> getPreviousSerializerSnapshot() {
return previousSerializerSnapshot;
} | Gets the previous serializer snapshot.
@return The previous serializer snapshot, or null if registered serializer was for a new
state, not a restored one. | getPreviousSerializerSnapshot | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
protected final void invalidateCurrentSchemaSerializerAccess() {
this.isRegisteredWithIncompatibleSerializer = true;
} | Invalidates access to the current schema serializer. This lets {@link
#currentSchemaSerializer()} fail when invoked.
<p>Access to the current schema serializer should be invalidated by the methods {@link
#registerNewSerializerForRestoredState(TypeSerializer)} or {@link
#setPreviousSerializerSnapshotForRestoredState(TypeSerializerSnapshot)} once the registered
serializer is determined to be incompatible. | invalidateCurrentSchemaSerializerAccess | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSerializerProvider.java | Apache-2.0 |
default TransformStrategy getFilterStrategy() {
return TransformStrategy.TRANSFORM_ALL;
} | Skip first null entries.
<p>While traversing collection entries, as an optimization, stops transforming once it
encounters the first non-null included entry and returns it plus the rest untouched. | getFilterStrategy | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSnapshotTransformer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSnapshotTransformer.java | Apache-2.0 |
public static long getStateSize(StateObject handle) {
return handle == null ? 0 : handle.getStateSize();
} | Returns the size of a state object.
@param handle The handle to the retrieved state | getStateSize | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateUtil.java | Apache-2.0 |
@SuppressWarnings("unchecked")
public static RuntimeException unexpectedStateHandleException(
Class<? extends StateObject> expectedStateHandleClass,
Class<? extends StateObject> actualStateHandleClass) {
return unexpectedStateHandleException(
new Class[] {expectedStateHandleClass}, actualStateHandleClass);
} | Creates an {@link RuntimeException} that signals that an operation did not get the type of
{@link StateObject} that was expected. This can mostly happen when a different {@link
StateBackend} from the one that was used for taking a checkpoint/savepoint is used when
restoring. | unexpectedStateHandleException | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/state/StateUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateUtil.java | Apache-2.0 |