public void addVirtualPartitionNode( Integer originalId, Integer virtualId, StreamPartitioner<?> partitioner, StreamExchangeMode exchangeMode) { if (virtualPartitionNodes.containsKey(virtualId)) { throw new IllegalStateException( "Already has virtual partition node with id " + virtualId); } virtualPartitionNodes.put(virtualId, new Tuple3<>(originalId, partitioner, exchangeMode)); }
Adds a new virtual node that is used to connect a downstream vertex to an input with a certain partitioning. <p>When adding an edge from the virtual node to a downstream node, the connection will be made to the original node, but with the partitioning given here. @param originalId ID of the node that should be connected to. @param virtualId ID of the virtual node. @param partitioner The partitioner
addVirtualPartitionNode
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraph.java
Apache-2.0
public static int getNewIterationNodeId() { iterationIdCounter--; return iterationIdCounter; }
A generator that generates a {@link StreamGraph} from a graph of {@link Transformation}s. <p>This traverses the tree of {@code Transformations} starting from the sinks. At each transformation we recursively transform the inputs, then create a node in the {@code StreamGraph} and add edges from the input nodes to our newly created node. The transformation methods return the IDs of the nodes in the StreamGraph that represent the input transformation. Several IDs can be returned to be able to deal with feedback transformations and unions. <p>Partitioning, split/select and union don't create actual nodes in the {@code StreamGraph}. For these, we create a virtual node in the {@code StreamGraph} that holds the specific property, i.e. partitioning, selector and so on. When an edge is created from a virtual node to a downstream node, the {@code StreamGraph} resolves the ID of the original node and creates an edge in the graph with the desired property. For example, if you have this graph: <pre> Map-1 -&gt; HashPartition-2 -&gt; Map-3 </pre> <p>where the numbers represent transformation IDs, we first recurse all the way down. {@code Map-1} is transformed, i.e. we create a {@code StreamNode} with ID 1. Then we transform the {@code HashPartition}; for this, we create a virtual node with ID 4 that holds the property {@code HashPartition}. This transformation returns the ID 4. Then we transform {@code Map-3}. We add the edge {@code 4 -> 3}. The {@code StreamGraph} resolves the actual node with ID 1 and creates an edge {@code 1 -> 3} with the property HashPartition.
getNewIterationNodeId
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphGenerator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphGenerator.java
Apache-2.0
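The docstring above describes how a virtual partition node is resolved to its original node when an edge is added. A minimal stand-alone sketch of that resolution step (the map and the `resolve` helper are illustrative stand-ins, not Flink API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: virtual ids map back to the concrete StreamNode id, so an
// edge added from a virtual node is created from the resolved original node instead.
public class VirtualNodeDemo {
    // virtualId -> originalId (the partitioner/exchange mode held by the virtual
    // node is omitted in this sketch)
    static final Map<Integer, Integer> virtualNodes = new HashMap<>();

    static int resolve(int nodeId) {
        // Follow chained virtual ids until a concrete node id is reached.
        while (virtualNodes.containsKey(nodeId)) {
            nodeId = virtualNodes.get(nodeId);
        }
        return nodeId;
    }

    public static void main(String[] args) {
        // Map-1 -> HashPartition (virtual id 4, wrapping node 1) -> Map-3
        virtualNodes.put(4, 1);
        // Adding the edge 4 -> 3 actually creates the edge resolve(4) -> 3, i.e. 1 -> 3.
        System.out.println(resolve(4) + " -> 3"); // prints "1 -> 3"
    }
}
```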
private List<Collection<Integer>> getParentInputIds( @Nullable final Collection<Transformation<?>> parentTransformations) { final List<Collection<Integer>> allInputIds = new ArrayList<>(); if (parentTransformations == null) { return allInputIds; } for (Transformation<?> transformation : parentTransformations) { allInputIds.add(transform(transformation)); } return allInputIds; }
Returns a list of lists containing the ids of the nodes in the transformation graph that correspond to the provided transformations. Each transformation may have multiple nodes. <p>Parent transformations will be translated if they are not already translated. @param parentTransformations the transformations whose node ids to return. @return the nodeIds per transformation or an empty list if the {@code parentTransformations} are empty.
getParentInputIds
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphGenerator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphGenerator.java
Apache-2.0
private boolean generateNodeHash( StreamNode node, HashFunction hashFunction, Map<Integer, byte[]> hashes, boolean isChainingEnabled, StreamGraph streamGraph) { // Check for user-specified ID String userSpecifiedHash = node.getTransformationUID(); if (userSpecifiedHash == null) { // Check that all input nodes have their hashes computed for (StreamEdge inEdge : node.getInEdges()) { // If the input node has not been visited yet, the current // node will be visited again at a later point when all input // nodes have been visited and their hashes set. if (!hashes.containsKey(inEdge.getSourceId())) { return false; } } Hasher hasher = hashFunction.newHasher(); byte[] hash = generateDeterministicHash(node, hasher, hashes, isChainingEnabled, streamGraph); if (hashes.put(node.getId(), hash) != null) { // Sanity check throw new IllegalStateException( "Unexpected state. Tried to add node hash " + "twice. This is probably a bug in the JobGraph generator."); } return true; } else { Hasher hasher = hashFunction.newHasher(); byte[] hash = generateUserSpecifiedHash(node.getTransformationUID(), hasher); for (byte[] previousHash : hashes.values()) { if (Arrays.equals(previousHash, hash)) { throw new IllegalArgumentException( "Hash collision on user-specified ID " + "\"" + userSpecifiedHash + "\". " + "Most likely cause is a non-unique ID. Please check that all IDs " + "specified via `uid(String)` are unique."); } } if (hashes.put(node.getId(), hash) != null) { // Sanity check throw new IllegalStateException( "Unexpected state. Tried to add node hash " + "twice. This is probably a bug in the JobGraph generator."); } return true; } }
Generates a hash for the node and returns whether the operation was successful. @param node The node to generate the hash for @param hashFunction The hash function to use @param hashes The current state of generated hashes @return <code>true</code> if the node hash has been generated, <code>false</code> otherwise. If the operation is not successful, the hash needs to be generated at a later point when all input is available. @throws IllegalStateException If the node has a user-specified hash and is an intermediate node of a chain
generateNodeHash
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphHasherV2.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphHasherV2.java
Apache-2.0
private byte[] generateDeterministicHash( StreamNode node, Hasher hasher, Map<Integer, byte[]> hashes, boolean isChainingEnabled, StreamGraph streamGraph) { // Include stream node to hash. We use the current size of the computed // hashes as the ID. We cannot use the node's ID, because it is // assigned from a static counter. This will result in two identical // programs having different hashes. generateNodeLocalHash(hasher, hashes.size()); // Include chained nodes to hash for (StreamEdge outEdge : node.getOutEdges()) { if (isChainable(outEdge, isChainingEnabled, streamGraph)) { // Use the hash size again, because the nodes are chained to // this node. This does not add a hash for the chained nodes. generateNodeLocalHash(hasher, hashes.size()); } } byte[] hash = hasher.hash().asBytes(); // Make sure that all input nodes have their hash set before entering // this loop (calling this method). for (StreamEdge inEdge : node.getInEdges()) { byte[] otherHash = hashes.get(inEdge.getSourceId()); // Sanity check if (otherHash == null) { throw new IllegalStateException( "Missing hash for input node " + streamGraph.getSourceVertex(inEdge) + ". Cannot generate hash for " + node + "."); } for (int j = 0; j < hash.length; j++) { hash[j] = (byte) (hash[j] * 37 ^ otherHash[j]); } } if (LOG.isDebugEnabled()) { String udfClassName = ""; if (node.getOperatorFactory() instanceof UdfStreamOperatorFactory) { udfClassName = ((UdfStreamOperatorFactory) node.getOperatorFactory()) .getUserFunctionClassName(); } LOG.debug( "Generated hash '" + byteToHexString(hash) + "' for node " + "'" + node.toString() + "' {id: " + node.getId() + ", " + "parallelism: " + node.getParallelism() + ", " + "user function: " + udfClassName + "}"); } return hash; }
Generates a deterministic hash from node-local properties and input and output edges.
generateDeterministicHash
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphHasherV2.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/graph/StreamGraphHasherV2.java
Apache-2.0
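The loop in `generateDeterministicHash` folds every input node's hash into the node-local hash byte by byte (`hash[j] * 37 ^ otherHash[j]`), so the result depends on the whole upstream chain. A minimal stand-alone sketch of just that combine step (the helper name is illustrative):

```java
import java.util.Arrays;

// Illustrative sketch of the input-hash combination used above: each byte of the
// node's local hash is multiplied by 37 and XORed with the corresponding byte of
// an input node's hash.
public class HashCombineDemo {
    static byte[] combine(byte[] hash, byte[] otherHash) {
        byte[] out = Arrays.copyOf(hash, hash.length);
        for (int j = 0; j < out.length; j++) {
            out[j] = (byte) (out[j] * 37 ^ otherHash[j]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] local = {1, 2};
        byte[] input = {3, 4};
        // 1 * 37 ^ 3 = 38, 2 * 37 ^ 4 = 78
        System.out.println(Arrays.toString(combine(local, input))); // [38, 78]
    }
}
```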
protected MailboxExecutor getMailboxExecutor() { return checkNotNull( mailboxExecutor, "Factory does not implement %s", YieldingOperatorFactory.class); }
Provides the mailbox executor iff this factory implements {@link YieldingOperatorFactory}.
getMailboxExecutor
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperatorFactory.java
Apache-2.0
@Experimental default void processRecordAttributes(RecordAttributes recordAttributes) throws Exception {}
Processes a {@link RecordAttributes} that arrived at this input. This method is guaranteed to not be called concurrently with other methods of the operator.
processRecordAttributes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/Input.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/Input.java
Apache-2.0
public long getInputMask() { return inputMask; }
@param inputMask -1 to mark that all inputs are selected.
getInputMask
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
public boolean isInputSelected(int inputId) { return (inputMask & (1L << (inputId - 1))) != 0; }
Tests if the input specified by {@code inputId} is selected. @param inputId The input id, see the description of {@code inputId} in {@link Builder#select(int)}. @return {@code true} if the input is selected, {@code false} otherwise.
isInputSelected
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
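The check in `isInputSelected` is a plain bit test: with 1-based input ids, input `i` is selected when bit `i - 1` of the mask is set. A minimal stand-alone sketch of that check (class and method names here are illustrative):

```java
// Illustrative sketch of the 1-based bitmask test described above.
public class InputMaskDemo {
    static boolean isInputSelected(long inputMask, int inputId) {
        // Input i is selected iff bit (i - 1) is set in the mask.
        return (inputMask & (1L << (inputId - 1))) != 0;
    }

    public static void main(String[] args) {
        long mask = 0b101L; // inputs 1 and 3 selected
        System.out.println(isInputSelected(mask, 1)); // true
        System.out.println(isInputSelected(mask, 2)); // false
        System.out.println(isInputSelected(mask, 3)); // true
    }
}
```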
public boolean areAllInputsSelected() { return inputMask == -1L; }
Tests if all inputs are selected. @return {@code true} if the input mask equals -1, {@code false} otherwise.
areAllInputsSelected
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
public int fairSelectNextIndexOutOf2(int availableInputsMask, int lastReadInputIndex) { return fairSelectNextIndexOutOf2((int) inputMask, availableInputsMask, lastReadInputIndex); }
Fairly selects one of the two inputs for reading. When {@code inputMask} includes two inputs and both inputs are available, they are selected alternately. Otherwise, the available input in {@code inputMask} is selected, or {@link InputSelection#NONE_AVAILABLE} is returned to indicate that no input is selected. <p>Note that this supports only two inputs for performance reasons. @param availableInputsMask The mask of all available inputs. @param lastReadInputIndex The index of the last read input. @return the index of the input to read, or {@link InputSelection#NONE_AVAILABLE} if {@code inputMask} is empty or the inputs in {@code inputMask} are unavailable.
fairSelectNextIndexOutOf2
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
public static int fairSelectNextIndexOutOf2( int selectionMask, int availableInputsMask, int lastReadInputIndex) { int combineMask = availableInputsMask & selectionMask; if (combineMask == 3) { return lastReadInputIndex == 0 ? 1 : 0; } else if (combineMask >= 0 && combineMask < 3) { return combineMask - 1; } throw new UnsupportedOperationException("Only two inputs are supported."); }
Fairly selects one of the two inputs for reading. When {@code inputMask} includes two inputs and both inputs are available, they are selected alternately. Otherwise, the available input in {@code inputMask} is selected, or {@link InputSelection#NONE_AVAILABLE} is returned to indicate that no input is selected. <p>Note that this supports only two inputs for performance reasons. @param selectionMask The mask of inputs that are selected. Note that -1 is interpreted as all of the 32 inputs being available. @param availableInputsMask The mask of all available inputs. @param lastReadInputIndex The index of the last read input. @return the index of the input to read, or {@link InputSelection#NONE_AVAILABLE} if {@code inputMask} is empty or the inputs in {@code inputMask} are unavailable.
fairSelectNextIndexOutOf2
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
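The fairness logic above is compact: when both selected inputs are available the combined mask is `3` and the choice alternates away from the last read index; when exactly one bit is set, `combineMask - 1` maps `1 -> 0` and `2 -> 1`; a combined mask of `0` yields `-1` (no input available). A stand-alone sketch of the same logic (the `NONE_AVAILABLE` constant here is a stand-in for `InputSelection.NONE_AVAILABLE`):

```java
// Illustrative sketch of the two-input fair selection logic described above.
public class FairSelectDemo {
    static final int NONE_AVAILABLE = -1; // stand-in for InputSelection.NONE_AVAILABLE

    static int fairSelectNextIndexOutOf2(int selectionMask, int availableMask, int lastRead) {
        int combineMask = availableMask & selectionMask;
        if (combineMask == 3) {
            // Both inputs selected and available: alternate away from the last read index.
            return lastRead == 0 ? 1 : 0;
        } else if (combineMask >= 0 && combineMask < 3) {
            // 0 -> -1 (none available), 1 -> index 0, 2 -> index 1.
            return combineMask - 1;
        }
        throw new UnsupportedOperationException("Only two inputs are supported.");
    }

    public static void main(String[] args) {
        System.out.println(fairSelectNextIndexOutOf2(3, 3, 0)); // both available: 1
        System.out.println(fairSelectNextIndexOutOf2(3, 2, 0)); // only input 2 available: 1
        System.out.println(fairSelectNextIndexOutOf2(3, 0, 0)); // none available: -1
    }
}
```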
public int fairSelectNextIndex(long availableInputsMask, int lastReadInputIndex) { return fairSelectNextIndex(inputMask, availableInputsMask, lastReadInputIndex); }
Fairly selects one of the available inputs for reading. @param availableInputsMask The mask of all available inputs. Note that -1 is interpreted as all of the 32 inputs being available. @param lastReadInputIndex The index of the last read input. @return the index of the input to read, or {@link InputSelection#NONE_AVAILABLE} if {@code inputMask} is empty or the inputs in {@code inputMask} are unavailable.
fairSelectNextIndex
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
public static Builder from(InputSelection selection) { Builder builder = new Builder(); builder.inputMask = selection.inputMask; return builder; }
Returns a {@code Builder} that uses the input mask of the specified {@code selection} as the initial mask.
from
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
public Builder select(int inputId) { if (inputId > 0 && inputId <= 64) { inputMask |= 1L << (inputId - 1); } else if (inputId == -1L) { inputMask = -1L; } else { throw new IllegalArgumentException( "The inputId must be in the range of 1 to 64, or be -1."); } return this; }
Selects an input identified by the given {@code inputId}. @param inputId the input id, numbered from 1 to 64, where `1` indicates the first input. As a special case, `-1` indicates all inputs. @return a reference to this object.
select
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InputSelection.java
Apache-2.0
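The builder's `select` sets bit `inputId - 1`, with `-1` as the special case selecting everything. A stand-alone sketch of that mask-building step (written as a pure helper rather than the builder itself):

```java
// Illustrative sketch of the mask-building done by the builder's select(int).
public class SelectBuilderDemo {
    static long select(long inputMask, int inputId) {
        if (inputId > 0 && inputId <= 64) {
            return inputMask | (1L << (inputId - 1)); // set bit (inputId - 1)
        } else if (inputId == -1) {
            return -1L; // all 64 bits set: every input selected
        }
        throw new IllegalArgumentException(
                "The inputId must be in the range of 1 to 64, or be -1.");
    }

    public static void main(String[] args) {
        long mask = 0L;
        mask = select(mask, 1);
        mask = select(mask, 3);
        System.out.println(Long.toBinaryString(mask)); // 101
        System.out.println(select(mask, -1) == -1L);   // true: all inputs selected
    }
}
```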
@Override public void advanceWatermark(long time) throws Exception { Preconditions.checkNotNull(asyncExecutionController); currentWatermark = time; InternalTimer<K, N> timer; while ((timer = eventTimeTimersQueue.peek()) != null && timer.getTimestamp() <= time && !cancellationContext.isCancelled()) { eventTimeTimersQueue.poll(); final InternalTimer<K, N> timerToTrigger = timer; maintainContextAndProcess( timerToTrigger, () -> triggerTarget.onEventTime(timerToTrigger)); taskIOMetricGroup.getNumFiredTimers().inc(); } }
Advances the watermark, firing any event-time timers whose timestamps are at or before it. @param time the watermark timestamp.
advanceWatermark
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceAsyncImpl.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceAsyncImpl.java
Apache-2.0
public boolean tryAdvanceWatermark( long time, InternalTimeServiceManager.ShouldStopAdvancingFn shouldStopAdvancingFn) throws Exception { currentWatermark = time; InternalTimer<K, N> timer; boolean interrupted = false; while ((timer = eventTimeTimersQueue.peek()) != null && timer.getTimestamp() <= time && !cancellationContext.isCancelled() && !interrupted) { keyContext.setCurrentKey(timer.getKey()); eventTimeTimersQueue.poll(); triggerTarget.onEventTime(timer); taskIOMetricGroup.getNumFiredTimers().inc(); // Check if we should stop advancing after at least one iteration to guarantee progress // and prevent a potential starvation. interrupted = shouldStopAdvancingFn.test(); } return !interrupted; }
@return true if the following watermarks can be processed immediately, false if the firing of timers should be interrupted as soon as possible.
tryAdvanceWatermark
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimerServiceImpl.java
Apache-2.0
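The `tryAdvanceWatermark` loop above fires due timers one at a time and consults a stop predicate only after each firing, so at least one timer always makes progress before an interruption. A stand-alone sketch of that loop shape (the timestamp queue and predicate are stand-ins for the timer queue and `ShouldStopAdvancingFn`):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.BooleanSupplier;

// Illustrative sketch of the "fire due timers, but allow interruption after each
// firing" loop used by tryAdvanceWatermark.
public class AdvanceDemo {
    static boolean tryAdvance(Queue<Long> timers, long watermark, BooleanSupplier shouldStop) {
        boolean interrupted = false;
        Long t;
        while ((t = timers.peek()) != null && t <= watermark && !interrupted) {
            timers.poll(); // fire the timer (stand-in for triggerTarget.onEventTime)
            // Checked only after a firing, guaranteeing progress per invocation.
            interrupted = shouldStop.getAsBoolean();
        }
        return !interrupted;
    }

    public static void main(String[] args) {
        Queue<Long> timers = new ArrayDeque<>(java.util.List.of(1L, 2L, 5L));
        System.out.println(tryAdvance(timers, 3L, () -> false)); // true: fired 1 and 2
        System.out.println(timers); // [5]
    }
}
```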
public static <K, N> InternalTimersSnapshotWriter getWriterForVersion( int version, InternalTimersSnapshot<K, N> timersSnapshot, TypeSerializer<K> keySerializer, TypeSerializer<N> namespaceSerializer) { switch (version) { case NO_VERSION: case 1: throw new IllegalStateException( "Since Flink 1.17 not versioned (<= Flink 1.4.0) and version 1 (< Flink 1.8.0) of " + "InternalTimersSnapshotWriter is no longer supported."); case InternalTimerServiceSerializationProxy.VERSION: return new InternalTimersSnapshotWriterV2<>( timersSnapshot, keySerializer, namespaceSerializer); default: // guard for future throw new IllegalStateException( "Unrecognized internal timers snapshot writer version: " + version); } }
Readers and writers for different versions of the {@link InternalTimersSnapshot}. Outdated formats are also kept here to document the format history.
getWriterForVersion
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimersSnapshotReaderWriters.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/InternalTimersSnapshotReaderWriters.java
Apache-2.0
private void waitCacheNotEmpty() { try { cacheNotEmpty.await(); } catch (InterruptedException e) { ExceptionUtils.rethrow(e); } }
Wait until the cache is not empty.
waitCacheNotEmpty
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/MapPartitionIterator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/MapPartitionIterator.java
Apache-2.0
private void waitCacheNotFull() { try { cacheNotFull.await(); } catch (InterruptedException e) { ExceptionUtils.rethrow(e); } }
Wait until the cache is not full.
waitCacheNotFull
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/MapPartitionIterator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/MapPartitionIterator.java
Apache-2.0
public boolean isInternalSorterSupported() { return internalSorterSupported; }
Returns true iff the operator uses an internal sorter to sort inputs by key. When it is true, the input records will not be sorted externally before being fed into this operator.
isInternalSorterSupported
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/OperatorAttributes.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/OperatorAttributes.java
Apache-2.0
@Override public InputFormat<OUT, InputSplit> getInputFormat() { return operator.getUserFunction().getFormat(); }
An input format source operator factory that simply wraps an existing {@link StreamSource}. @param <OUT> The output type of the operator
getInputFormat
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SimpleInputFormatOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SimpleInputFormatOperatorFactory.java
Apache-2.0
@Override public OutputFormat<IN> getOutputFormat() { return outputFormat; }
A simple operator factory which creates an operator containing an {@link OutputFormat}. @param <IN> The input type of the operator.
getOutputFormat
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SimpleOutputFormatOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SimpleOutputFormatOperatorFactory.java
Apache-2.0
@SuppressWarnings("unchecked") private static <T, SplitT extends SourceSplit> SourceOperator<T, SplitT> instantiateSourceOperator( StreamOperatorParameters<T> parameters, FunctionWithException<SourceReaderContext, SourceReader<T, ?>, Exception> readerFactory, OperatorEventGateway eventGateway, SimpleVersionedSerializer<?> splitSerializer, WatermarkStrategy<T> watermarkStrategy, ProcessingTimeService timeService, Configuration config, String localHostName, boolean emitProgressiveWatermarks, CanEmitBatchOfRecordsChecker canEmitBatchOfRecords, Collection<? extends WatermarkDeclaration> watermarkDeclarations) { // jumping through generics hoops: cast the generics away to then cast them back more // strictly typed final FunctionWithException<SourceReaderContext, SourceReader<T, SplitT>, Exception> typedReaderFactory = (FunctionWithException< SourceReaderContext, SourceReader<T, SplitT>, Exception>) (FunctionWithException<?, ?, ?>) readerFactory; final SimpleVersionedSerializer<SplitT> typedSplitSerializer = (SimpleVersionedSerializer<SplitT>) splitSerializer; Map<String, Boolean> watermarkIsAlignedMap = WatermarkUtils.convertToInternalWatermarkDeclarations( new HashSet<>(watermarkDeclarations)) .stream() .collect( Collectors.toMap( AbstractInternalWatermarkDeclaration::getIdentifier, AbstractInternalWatermarkDeclaration::isAligned)); return new SourceOperator<>( parameters, typedReaderFactory, eventGateway, typedSplitSerializer, watermarkStrategy, timeService, config, localHostName, emitProgressiveWatermarks, canEmitBatchOfRecords, watermarkIsAlignedMap); }
This is a utility method to conjure up a "SplitT" generics variable binding so that we can construct the SourceOperator without resorting to "all raw types". That way, this method puts all "type non-safety" in one place and allows maintaining as much generics safety in the main code as possible.
instantiateSourceOperator
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SourceOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/SourceOperatorFactory.java
Apache-2.0
@Override public void processElement(StreamRecord<IN> element) throws Exception { if (userFunction.filter(element.getValue())) { output.collect(element); } }
A {@link StreamOperator} for executing {@link FilterFunction FilterFunctions}.
processElement
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamFilter.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamFilter.java
Apache-2.0
public void setKeyedStateStore(@Nullable KeyedStateStore keyedStateStore) { this.keyedStateStore = keyedStateStore; }
The task environment running the operator.
setKeyedStateStore
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
Apache-2.0
public GlobalAggregateManager getGlobalAggregateManager() { return taskEnvironment.getGlobalAggregateManager(); }
Returns the global aggregate manager for the current job. @return The global aggregate manager.
getGlobalAggregateManager
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
Apache-2.0
public String getOperatorUniqueID() { return operatorUniqueID; }
The returned value is guaranteed to be unique between operators within the same job and to remain stable across job submissions. <p>This operation is currently only supported in Streaming (DataStream) contexts. @return String representation of the operator's unique id.
getOperatorUniqueID
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
Apache-2.0
public boolean isCheckpointingEnabled() { return streamConfig.isCheckpointingEnabled(); }
Returns true if checkpointing is enabled for the running job. @return true if checkpointing is enabled.
isCheckpointingEnabled
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamingRuntimeContext.java
Apache-2.0
@Override public void processElement(StreamRecord<IN> element) throws Exception { output.collect(element.replace(userFunction.map(element.getValue()))); }
A {@link StreamOperator} for executing {@link MapFunction MapFunctions}.
processElement
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamMap.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamMap.java
Apache-2.0
@Experimental default OperatorAttributes getOperatorAttributes() { return new OperatorAttributesBuilder().build(); }
Called to get the OperatorAttributes of the operator. If there is no defined attribute, a default OperatorAttributes is built. @return OperatorAttributes of the operator.
getOperatorAttributes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperator.java
Apache-2.0
default boolean isOutputTypeConfigurable() { return false; }
Whether the stream operator needs access to the output type information during {@link StreamGraph} generation. This can be useful for cases where the output type is specified by the returns method and, thus, only known after the stream operator has been created.
isOutputTypeConfigurable
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactory.java
Apache-2.0
default boolean isInputTypeConfigurable() { return false; }
Whether the stream operator needs to be configured with the data type it will operate on.
isInputTypeConfigurable
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactory.java
Apache-2.0
public static <OUT, OP extends StreamOperator<OUT>> Tuple2<OP, Optional<ProcessingTimeService>> createOperator( StreamOperatorFactory<OUT> operatorFactory, StreamTask<OUT, ?> containingTask, StreamConfig configuration, Output<StreamRecord<OUT>> output, OperatorEventDispatcher operatorEventDispatcher) { MailboxExecutor mailboxExecutor = containingTask .getMailboxExecutorFactory() .createExecutor(configuration.getChainIndex()); if (operatorFactory instanceof YieldingOperatorFactory) { ((YieldingOperatorFactory<?>) operatorFactory).setMailboxExecutor(mailboxExecutor); } final Supplier<ProcessingTimeService> processingTimeServiceFactory = () -> containingTask .getProcessingTimeServiceFactory() .createProcessingTimeService(mailboxExecutor); final ProcessingTimeService processingTimeService; if (operatorFactory instanceof ProcessingTimeServiceAware) { processingTimeService = processingTimeServiceFactory.get(); ((ProcessingTimeServiceAware) operatorFactory) .setProcessingTimeService(processingTimeService); } else { processingTimeService = null; } // TODO: what to do with ProcessingTimeServiceAware? OP op = operatorFactory.createStreamOperator( new StreamOperatorParameters<>( containingTask, configuration, output, processingTimeService != null ? () -> processingTimeService : processingTimeServiceFactory, operatorEventDispatcher, mailboxExecutor)); if (op instanceof YieldingOperator) { ((YieldingOperator<?>) op).setMailboxExecutor(mailboxExecutor); } return new Tuple2<>(op, Optional.ofNullable(processingTimeService)); }
Creates a new operator using a factory and makes sure that all special factory traits are properly handled. @param operatorFactory the operator factory. @param containingTask the containing task. @param configuration the configuration of the operator. @param output the output of the operator. @param operatorEventDispatcher the operator event dispatcher for communication between operator and coordinators. @return a newly created and configured operator, and the {@link ProcessingTimeService} instance it can access.
createOperator
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactoryUtil.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactoryUtil.java
Apache-2.0
protected void markCanceledOrStopped() { this.canceledOrStopped = true; }
Marks this source as canceled or stopped. <p>This indicates that any exit of the {@link #run(Object, Output, OperatorChain)} method cannot be interpreted as the result of a finite source.
markCanceledOrStopped
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamSource.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamSource.java
Apache-2.0
protected boolean isCanceledOrStopped() { return canceledOrStopped; }
Checks whether the source has been canceled or stopped. @return True if the source is canceled or stopped, false if not.
isCanceledOrStopped
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamSource.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/StreamSource.java
Apache-2.0
@Experimental default void processRecordAttributes1(RecordAttributes recordAttributes) throws Exception {}
Processes a {@link RecordAttributes} that arrived on the first input of this operator. This method is guaranteed to not be called concurrently with other methods of the operator.
processRecordAttributes1
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/TwoInputStreamOperator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/TwoInputStreamOperator.java
Apache-2.0
@Experimental default void processRecordAttributes2(RecordAttributes recordAttributes) throws Exception {}
Processes a {@link RecordAttributes} that arrived on the second input of this operator. This method is guaranteed to not be called concurrently with other methods of the operator.
processRecordAttributes2
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/TwoInputStreamOperator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/TwoInputStreamOperator.java
Apache-2.0
protected void revert(long checkpointedOffset) { while (offset > checkpointedOffset) { buffer.removeLast(); offset--; } }
Revert the buffer back to the result whose offset is `checkpointedOffset`.
revert
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/AbstractCollectResultBuffer.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/AbstractCollectResultBuffer.java
Apache-2.0
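In isolation, the revert logic above is a bounded rollback of a deque paired with an offset counter. A self-contained sketch follows; the class and field names are illustrative, not Flink's, and a plain `ArrayDeque` stands in for the result buffer:

```java
import java.util.ArrayDeque;

public class RevertSketch {
    private final ArrayDeque<String> buffer = new ArrayDeque<>();
    private long offset; // offset just past the last buffered result

    void add(String result) {
        buffer.addLast(result);
        offset++;
    }

    // Drop every result received after the checkpointed offset, as in revert().
    void revert(long checkpointedOffset) {
        while (offset > checkpointedOffset) {
            buffer.removeLast();
            offset--;
        }
    }

    int size() {
        return buffer.size();
    }

    public static void main(String[] args) {
        RevertSketch b = new RevertSketch();
        b.add("a");
        b.add("b");
        b.add("c");
        b.revert(1); // roll back to the state at offset 1
        System.out.println(b.size()); // 1
    }
}
```

Reverting to an offset at or above the current one is a no-op, which is why the loop condition is strict.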
@Override public void close() throws Exception { iterator.close(); }
A pair of an {@link Iterator} to receive results from a streaming application and a {@link JobClient} to interact with the program.
close
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/ClientAndIterator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/ClientAndIterator.java
Apache-2.0
public InetSocketAddress getAddress() { return address; }
An {@link OperatorEvent} that passes the socket server address in the sink to the coordinator.
getAddress
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkAddressEvent.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkAddressEvent.java
Apache-2.0
@Override public void endInput(int inputId) throws Exception { passThroughInputsIndices.remove(inputId); }
An {@link InputSelectable} that indicates which input contains the current smallest element of all sorting inputs. Should be used by the {@link StreamInputProcessor} to choose the next input to consume from.
endInput
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/sort/MultiInputSortingDataInput.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/sort/MultiInputSortingDataInput.java
Apache-2.0
public void setup(AsyncExecutionController<K> asyncExecutionController) { if (asyncExecutionController != null) { this.asyncExecutionController = asyncExecutionController; } }
Set up the async execution controller.
setup
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/sorted/state/BatchExecutionInternalTimeServiceWithAsyncState.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/operators/sorted/state/BatchExecutionInternalTimeServiceWithAsyncState.java
Apache-2.0
public List<TypeInformation<?>> getInputTypes() { return inputs.stream().map(Transformation::getOutputType).collect(Collectors.toList()); }
Returns the {@code TypeInformation} for the elements from the inputs.
getInputTypes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/AbstractMultipleInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/AbstractMultipleInputTransformation.java
Apache-2.0
@VisibleForTesting public StreamSink<T> getOperator() { return (StreamSink<T>) ((SimpleOperatorFactory) operatorFactory).getOperator(); }
Creates a new {@code LegacySinkTransformation} from the given input {@code Transformation}. @param input The input {@code Transformation} @param name The name of the {@code Transformation}, this will be shown in Visualizations and the Log @param operator The sink operator @param parallelism The parallelism of this {@code LegacySinkTransformation} @param parallelismConfigured If true, the parallelism of the transformation is explicitly set and should be respected. Otherwise the parallelism can be changed at runtime.
getOperator
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
Apache-2.0
public StreamOperatorFactory<Object> getOperatorFactory() { return operatorFactory; }
Returns the {@link StreamOperatorFactory} of this {@code LegacySinkTransformation}.
getOperatorFactory
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
Apache-2.0
public void setStateKeySelector(KeySelector<T, ?> stateKeySelector) { this.stateKeySelector = stateKeySelector; updateManagedMemoryStateBackendUseCase(stateKeySelector != null); }
Sets the {@link KeySelector} that must be used for partitioning keyed state of this Sink. @param stateKeySelector The {@code KeySelector} to set
setStateKeySelector
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/LegacySinkTransformation.java
Apache-2.0
public MultipleInputTransformation<OUT> addInput(Transformation<?> input) { inputs.add(input); return this; }
{@link AbstractMultipleInputTransformation} implementation for non-keyed streams.
addInput
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/MultipleInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/MultipleInputTransformation.java
Apache-2.0
public void setStateKeySelector(KeySelector<IN, ?> stateKeySelector) { this.stateKeySelector = stateKeySelector; updateManagedMemoryStateBackendUseCase(stateKeySelector != null); }
Sets the {@link KeySelector} that must be used for partitioning keyed state of this operation. @param stateKeySelector The {@code KeySelector} to set
setStateKeySelector
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/OneInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/OneInputTransformation.java
Apache-2.0
public StreamPartitioner<T> getPartitioner() { return partitioner; }
Returns the {@link StreamPartitioner} that must be used for partitioning the elements of the input {@link Transformation}.
getPartitioner
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/PartitionTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/PartitionTransformation.java
Apache-2.0
public LineageVertex getLineageVertex() { return lineageVertex; }
Returns the lineage vertex of this {@code Transformation}.
getLineageVertex
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TransformationWithLineage.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TransformationWithLineage.java
Apache-2.0
public void setLineageVertex(LineageVertex lineageVertex) { this.lineageVertex = lineageVertex; }
Change the lineage vertex of this {@code Transformation}.
setLineageVertex
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TransformationWithLineage.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TransformationWithLineage.java
Apache-2.0
public Transformation<IN1> getInput1() { return input1; }
Returns the first input {@code Transformation} of this {@code TwoInputTransformation}.
getInput1
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
Apache-2.0
public TypeInformation<IN1> getInputType1() { return input1.getOutputType(); }
Returns the {@code TypeInformation} for the elements from the first input.
getInputType1
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
Apache-2.0
public void setStateKeySelectors( KeySelector<IN1, ?> stateKeySelector1, KeySelector<IN2, ?> stateKeySelector2) { this.stateKeySelector1 = stateKeySelector1; this.stateKeySelector2 = stateKeySelector2; updateManagedMemoryStateBackendUseCase( stateKeySelector1 != null || stateKeySelector2 != null); }
Sets the {@link KeySelector KeySelectors} that must be used for partitioning keyed state of this transformation. @param stateKeySelector1 The {@code KeySelector} to set for the first input @param stateKeySelector2 The {@code KeySelector} to set for the second input
setStateKeySelectors
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
Apache-2.0
public KeySelector<IN1, ?> getStateKeySelector1() { return stateKeySelector1; }
Returns the {@code KeySelector} that must be used for partitioning keyed state in this Operation for the first input. @see #setStateKeySelectors
getStateKeySelector1
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/transformations/TwoInputTransformation.java
Apache-2.0
public static GlobalWindows create() { return new GlobalWindows(new NeverTrigger()); }
Creates a {@link WindowAssigner} that assigns all elements to the same {@link GlobalWindow}. The window is only useful if you also specify a custom trigger. Otherwise, the window will never be triggered and no computation will be performed.
create
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/GlobalWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/GlobalWindows.java
Apache-2.0
public static GlobalWindows createWithEndOfStreamTrigger() { return new GlobalWindows(new EndOfStreamTrigger()); }
Creates a {@link WindowAssigner} that assigns all elements to the same {@link GlobalWindow}; the window is triggered if and only if the input stream ends.
createWithEndOfStreamTrigger
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/GlobalWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/GlobalWindows.java
Apache-2.0
public static SlidingEventTimeWindows of(Duration size, Duration slide) { return new SlidingEventTimeWindows(size.toMillis(), slide.toMillis(), 0); }
Creates a new {@code SlidingEventTimeWindows} {@link WindowAssigner} that assigns elements to sliding time windows based on the element timestamp. @param size The size of the generated windows. @param slide The slide interval of the generated windows. @return The time policy.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingEventTimeWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingEventTimeWindows.java
Apache-2.0
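For intuition, the set of sliding windows covering a given timestamp can be computed with the same start-alignment arithmetic Flink uses internally. The standalone sketch below reimplements that formula rather than calling the Flink API; the method names are mine:

```java
import java.util.ArrayList;
import java.util.List;

public class SlidingAssignSketch {
    // Same alignment formula as TimeWindow.getWindowStartWithOffset.
    static long windowStart(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    // Returns the [start, end) pairs of all sliding windows containing the timestamp.
    static List<long[]> assign(long timestamp, long size, long slide) {
        List<long[]> windows = new ArrayList<>();
        long lastStart = windowStart(timestamp, 0, slide);
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            windows.add(new long[] {start, start + size});
        }
        return windows;
    }

    public static void main(String[] args) {
        // size=10, slide=5: timestamp 12 falls into [10,20) and [5,15)
        for (long[] w : assign(12, 10, 5)) {
            System.out.println(w[0] + " .. " + w[1]);
        }
    }
}
```

With `size = 10` and `slide = 5`, every timestamp belongs to exactly `size / slide = 2` windows, which is what the loop produces.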
public static SlidingProcessingTimeWindows of(Duration size, Duration slide) { return new SlidingProcessingTimeWindows(size.toMillis(), slide.toMillis(), 0); }
Creates a new {@code SlidingProcessingTimeWindows} {@link WindowAssigner} that assigns elements to sliding time windows based on the current processing time. @param size The size of the generated windows. @param slide The slide interval of the generated windows. @return The time policy.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingProcessingTimeWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/SlidingProcessingTimeWindows.java
Apache-2.0
public static TumblingEventTimeWindows of(Duration size) { return new TumblingEventTimeWindows(size.toMillis(), 0, WindowStagger.ALIGNED); }
Creates a new {@code TumblingEventTimeWindows} {@link WindowAssigner} that assigns elements to time windows based on the element timestamp. @param size The size of the generated windows. @return The time policy.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingEventTimeWindows.java
Apache-2.0
public static TumblingProcessingTimeWindows of(Duration size) { return new TumblingProcessingTimeWindows(size.toMillis(), 0, WindowStagger.ALIGNED); }
Creates a new {@code TumblingProcessingTimeWindows} {@link WindowAssigner} that assigns elements to time windows based on the current processing time. @param size The size of the generated windows. @return The time policy.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingProcessingTimeWindows.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/TumblingProcessingTimeWindows.java
Apache-2.0
@Override public long getStaggerOffset(final long currentProcessingTime, final long size) { return 0L; }
Default mode, all panes fire at the same time across all partitions.
getStaggerOffset
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
Apache-2.0
@Override public long getStaggerOffset(final long currentProcessingTime, final long size) { return (long) (ThreadLocalRandom.current().nextDouble() * size); }
Stagger offset is sampled from the uniform distribution U(0, WindowSize) when the first event is ingested in the partitioned operator.
getStaggerOffset
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
Apache-2.0
@Override public long getStaggerOffset(final long currentProcessingTime, final long size) { final long currentProcessingWindowStart = TimeWindow.getWindowStartWithOffset(currentProcessingTime, 0, size); return Math.max(0, currentProcessingTime - currentProcessingWindowStart); }
When the first event is received in the window operator, take the difference between the start of the window and current processing time as the offset. This way, windows are staggered based on when each parallel operator receives the first event.
getStaggerOffset
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/assigners/WindowStagger.java
Apache-2.0
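The three staggering modes above reduce to simple arithmetic over the current processing time and the window size. A standalone sketch (the method names mirror the enum constants but are my own; this does not use the Flink enum itself):

```java
import java.util.concurrent.ThreadLocalRandom;

public class StaggerSketch {
    // Same alignment formula as TimeWindow.getWindowStartWithOffset.
    static long windowStart(long time, long offset, long size) {
        return time - (time - offset + size) % size;
    }

    // ALIGNED: all panes fire together, no offset.
    static long aligned() {
        return 0L;
    }

    // RANDOM: offset sampled uniformly from [0, size).
    static long random(long size) {
        return (long) (ThreadLocalRandom.current().nextDouble() * size);
    }

    // NATURAL: distance from the current aligned window start.
    static long natural(long currentProcessingTime, long size) {
        long start = windowStart(currentProcessingTime, 0, size);
        return Math.max(0, currentProcessingTime - start);
    }

    public static void main(String[] args) {
        System.out.println(natural(1234, 1000)); // 234
    }
}
```

For a first event at processing time 1234 with a 1000 ms window, the natural offset is 234: the pane fires 234 ms later than the aligned schedule would.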
public static <W extends Window> CountEvictor<W> of(long maxCount) { return new CountEvictor<>(maxCount); }
Creates a {@code CountEvictor} that keeps the given number of elements. Eviction is done before the window function. @param maxCount The number of elements to keep in the pane.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/evictors/CountEvictor.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/evictors/CountEvictor.java
Apache-2.0
public static <W extends Window> CountEvictor<W> of(long maxCount, boolean doEvictAfter) { return new CountEvictor<>(maxCount, doEvictAfter); }
Creates a {@code CountEvictor} that keeps the given number of elements in the pane. Eviction is done before/after the window function based on the value of doEvictAfter. @param maxCount The number of elements to keep in the pane. @param doEvictAfter Whether to do eviction after the window function.
of
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/evictors/CountEvictor.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/evictors/CountEvictor.java
Apache-2.0
public boolean intersects(TimeWindow other) { return this.start <= other.end && this.end >= other.start; }
Returns {@code true} if this window intersects the given window or if this window is just after or before the given window.
intersects
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/windows/TimeWindow.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/api/windowing/windows/TimeWindow.java
Apache-2.0
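The predicate above is a standard interval-overlap check with inclusive bounds, which is what makes windows that merely touch count as intersecting (relevant when merging session windows). In isolation:

```java
public class IntersectsSketch {
    // Inclusive overlap test, as in TimeWindow#intersects: windows that merely
    // touch (aEnd == bStart) are also reported as intersecting.
    static boolean intersects(long aStart, long aEnd, long bStart, long bEnd) {
        return aStart <= bEnd && aEnd >= bStart;
    }

    public static void main(String[] args) {
        System.out.println(intersects(0, 10, 10, 20)); // true: touching windows
        System.out.println(intersects(0, 10, 11, 20)); // false: gap of 1
    }
}
```

A strict (`<` / `>`) variant would treat touching windows as disjoint; the inclusive form is deliberate here.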
@Override public DataInputStatus emitNext(DataOutput<T> output) throws Exception { while (true) { // get the stream element from the deserializer if (currentRecordDeserializer != null) { RecordDeserializer.DeserializationResult result; try { result = currentRecordDeserializer.getNextRecord(deserializationDelegate); } catch (IOException e) { throw new IOException( String.format("Can't get next record for channel %s", lastChannel), e); } if (result.isBufferConsumed()) { currentRecordDeserializer = null; } if (result.isFullRecord()) { final boolean breakBatchEmitting = processElement(deserializationDelegate.getInstance(), output); if (canEmitBatchOfRecords.check() && !breakBatchEmitting) { continue; } return DataInputStatus.MORE_AVAILABLE; } } Optional<BufferOrEvent> bufferOrEvent = checkpointedInputGate.pollNext(); if (bufferOrEvent.isPresent()) { // return to the mailbox after receiving a checkpoint barrier to avoid processing of // data after the barrier before checkpoint is performed for unaligned checkpoint // mode if (bufferOrEvent.get().isBuffer()) { processBuffer(bufferOrEvent.get()); } else { DataInputStatus status = processEvent(bufferOrEvent.get(), output); if (status == DataInputStatus.MORE_AVAILABLE && canEmitBatchOfRecords.check()) { continue; } return status; } } else { if (checkpointedInputGate.isFinished()) { checkState( checkpointedInputGate.getAvailableFuture().isDone(), "Finished BarrierHandler should be available"); return DataInputStatus.END_OF_INPUT; } return DataInputStatus.NOTHING_AVAILABLE; } } }
Valve that controls how watermarks and watermark statuses are forwarded.
emitNext
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/AbstractStreamTaskNetworkInput.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/AbstractStreamTaskNetworkInput.java
Apache-2.0
public static InputGate createInputGate(List<IndexedInputGate> inputGates) { if (inputGates.size() <= 0) { throw new RuntimeException("No such input gate."); } if (inputGates.size() == 1) { return inputGates.get(0); } else { return new UnionInputGate(inputGates.toArray(new IndexedInputGate[0])); } }
Utility for dealing with input gates. This will either just return the single {@link InputGate} that was passed in or create a {@link UnionInputGate} if several {@link InputGate input gates} are given.
createInputGate
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/InputGateUtil.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/InputGateUtil.java
Apache-2.0
public void anyOf(final int idx, CompletableFuture<?> availabilityFuture) { if (futuresToCombine[idx] == null || futuresToCombine[idx].isDone()) { futuresToCombine[idx] = availabilityFuture; assertNoException(availabilityFuture.thenRun(this::notifyCompletion)); } }
Combine {@code availabilityFuture} using anyOf logic with other previously registered futures.
anyOf
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/MultipleFuturesAvailabilityHelper.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/MultipleFuturesAvailabilityHelper.java
Apache-2.0
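A minimal illustration of this anyOf-style combination with plain {@code CompletableFuture}s. The class below is a simplified stand-in, not the real helper (which also guards against exceptional completion via {@code assertNoException}); any one input completing is enough to trigger the notification:

```java
import java.util.concurrent.CompletableFuture;

public class AnyOfSketch {
    private final CompletableFuture<?>[] futuresToCombine;
    private volatile boolean notified;

    AnyOfSketch(int size) {
        this.futuresToCombine = new CompletableFuture<?>[size];
    }

    // Register a future in slot idx; a slot is replaced only when empty or
    // already done, mirroring MultipleFuturesAvailabilityHelper#anyOf.
    void anyOf(int idx, CompletableFuture<?> availabilityFuture) {
        if (futuresToCombine[idx] == null || futuresToCombine[idx].isDone()) {
            futuresToCombine[idx] = availabilityFuture;
            availabilityFuture.thenRun(this::notifyCompletion);
        }
    }

    private void notifyCompletion() {
        notified = true;
    }

    boolean isNotified() {
        return notified;
    }

    public static void main(String[] args) {
        AnyOfSketch helper = new AnyOfSketch(2);
        CompletableFuture<Void> f0 = new CompletableFuture<>();
        CompletableFuture<Void> f1 = new CompletableFuture<>();
        helper.anyOf(0, f0);
        helper.anyOf(1, f1);
        f1.complete(null); // completing either input is enough
        System.out.println(helper.isNotified()); // true
    }
}
```

Because {@code thenRun} fires when the future completes, the callback runs on whichever thread completes the first registered future.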
public void inputRecordAttributes( RecordAttributes recordAttributes, int channelIdx, DataOutput<?> output) throws Exception { LOG.debug("RecordAttributes: {} from channel idx: {}", recordAttributes, channelIdx); RecordAttributes lastChannelRecordAttributes = allChannelRecordAttributes[channelIdx]; allChannelRecordAttributes[channelIdx] = recordAttributes; // skip if the input RecordAttributes of the input channel is the same as the last. if (recordAttributes.equals(lastChannelRecordAttributes)) { return; } final RecordAttributesBuilder builder = new RecordAttributesBuilder(Collections.emptyList()); Boolean isBacklog = combineIsBacklog(lastChannelRecordAttributes, recordAttributes); if (isBacklog == null) { if (lastOutputAttributes == null) { return; } else { isBacklog = lastOutputAttributes.isBacklog(); } } builder.setBacklog(isBacklog); final RecordAttributes outputAttribute = builder.build(); if (!outputAttribute.equals(lastOutputAttributes)) { output.emitRecordAttributes(outputAttribute); lastOutputAttributes = outputAttribute; } }
RecordAttributesValve combines RecordAttributes from different input channels.
inputRecordAttributes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/RecordAttributesCombiner.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/RecordAttributesCombiner.java
Apache-2.0
private Boolean combineIsBacklog( RecordAttributes lastRecordAttributes, RecordAttributes recordAttributes) { if (lastRecordAttributes == null) { backlogUndefinedChannelCnt--; if (recordAttributes.isBacklog()) { backlogChannelCnt++; } } else if (lastRecordAttributes.isBacklog() != recordAttributes.isBacklog()) { if (recordAttributes.isBacklog()) { backlogChannelCnt++; } else { backlogChannelCnt--; } } // The input is processing backlog if any channel is processing backlog if (backlogChannelCnt > 0) { return true; } // None of the input channel is processing backlog and some are undefined if (backlogUndefinedChannelCnt > 0) { return null; } // All the input channels are defined and not processing backlog return false; }
If any of the input channels is backlog, the combined RecordAttributes is backlog. Return null if the isBacklog cannot be determined, i.e. when none of the input channels is processing backlog and some input channels are undefined.
combineIsBacklog
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/RecordAttributesCombiner.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/RecordAttributesCombiner.java
Apache-2.0
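The tri-state combination can be restated compactly over per-channel flags, with {@code null} modeling an undefined channel. This is a simplified, stateless sketch of the counting logic above (the real method maintains running counters across calls):

```java
public class BacklogCombineSketch {
    // Combine per-channel backlog flags: TRUE if any channel is backlog,
    // null if no channel is backlog but some are still undefined, else FALSE.
    static Boolean combine(Boolean[] channelIsBacklog) {
        boolean anyUndefined = false;
        for (Boolean b : channelIsBacklog) {
            if (b == null) {
                anyUndefined = true;
            } else if (b) {
                return Boolean.TRUE;
            }
        }
        return anyUndefined ? null : Boolean.FALSE;
    }

    public static void main(String[] args) {
        System.out.println(combine(new Boolean[] {false, true, null})); // true
        System.out.println(combine(new Boolean[] {false, null}));       // null
        System.out.println(combine(new Boolean[] {false, false}));      // false
    }
}
```

The asymmetry is intentional: a single backlog channel decides the result immediately, whereas declaring "not backlog" requires every channel to be defined.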
public static <T> StreamTaskInput<T> create( CheckpointedInputGate checkpointedInputGate, TypeSerializer<T> inputSerializer, IOManager ioManager, StatusWatermarkValve statusWatermarkValve, int inputIndex, InflightDataRescalingDescriptor rescalingDescriptorinflightDataRescalingDescriptor, Function<Integer, StreamPartitioner<?>> gatePartitioners, TaskInfo taskInfo, CanEmitBatchOfRecordsChecker canEmitBatchOfRecords, Set<AbstractInternalWatermarkDeclaration<?>> watermarkDeclarationSet) { return rescalingDescriptorinflightDataRescalingDescriptor.equals( InflightDataRescalingDescriptor.NO_RESCALE) ? new StreamTaskNetworkInput<>( checkpointedInputGate, inputSerializer, ioManager, statusWatermarkValve, inputIndex, canEmitBatchOfRecords, watermarkDeclarationSet) : new RescalingStreamTaskNetworkInput<>( checkpointedInputGate, inputSerializer, ioManager, statusWatermarkValve, inputIndex, rescalingDescriptorinflightDataRescalingDescriptor, gatePartitioners, taskInfo, canEmitBatchOfRecords); }
Factory method for {@link StreamTaskNetworkInput} or {@link RescalingStreamTaskNetworkInput} depending on {@link InflightDataRescalingDescriptor}.
create
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskNetworkInputFactory.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskNetworkInputFactory.java
Apache-2.0
@Override public DataInputStatus emitNext(DataOutput<T> output) throws Exception { /** * Safe guard against best efforts availability checks. If despite being unavailable someone * polls the data from this source while it's blocked, it should return {@link * DataInputStatus.NOTHING_AVAILABLE}. */ if (isBlockedAvailability.isApproximatelyAvailable()) { return operator.emitNext(output); } return DataInputStatus.NOTHING_AVAILABLE; }
Implementation of {@link StreamTaskInput} that reads data from the {@link SourceOperator} and returns the {@link DataInputStatus} to indicate whether the source state is available, unavailable or finished.
emitNext
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskSourceInput.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskSourceInput.java
Apache-2.0
@VisibleForTesting long getLatestCheckpointId() { return barrierHandler.getLatestCheckpointId(); }
Gets the ID defining the current pending, or just completed, checkpoint. @return The ID of the pending or completed checkpoint.
getLatestCheckpointId
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
Apache-2.0
@VisibleForTesting long getAlignmentDurationNanos() { return barrierHandler.getAlignmentDurationNanos(); }
Gets the time that the latest alignment took, in nanoseconds. If there is currently an alignment in progress, it will return the time spent in the current alignment so far. @return The duration in nanoseconds
getAlignmentDurationNanos
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
Apache-2.0
@VisibleForTesting long getCheckpointStartDelayNanos() { return barrierHandler.getCheckpointStartDelayNanos(); }
@return the time that elapsed, in nanoseconds, between the creation of the latest checkpoint and the time when its first {@link CheckpointBarrier} was received by this {@link InputGate}.
getCheckpointStartDelayNanos
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/io/checkpointing/CheckpointedInputGate.java
Apache-2.0
@Override public Long getValue() { return Arrays.stream(watermarkGauges).mapToLong(WatermarkGauge::getValue).min().orElse(0); }
A {@link Gauge} for exposing the minimum watermark of chosen {@link WatermarkGauge}s.
getValue
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/metrics/MinWatermarkGauge.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/metrics/MinWatermarkGauge.java
Apache-2.0
@Override public void processWatermark(org.apache.flink.streaming.api.watermark.Watermark mark) throws Exception { // if we receive a Long.MAX_VALUE watermark we forward it since it is used // to signal the end of input and to not block watermark progress downstream if (mark.getTimestamp() == Long.MAX_VALUE) { wmOutput.emitWatermark(Watermark.MAX_WATERMARK); } }
Override the base implementation to completely ignore watermarks propagated from upstream, except for the "end of time" watermark.
processWatermark
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/TimestampsAndWatermarksOperator.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/TimestampsAndWatermarksOperator.java
Apache-2.0
static <T> ThrowingConsumer<StreamRecord<T>, Exception> makeRecordProcessor( AsyncStateProcessingOperator asyncOperator, KeySelector<T, ?> keySelector, ThrowingConsumer<StreamRecord<T>, Exception> processor) { if (keySelector == null) { // A non-keyed input does not need to set the key context and perform async context // switches. return processor; } switch (asyncOperator.getElementOrder()) { case RECORD_ORDER: return (record) -> { asyncOperator.setAsyncKeyedContextElement(record, keySelector); asyncOperator.preserveRecordOrderAndProcess(() -> processor.accept(record)); asyncOperator.postProcessElement(); }; case FIRST_STATE_ORDER: return (record) -> { asyncOperator.setAsyncKeyedContextElement(record, keySelector); processor.accept(record); asyncOperator.postProcessElement(); }; default: throw new UnsupportedOperationException( "Unknown element order for async processing:" + asyncOperator.getElementOrder()); } }
Static method helper to make a record processor with given infos. @param asyncOperator the operator that can process state asynchronously. @param keySelector the key selector. @param processor the record processing logic. @return the built record processor that can returned by {@link #getRecordProcessor(int)}.
makeRecordProcessor
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/asyncprocessing/AsyncStateProcessing.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/asyncprocessing/AsyncStateProcessing.java
Apache-2.0
public T getValue() { return value; }
@return The value wrapped in this {@link TimestampedValue}.
getValue
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
Apache-2.0
public long getTimestamp() { if (hasTimestamp) { return timestamp; } else { throw new IllegalStateException( "Record has no timestamp. Is the time characteristic set to 'ProcessingTime', or " + "did you forget to call 'DataStream.assignTimestampsAndWatermarks(...)'?"); } }
@return The timestamp associated with this stream value in milliseconds.
getTimestamp
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
Apache-2.0
public StreamRecord<T> getStreamRecord() { StreamRecord<T> streamRecord = new StreamRecord<>(value); if (hasTimestamp) { streamRecord.setTimestamp(timestamp); } return streamRecord; }
Creates a {@link StreamRecord} from this TimestampedValue.
getStreamRecord
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
Apache-2.0
public static <T> TimestampedValue<T> from(StreamRecord<T> streamRecord) { if (streamRecord.hasTimestamp()) { return new TimestampedValue<>(streamRecord.getValue(), streamRecord.getTimestamp()); } else { return new TimestampedValue<>(streamRecord.getValue()); } }
Creates a TimestampedValue from the given {@link StreamRecord}. @param streamRecord The StreamRecord object from which the TimestampedValue is to be created.
from
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/TimestampedValue.java
Apache-2.0
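The `TimestampedValue` methods above form a round trip with `StreamRecord`: `from` carries the timestamp over only when the record has one, `getStreamRecord` does the reverse, and `getTimestamp` throws when no timestamp was ever set. A condensed, self-contained sketch of that optional-timestamp pattern (the `TSValue` class below is a simplified stand-in for the Flink classes, not their real API):

```java
public class TimestampedValueSketch {
    static final class TSValue<T> {
        final T value;
        final long timestamp;
        final boolean hasTimestamp;

        TSValue(T value) { this(value, 0L, false); }          // timestamp-less
        TSValue(T value, long ts) { this(value, ts, true); }  // timestamped
        private TSValue(T value, long ts, boolean has) {
            this.value = value;
            this.timestamp = ts;
            this.hasTimestamp = has;
        }

        // Mirrors TimestampedValue#getTimestamp: fails fast when no timestamp is set,
        // instead of silently returning a sentinel value.
        long getTimestamp() {
            if (!hasTimestamp) {
                throw new IllegalStateException("Record has no timestamp.");
            }
            return timestamp;
        }
    }

    public static void main(String[] args) {
        TSValue<String> withTs = new TSValue<>("a", 42L);
        System.out.println(withTs.getTimestamp()); // 42

        TSValue<String> withoutTs = new TSValue<>("b");
        try {
            withoutTs.getTimestamp();
        } catch (IllegalStateException e) {
            System.out.println("no timestamp");
        }
    }
}
```

The fail-fast accessor is the design choice worth noting: downstream windowing code would rather surface a missing-timestamp configuration error immediately than propagate a bogus default.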
public SubtaskStateMapper getUpstreamSubtaskStateMapper() { return SubtaskStateMapper.ARBITRARY; }
Defines the behavior of this partitioner when the upstream was rescaled during recovery of in-flight data.
getUpstreamSubtaskStateMapper
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/partitioner/StreamPartitioner.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/partitioner/StreamPartitioner.java
Apache-2.0
public RecordAttributesBuilder setBacklog(boolean isBacklog) { this.isBacklog = isBacklog; return this; }
Sets the backlog status to be used for the {@link RecordAttributes} built by this builder. @param isBacklog whether the records are part of a backlog. @return this builder.
setBacklog
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/RecordAttributesBuilder.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/RecordAttributesBuilder.java
Apache-2.0
public final boolean isWatermark() { return this instanceof Watermark; }
Checks whether this element is a watermark. @return True, if this element is a watermark, false otherwise.
isWatermark
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final boolean isWatermarkStatus() { return getClass() == WatermarkStatus.class; }
Checks whether this element is a watermark status. @return True, if this element is a watermark status, false otherwise.
isWatermarkStatus
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final boolean isRecord() { return getClass() == StreamRecord.class; }
Checks whether this element is a record. @return True, if this element is a record, false otherwise.
isRecord
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final boolean isLatencyMarker() { return getClass() == LatencyMarker.class; }
Checks whether this element is a latency marker. @return True, if this element is a latency marker, false otherwise.
isLatencyMarker
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final boolean isRecordAttributes() { return getClass() == RecordAttributes.class; }
Checks whether this element is a record attributes element. @return True, if this element is a record attributes element, false otherwise.
isRecordAttributes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
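The `StreamElement` type checks above are not uniform: `isWatermark` uses `instanceof` (so subclasses of `Watermark` also match), while the other checks use `getClass() == X.class`, which matches the exact class only. A self-contained sketch of that distinction; the class names below are stand-ins, not the real Flink hierarchy:

```java
public class ElementCheckSketch {
    static class Element {}
    static class Watermark extends Element {}
    static class InternalWatermark extends Watermark {} // hypothetical subclass
    static class Record extends Element {}

    // instanceof also matches subclasses of Watermark ...
    static boolean isWatermark(Element e) { return e instanceof Watermark; }

    // ... whereas getClass() == matches exactly Record, rejecting any subclass.
    static boolean isExactlyRecord(Element e) { return e.getClass() == Record.class; }

    public static void main(String[] args) {
        System.out.println(isWatermark(new InternalWatermark())); // true: subclass matches
        System.out.println(isExactlyRecord(new Record() {}));     // false: anonymous subclass
    }
}
```

The exact-class comparison is also marginally cheaper on some JVMs, which matters for checks that run once per stream element.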
@SuppressWarnings("unchecked") public final <E> StreamRecord<E> asRecord() { return (StreamRecord<E>) this; }
Casts this element into a StreamRecord. @return This element as a stream record. @throws java.lang.ClassCastException Thrown, if this element is actually not a stream record.
asRecord
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final Watermark asWatermark() { return (Watermark) this; }
Casts this element into a Watermark. @return This element as a Watermark. @throws java.lang.ClassCastException Thrown, if this element is actually not a Watermark.
asWatermark
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final WatermarkStatus asWatermarkStatus() { return (WatermarkStatus) this; }
Casts this element into a WatermarkStatus. @return This element as a WatermarkStatus. @throws java.lang.ClassCastException Thrown, if this element is actually not a Watermark Status.
asWatermarkStatus
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final LatencyMarker asLatencyMarker() { return (LatencyMarker) this; }
Casts this element into a LatencyMarker. @return This element as a LatencyMarker. @throws java.lang.ClassCastException Thrown, if this element is actually not a LatencyMarker.
asLatencyMarker
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public final RecordAttributes asRecordAttributes() { return (RecordAttributes) this; }
Casts this element into a RecordAttributes. @return This element as a RecordAttributes. @throws java.lang.ClassCastException Thrown, if this element is actually not a RecordAttributes.
asRecordAttributes
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamElement.java
Apache-2.0
public long getTimestamp() { if (hasTimestamp) { return timestamp; } else { return Long.MIN_VALUE; /* throw new IllegalStateException( "Record has no timestamp. Is the time characteristic set to 'ProcessingTime', or " + "did you forget to call 'DataStream.assignTimestampsAndWatermarks(...)'?"); */ } }
Returns the timestamp associated with this stream value in milliseconds.
getTimestamp
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
Apache-2.0
@SuppressWarnings("unchecked") public <X> StreamRecord<X> replace(X element) { this.value = (T) element; return (StreamRecord<X>) this; }
Replace the currently stored value by the given new value. This returns a StreamElement with the generic type parameter that matches the new value while keeping the old timestamp. @param element Element to set in this stream value @return Returns the StreamElement with replaced value
replace
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
Apache-2.0
public StreamRecord<T> copy(T valueCopy) { StreamRecord<T> copy = new StreamRecord<>(valueCopy); copy.timestamp = this.timestamp; copy.hasTimestamp = this.hasTimestamp; return copy; }
Creates a copy of this stream record. Uses the given value as the value for the new record, i.e., only the timestamp fields are copied from this record.
copy
java
apache/flink
flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/streaming/runtime/streamrecord/StreamRecord.java
Apache-2.0
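The `replace` and `copy` methods above show two reuse strategies for `StreamRecord`: `replace` mutates the record in place and reinterprets its generic type via an unchecked cast (safe only because the value field is the sole type-dependent state), while `copy` allocates a new record and carries over just the timestamp fields. A self-contained sketch of both (the `Rec` class is a simplified stand-in, not the real `StreamRecord`):

```java
public class StreamRecordSketch {
    static final class Rec<T> {
        Object value;
        long timestamp;
        boolean hasTimestamp;

        Rec(Object v) { this.value = v; }

        // In-place replacement: same object, new value, timestamp untouched.
        // The unchecked cast reinterprets the record's generic type.
        @SuppressWarnings("unchecked")
        <X> Rec<X> replace(X newValue) {
            this.value = newValue;
            return (Rec<X>) this;
        }

        // Copy: a new object seeded with the given value, timestamp fields cloned.
        Rec<T> copy(Object valueCopy) {
            Rec<T> c = new Rec<>(valueCopy);
            c.timestamp = this.timestamp;
            c.hasTimestamp = this.hasTimestamp;
            return c;
        }
    }

    public static void main(String[] args) {
        Rec<String> r = new Rec<>("a");
        r.timestamp = 7L;
        r.hasTimestamp = true;

        Rec<Integer> swapped = r.replace(1);
        System.out.println((Object) swapped == r); // true: same instance, retyped

        Rec<String> c = r.copy("b");
        System.out.println(c != r && c.timestamp == 7L && c.hasTimestamp); // true
    }
}
```

In-place `replace` exists to avoid per-record allocation on the hot path; `copy` is for the cases (e.g. buffering) where the original record will be reused by the caller.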