code stringlengths 25 201k | docstring stringlengths 19 96.2k | func_name stringlengths 0 235 | language stringclasses 1 value | repo stringlengths 8 51 | path stringlengths 11 314 | url stringlengths 62 377 | license stringclasses 7 values |
|---|---|---|---|---|---|---|---|
public static <T, W extends Window> ProcessingTimeoutTrigger<T, W> of(
Trigger<T, W> nestedTrigger, Duration timeout) {
return new ProcessingTimeoutTrigger<>(nestedTrigger, timeout.toMillis(), false, true);
} | Creates a new {@link ProcessingTimeoutTrigger} that fires when the inner trigger is fired or
when the timeout timer fires.
<p>For example: {@code ProcessingTimeoutTrigger.of(CountTrigger.of(3), Duration.ofMillis(100))} will
create a CountTrigger with a timeout of 100 millis. So, if the first record arrives at time {@code t},
and the second record arrives at time {@code t+50}, the trigger will fire when the third
record arrives or when the time is {@code t+100} (timeout).
@param nestedTrigger the nested {@link Trigger}
@param timeout the timeout interval
@return {@link ProcessingTimeoutTrigger} with the above configuration. | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/ProcessingTimeoutTrigger.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/ProcessingTimeoutTrigger.java | Apache-2.0 |
public static <T, W extends Window> ProcessingTimeoutTrigger<T, W> of(
Trigger<T, W> nestedTrigger,
Duration timeout,
boolean resetTimerOnNewRecord,
boolean shouldClearOnTimeout) {
return new ProcessingTimeoutTrigger<>(
nestedTrigger, timeout.toMillis(), resetTimerOnNewRecord, shouldClearOnTimeout);
} | Creates a new {@link ProcessingTimeoutTrigger} that fires when the inner trigger is fired or
when the timeout timer fires.
<p>For example: {@code ProcessingTimeoutTrigger.of(CountTrigger.of(3), Duration.ofMillis(100), false, true)}
will create a CountTrigger with a timeout of 100 millis. So, if the first record arrives at
time {@code t}, and the second record arrives at time {@code t+50}, the trigger will fire
when the third record arrives or when the time is {@code t+100} (timeout).
@param nestedTrigger the nested {@link Trigger}
@param timeout the timeout interval
@param resetTimerOnNewRecord whether to reset the timer and start a new one each time a new
element arrives
@param shouldClearOnTimeout whether to call {@link Trigger#clear(Window, TriggerContext)}
when the processing-time timer fires
@param <T> The type of the element.
@param <W> The type of {@link Window Windows} on which this trigger can operate.
@return {@link ProcessingTimeoutTrigger} with the above configuration. | of | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/ProcessingTimeoutTrigger.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers/ProcessingTimeoutTrigger.java | Apache-2.0 |
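The count-or-timeout semantics described above can be modelled without Flink at all. The sketch below is a self-contained stand-in, not Flink's actual `Trigger` API (the class and method names are invented for illustration): it fires either when the element count reaches the limit or when the timeout elapses.

```java
// Illustrative stand-in for the count-or-timeout semantics; not Flink's Trigger API.
public class CountOrTimeoutModel {

    private final int maxCount;
    private final long timeoutMillis;

    private int count = 0;
    private long firstArrival = -1;

    public CountOrTimeoutModel(int maxCount, long timeoutMillis) {
        this.maxCount = maxCount;
        this.timeoutMillis = timeoutMillis;
    }

    /** Returns true if this arrival would fire the trigger. */
    public boolean onElement(long now) {
        if (firstArrival < 0) {
            firstArrival = now; // start the timeout "timer" on the first record
        }
        count++;
        return count >= maxCount || now - firstArrival >= timeoutMillis;
    }

    /** Returns true if the timeout timer would fire at the given time. */
    public boolean onTimer(long now) {
        return firstArrival >= 0 && now - firstArrival >= timeoutMillis;
    }
}
```

With `maxCount = 3` and `timeoutMillis = 100`, records at `t` and `t+50` do not fire, while the timer at `t+100` does, matching the Javadoc example.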
@Override
public void invoke(IN value, Context context) throws Exception {
try {
serializer.serialize(value, streamWriter);
} catch (Exception e) {
throw new IOException(
"Error sending data back to client (" + hostIp.toString() + ":" + port + ')',
e);
}
} | Creates a CollectSink that will send the data to the specified host.
@param hostIp IP address of the Socket server.
@param port Port of the Socket server.
@param serializer A serializer for the data. | invoke | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/CollectSink.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/CollectSink.java | Apache-2.0 |
@Override
public void open(OpenContext openContext) throws Exception {
try {
client = new Socket(hostIp, port);
outputStream = client.getOutputStream();
streamWriter = new DataOutputViewStreamWrapper(outputStream);
} catch (IOException e) {
throw new IOException(
"Cannot get back the stream while opening connection to client at "
+ hostIp.toString()
+ ":"
+ port,
e);
}
} | Initialize the connection with the Socket in the server.
@param openContext the context. | open | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/CollectSink.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/CollectSink.java | Apache-2.0 |
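The open()/invoke() pair above boils down to: connect a Socket, wrap its output stream, and serialize records into it. A minimal loopback sketch of that pattern, using a plain DataOutputStream as a stand-in for DataOutputViewStreamWrapper:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketWriteSketch {

    /** Connects to a local server, writes one int, and reads it back on the server side. */
    public static int roundTrip(int value) throws Exception {
        try (ServerSocket server = new ServerSocket(0); // port 0: pick any free port
                Socket client = new Socket("127.0.0.1", server.getLocalPort());
                Socket accepted = server.accept()) {
            // open(): wrap the socket's output stream in a data-output view
            DataOutputStream out = new DataOutputStream(client.getOutputStream());
            // invoke(): serialize the value into the stream
            out.writeInt(value);
            out.flush();
            return new DataInputStream(accepted.getInputStream()).readInt();
        }
    }
}
```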
@Override
public boolean hasNext() {
if (next == null) {
try {
next = readNextFromStream();
} catch (Exception e) {
throw new RuntimeException("Failed to receive next element: " + e.getMessage(), e);
}
}
return next != null;
} | Returns true if the DataStream has more elements. (Note: blocks if there will be more
elements, but they are not available yet.)
@return true if the DataStream has more elements | hasNext | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/SocketStreamIterator.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/experimental/SocketStreamIterator.java | Apache-2.0 |
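hasNext() above uses a classic look-ahead pattern: pre-fetch one element and buffer it until next() is called. A generic sketch of that pattern (the Supplier returning null at end-of-stream is an assumption of this sketch, not SocketStreamIterator's actual protocol):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

public class LookAheadIterator<T> implements Iterator<T> {

    private final Supplier<T> source; // returns null when the stream is exhausted
    private T next;

    public LookAheadIterator(Supplier<T> source) {
        this.source = source;
    }

    @Override
    public boolean hasNext() {
        if (next == null) {
            next = source.get(); // pre-fetch and buffer the next element
        }
        return next != null;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        T result = next;
        next = null; // hand the buffered element out exactly once
        return result;
    }
}
```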
public void setJobId(String id) throws Exception {
this.jobId = id;
} | Internally used to set the job ID after instantiation.
@param id the job ID
@throws Exception | setJobId | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/CheckpointCommitter.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/CheckpointCommitter.java | Apache-2.0 |
public void setOperatorId(String id) throws Exception {
this.operatorId = id;
} | Internally used to set the operator ID after instantiation.
@param id the operator ID
@throws Exception | setOperatorId | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/CheckpointCommitter.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/CheckpointCommitter.java | Apache-2.0 |
public final V put(K key, V value) {
final int hash = hash(key);
final int slot = indexOf(hash);
// search the chain from the slot
for (Entry<K, V> e = table[slot]; e != null; e = e.next) {
Object k;
if (e.hashCode == hash && ((k = e.key) == key || key.equals(k))) {
// found match
V old = e.value;
e.value = value;
return old;
}
}
// no match, insert a new value
insertNewEntry(hash, key, value, slot);
return null;
} | Inserts the given value, mapped under the given key. If the table already contains a value
for the key, that value is replaced and returned. If no value is contained yet, the method
returns null.
@param key The key to insert.
@param value The value to insert.
@return The previously mapped value for the key, or null, if no value was mapped for the key.
@throws java.lang.NullPointerException Thrown, if the key is null. | put | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
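indexOf is not shown above, but for chained hash tables with a power-of-two capacity the slot is typically computed by masking the hash; that scheme is assumed in this sketch:

```java
public class SlotIndexSketch {

    /** Maps a hash code to a slot, assuming tableLength is a power of two. */
    public static int indexOf(int hash, int tableLength) {
        return hash & (tableLength - 1); // equivalent to hash % tableLength for powers of two
    }
}
```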
public final V putIfAbsent(K key, LazyFactory<V> factory) {
final int hash = hash(key);
final int slot = indexOf(hash);
// search the chain from the slot
for (Entry<K, V> entry = table[slot]; entry != null; entry = entry.next) {
if (entry.hashCode == hash && entry.key.equals(key)) {
// found match
return entry.value;
}
}
// no match, insert a new value
V value = factory.create();
insertNewEntry(hash, key, value, slot);
// return the created value
return value;
} | Inserts a value for the given key if no value is contained for that key yet. Otherwise,
returns the value currently contained for the key.
<p>If the key is not yet contained, the inserted value is lazily created using the given
factory.
@param key The key to insert.
@param factory The factory that produces the value, if no value is contained, yet, for the
key.
@return The value in the map after this operation (either the previously contained value, or
the newly created value).
@throws java.lang.NullPointerException Thrown, if the key is null. | putIfAbsent | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
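The putIfAbsent contract, creating the value lazily and only on a miss, matches java.util.Map#computeIfAbsent. A stdlib sketch of the same contract (the helper name is invented here):

```java
import java.util.Map;
import java.util.function.Supplier;

public class PutIfAbsentSketch {

    /** Returns the existing value, or lazily creates and inserts one on a miss. */
    public static <K, V> V putIfAbsent(Map<K, V> map, K key, Supplier<V> factory) {
        // the factory runs only when the key is absent, mirroring LazyFactory above
        return map.computeIfAbsent(key, k -> factory.get());
    }
}
```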
public final V putOrAggregate(K key, V value, ReduceFunction<V> aggregator) throws Exception {
final int hash = hash(key);
final int slot = indexOf(hash);
// search the chain from the slot
for (Entry<K, V> entry = table[slot]; entry != null; entry = entry.next) {
if (entry.hashCode == hash && entry.key.equals(key)) {
// found match
entry.value = aggregator.reduce(entry.value, value);
return entry.value;
}
}
// no match, insert a new value
insertNewEntry(hash, key, value, slot);
// return the given value
return value;
} | Inserts or aggregates a value into the hash map. If the hash map does not yet contain the
key, this method inserts the value. If the table already contains the key (and a value), this
method uses the given ReduceFunction to combine the existing value and the given value into a
new value, and stores that value for the key.
@param key The key to map the value.
@param value The new value to insert, or aggregate with the existing value.
@param aggregator The aggregator to use if a value is already contained.
@return The value in the map after this operation: Either the given value, or the aggregated
value.
@throws java.lang.NullPointerException Thrown, if the key is null.
@throws Exception The method forwards exceptions from the aggregation function. | putOrAggregate | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
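putOrAggregate has the same shape as java.util.Map#merge: insert on a miss, reduce on a hit. A small sketch with Integer::sum standing in for the ReduceFunction:

```java
import java.util.Map;

public class PutOrAggregateSketch {

    /** Inserts value if key is absent, otherwise sums it with the existing value. */
    public static int putOrAggregate(Map<String, Integer> map, String key, int value) {
        // Map#merge: stores `value` on a miss, applies the remapping function on a hit
        return map.merge(key, value, Integer::sum);
    }
}
```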
public int size() {
return numElements;
} | Gets the number of elements currently in the map.
@return The number of elements currently in the map. | size | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
public boolean isEmpty() {
return numElements == 0;
} | Checks whether the map is empty.
@return True, if the map is empty, false otherwise. | isEmpty | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
public int getCurrentTableCapacity() {
return table.length;
} | Gets the current table capacity, i.e., the number of slots in the hash table, without any
overflow chaining.
@return The number of slots in the hash table. | getCurrentTableCapacity | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
int traverseAndCountElements() {
int num = 0;
for (Entry<?, ?> entry : table) {
while (entry != null) {
num++;
entry = entry.next;
}
}
return num;
} | For testing only: Actively counts the number of entries, rather than using the counter
variable. This method has linear complexity, rather than constant.
@return The counted number of entries. | traverseAndCountElements | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
int getLongestChainLength() {
int maxLen = 0;
for (Entry<?, ?> entry : table) {
int thisLen = 0;
while (entry != null) {
thisLen++;
entry = entry.next;
}
maxLen = Math.max(maxLen, thisLen);
}
return maxLen;
} | For testing only: Gets the length of the longest overflow chain. This method has linear
complexity.
@return The length of the longest overflow chain. | getLongestChainLength | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
@Override
public int compare(KeyMap<?, ?> o1, KeyMap<?, ?> o2) {
// this sorts descending
int cmp = o2.getLog2TableCapacity() - o1.getLog2TableCapacity();
if (cmp != 0) {
return cmp;
} else {
return o2.size() - o1.size();
}
} | Comparator that defines a descending order on maps depending on their table capacity and
number of elements. | compare | java | apache/flink | flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/KeyMap.java | Apache-2.0 |
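The subtraction-based comparisons above can overflow for extreme int values; Integer.compare sidesteps that. A sketch of the same descending order, with `int[] {log2Capacity, size}` as a stand-in for the KeyMap getters:

```java
import java.util.Comparator;

public class DescendingMapComparator implements Comparator<int[]> {

    // m[0] = log2 table capacity, m[1] = element count (stand-ins for the KeyMap getters)
    @Override
    public int compare(int[] o1, int[] o2) {
        int cmp = Integer.compare(o2[0], o1[0]); // descending: larger capacity first
        return cmp != 0 ? cmp : Integer.compare(o2[1], o1[1]);
    }
}
```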
@Test
void testErgonomicWatermarkStrategy() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> input = env.fromData("bonjour");
// a plain call without a method chain does not need an explicit generic type
input.assignTimestampsAndWatermarks(
WatermarkStrategy.forBoundedOutOfOrderness(Duration.ofMillis(10)));
// as soon as you have a chain of methods the first call needs to specify the generic type
input.assignTimestampsAndWatermarks(
WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofMillis(10))
.withTimestampAssigner((event, timestamp) -> 42L));
} | Ensure that WatermarkStrategy is easy to use in the API, without superfluous generics. | testErgonomicWatermarkStrategy | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/DataStreamTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/DataStreamTest.java | Apache-2.0 |
@Test
void testAssembleBucketPath() throws Exception {
final File outDir = TempDirUtils.newFolder(tempDir);
final Path basePath = new Path(outDir.toURI());
final long time = 1000L;
final RollingPolicy<String, String> rollingPolicy =
DefaultRollingPolicy.builder().withMaxPartSize(new MemorySize(7L)).build();
final Buckets<String, String> buckets =
new Buckets<>(
basePath,
new BasePathBucketAssigner<>(),
new DefaultBucketFactoryImpl<>(),
new RowWiseBucketWriter<>(
FileSystem.get(basePath.toUri()).createRecoverableWriter(),
new SimpleStringEncoder<>()),
rollingPolicy,
0,
OutputFileConfig.builder().build());
Bucket<String, String> bucket =
buckets.onElement("abc", new TestUtils.MockSinkContext(time, time, time));
assertThat(bucket.getBucketPath()).isEqualTo(new Path(basePath.toUri()));
} | Integration tests for {@link BucketAssigner bucket assigners}. | testAssembleBucketPath | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssignerTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssignerTest.java | Apache-2.0 |
@Override
public BulkWriter<Tuple2<String, Integer>> create(FSDataOutputStream out) {
return new TestBulkWriter(out);
} | A {@link BulkWriter.Factory} used for the tests. | create | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BulkWriterTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/BulkWriterTest.java | Apache-2.0 |
@Override
public void encode(Tuple2<String, Integer> element, OutputStream stream)
throws IOException {
stream.write((element.f0 + '@' + element.f1).getBytes(StandardCharsets.UTF_8));
stream.write('\n');
} | A simple {@link Encoder} that encodes {@code Tuple2} object. | encode | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/TestUtils.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/TestUtils.java | Apache-2.0 |
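The encoder writes each tuple as `f0@f1` followed by a newline. A self-contained sketch of the same byte layout, with plain parameters standing in for Tuple2:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class TupleEncoderSketch {

    /** Writes the record as "f0@f1\n" in UTF-8, matching the encoder above. */
    public static void encode(String f0, int f1, OutputStream stream) throws IOException {
        stream.write((f0 + '@' + f1).getBytes(StandardCharsets.UTF_8));
        stream.write('\n');
    }
}
```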
@Test
void testExchangeModePipelined() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// fromElements -> Map -> Print
DataStream<Integer> sourceDataStream = env.fromData(1, 2, 3);
DataStream<Integer> partitionAfterSourceDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
sourceDataStream.getTransformation(),
new ForwardPartitioner<>(),
StreamExchangeMode.PIPELINED));
DataStream<Integer> mapDataStream =
partitionAfterSourceDataStream.map(value -> value).setParallelism(1);
DataStream<Integer> partitionAfterMapDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
mapDataStream.getTransformation(),
new RescalePartitioner<>(),
StreamExchangeMode.PIPELINED));
partitionAfterMapDataStream.print().setParallelism(2);
JobGraph jobGraph = createJobGraph(env.getStreamGraph());
List<JobVertex> verticesSorted = jobGraph.getVerticesSortedTopologicallyFromSources();
assertThat(verticesSorted).hasSize(2);
// it can be chained with PIPELINED exchange mode
JobVertex sourceAndMapVertex = verticesSorted.get(0);
// PIPELINED exchange mode is translated into PIPELINED_BOUNDED result partition
assertThat(sourceAndMapVertex.getProducedDataSets().get(0).getResultType())
.isEqualTo(ResultPartitionType.PIPELINED_BOUNDED);
} | Test setting exchange mode to {@link StreamExchangeMode#PIPELINED}. | testExchangeModePipelined | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | Apache-2.0 |
@Test
void testExchangeModeBatch() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
env.setBufferTimeout(-1);
// fromElements -> Map -> Print
DataStream<Integer> sourceDataStream = env.fromData(1, 2, 3);
DataStream<Integer> partitionAfterSourceDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
sourceDataStream.getTransformation(),
new ForwardPartitioner<>(),
StreamExchangeMode.BATCH));
DataStream<Integer> mapDataStream =
partitionAfterSourceDataStream.map(value -> value).setParallelism(1);
DataStream<Integer> partitionAfterMapDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
mapDataStream.getTransformation(),
new RescalePartitioner<>(),
StreamExchangeMode.BATCH));
partitionAfterMapDataStream.print().setParallelism(2);
JobGraph jobGraph = createJobGraph(env.getStreamGraph());
List<JobVertex> verticesSorted = jobGraph.getVerticesSortedTopologicallyFromSources();
assertThat(verticesSorted).hasSize(3);
// it can not be chained with BATCH exchange mode
JobVertex sourceVertex = verticesSorted.get(0);
JobVertex mapVertex = verticesSorted.get(1);
// BATCH exchange mode is translated into BLOCKING result partition
assertThat(sourceVertex.getProducedDataSets().get(0).getResultType())
.isEqualTo(ResultPartitionType.BLOCKING);
assertThat(mapVertex.getProducedDataSets().get(0).getResultType())
.isEqualTo(ResultPartitionType.BLOCKING);
} | Test setting exchange mode to {@link StreamExchangeMode#BATCH}. | testExchangeModeBatch | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | Apache-2.0 |
@Test
void testExchangeModeUndefined() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// fromElements -> Map -> Print
DataStream<Integer> sourceDataStream = env.fromData(1, 2, 3);
DataStream<Integer> partitionAfterSourceDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
sourceDataStream.getTransformation(),
new ForwardPartitioner<>(),
StreamExchangeMode.UNDEFINED));
DataStream<Integer> mapDataStream =
partitionAfterSourceDataStream.map(value -> value).setParallelism(1);
DataStream<Integer> partitionAfterMapDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
mapDataStream.getTransformation(),
new RescalePartitioner<>(),
StreamExchangeMode.UNDEFINED));
partitionAfterMapDataStream.print().setParallelism(2);
JobGraph jobGraph = createJobGraph(env.getStreamGraph());
List<JobVertex> verticesSorted = jobGraph.getVerticesSortedTopologicallyFromSources();
assertThat(verticesSorted).hasSize(2);
// it can be chained with UNDEFINED exchange mode
JobVertex sourceAndMapVertex = verticesSorted.get(0);
// UNDEFINED exchange mode is translated into PIPELINED_BOUNDED result partition by default
assertThat(sourceAndMapVertex.getProducedDataSets().get(0).getResultType())
.isEqualTo(ResultPartitionType.PIPELINED_BOUNDED);
} | Test setting exchange mode to {@link StreamExchangeMode#UNDEFINED}. | testExchangeModeUndefined | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | Apache-2.0 |
@Test
void testExchangeModeHybridFull() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
// fromElements -> Map -> Print
DataStream<Integer> sourceDataStream = env.fromData(1, 2, 3);
DataStream<Integer> partitionAfterSourceDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
sourceDataStream.getTransformation(),
new ForwardPartitioner<>(),
StreamExchangeMode.HYBRID_FULL));
DataStream<Integer> mapDataStream =
partitionAfterSourceDataStream.map(value -> value).setParallelism(1);
DataStream<Integer> partitionAfterMapDataStream =
new DataStream<>(
env,
new PartitionTransformation<>(
mapDataStream.getTransformation(),
new RescalePartitioner<>(),
StreamExchangeMode.HYBRID_FULL));
partitionAfterMapDataStream.print().setParallelism(2);
JobGraph jobGraph = createJobGraph(env.getStreamGraph());
List<JobVertex> verticesSorted = jobGraph.getVerticesSortedTopologicallyFromSources();
assertThat(verticesSorted).hasSize(2);
// it can be chained with HYBRID_FULL exchange mode
JobVertex sourceAndMapVertex = verticesSorted.get(0);
// HYBRID_FULL exchange mode is translated into HYBRID_FULL result partition
assertThat(sourceAndMapVertex.getProducedDataSets().get(0).getResultType())
.isEqualTo(ResultPartitionType.HYBRID_FULL);
} | Test setting exchange mode to {@link StreamExchangeMode#HYBRID_FULL}. | testExchangeModeHybridFull | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/JobGraphGeneratorTestBase.java | Apache-2.0 |
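Taken together, the four exchange-mode tests pin down a mapping from StreamExchangeMode to ResultPartitionType. The enum sketch below restates just that mapping (local enums, not Flink's classes; UNDEFINED maps to PIPELINED_BOUNDED per the streaming default the tests assert):

```java
public class ExchangeModeMapping {

    public enum ExchangeMode { PIPELINED, BATCH, UNDEFINED, HYBRID_FULL }

    public enum PartitionType { PIPELINED_BOUNDED, BLOCKING, HYBRID_FULL }

    /** The mode-to-partition-type mapping asserted by the tests above. */
    public static PartitionType resultType(ExchangeMode mode) {
        switch (mode) {
            case BATCH:
                return PartitionType.BLOCKING;
            case HYBRID_FULL:
                return PartitionType.HYBRID_FULL;
            default:
                // PIPELINED, and UNDEFINED under the streaming default
                return PartitionType.PIPELINED_BOUNDED;
        }
    }
}
```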
@Test
void testSetupOfKeyGroupPartitioner() {
int maxParallelism = 42;
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setMaxParallelism(maxParallelism);
DataStream<Integer> source = env.fromData(1, 2, 3);
DataStream<Integer> keyedResult = source.keyBy(value -> value).map(new NoOpIntMap());
keyedResult.sinkTo(new DiscardingSink<>());
StreamGraph graph = env.getStreamGraph();
StreamNode keyedResultNode = graph.getStreamNode(keyedResult.getId());
StreamPartitioner<?> streamPartitioner =
keyedResultNode.getInEdges().get(0).getPartitioner();
} | Tests that the KeyGroupStreamPartitioner is properly set up with the correct value of
maximum parallelism.
@Test
void testMaxParallelismForwarding() {
int globalMaxParallelism = 42;
int keyedResult2MaxParallelism = 17;
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setMaxParallelism(globalMaxParallelism);
DataStream<Integer> source = env.fromData(1, 2, 3);
DataStream<Integer> keyedResult1 = source.keyBy(value -> value).map(new NoOpIntMap());
DataStream<Integer> keyedResult2 =
keyedResult1
.keyBy(value -> value)
.map(new NoOpIntMap())
.setMaxParallelism(keyedResult2MaxParallelism);
keyedResult2.sinkTo(new DiscardingSink<>());
StreamGraph graph = env.getStreamGraph();
StreamNode keyedResult1Node = graph.getStreamNode(keyedResult1.getId());
StreamNode keyedResult2Node = graph.getStreamNode(keyedResult2.getId());
assertThat(keyedResult1Node.getMaxParallelism()).isEqualTo(globalMaxParallelism);
assertThat(keyedResult2Node.getMaxParallelism()).isEqualTo(keyedResult2MaxParallelism);
} | Tests that the global and operator-wide max parallelism setting is respected. | testMaxParallelismForwarding | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | Apache-2.0 |
@Test
void testAutoMaxParallelism() {
int globalParallelism = 42;
int mapParallelism = 17;
int maxParallelism = 21;
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(globalParallelism);
DataStream<Integer> source = env.fromData(1, 2, 3);
DataStream<Integer> keyedResult1 = source.keyBy(value -> value).map(new NoOpIntMap());
DataStream<Integer> keyedResult2 =
keyedResult1
.keyBy(value -> value)
.map(new NoOpIntMap())
.setParallelism(mapParallelism);
DataStream<Integer> keyedResult3 =
keyedResult2
.keyBy(value -> value)
.map(new NoOpIntMap())
.setMaxParallelism(maxParallelism);
DataStream<Integer> keyedResult4 =
keyedResult3
.keyBy(value -> value)
.map(new NoOpIntMap())
.setMaxParallelism(maxParallelism)
.setParallelism(mapParallelism);
keyedResult4.sinkTo(new DiscardingSink<>());
StreamGraph graph = env.getStreamGraph();
StreamNode keyedResult3Node = graph.getStreamNode(keyedResult3.getId());
StreamNode keyedResult4Node = graph.getStreamNode(keyedResult4.getId());
assertThat(keyedResult3Node.getMaxParallelism()).isEqualTo(maxParallelism);
assertThat(keyedResult4Node.getMaxParallelism()).isEqualTo(maxParallelism);
} | Tests that the max parallelism is automatically set to the parallelism if it has not been
specified. | testAutoMaxParallelism | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | Apache-2.0 |
@Test
void testMaxParallelismWithConnectedKeyedStream() {
int maxParallelism = 42;
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Long> input1 = env.fromSequence(1, 4).setMaxParallelism(128);
DataStream<Long> input2 = env.fromSequence(1, 4).setMaxParallelism(129);
env.getConfig().setMaxParallelism(maxParallelism);
DataStream<Long> keyedResult =
input1.connect(input2)
.keyBy(value -> value, value -> value)
.map(new NoOpLongCoMap());
keyedResult.sinkTo(new DiscardingSink<>());
StreamGraph graph = env.getStreamGraph();
StreamNode keyedResultNode = graph.getStreamNode(keyedResult.getId());
StreamPartitioner<?> streamPartitioner1 =
keyedResultNode.getInEdges().get(0).getPartitioner();
StreamPartitioner<?> streamPartitioner2 =
keyedResultNode.getInEdges().get(1).getPartitioner();
} | Tests that the max parallelism is properly set for connected streams. | testMaxParallelismWithConnectedKeyedStream | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamGraphGeneratorTest.java | Apache-2.0 |
@Override
public void timeout(Integer input, ResultFuture<Integer> resultFuture) throws Exception {
resultFuture.complete(Collections.singletonList(input * 3));
} | A special {@link LazyAsyncFunction} for timeout handling. Complete the result future with 3
times the input when the timeout occurred. | timeout | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Override
public int compare(Object o1, Object o2) {
if (o1 instanceof Watermark || o2 instanceof Watermark) {
return 0;
} else {
StreamRecord<Integer> sr0 = (StreamRecord<Integer>) o1;
StreamRecord<Integer> sr1 = (StreamRecord<Integer>) o2;
if (sr0.getTimestamp() != sr1.getTimestamp()) {
// Long.compare avoids the overflow that casting a long difference to int could cause
return Long.compare(sr0.getTimestamp(), sr1.getTimestamp());
}
return sr0.getValue().compareTo(sr1.getValue());
}
} | A {@link Comparator} to compare {@link StreamRecord} while sorting them. | compare | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Test
void testEventTimeOrdered() throws Exception {
testEventTime(AsyncDataStream.OutputMode.ORDERED);
} | Test the AsyncWaitOperator with ordered mode and event time. | testEventTimeOrdered | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Test
void testTimeoutCleanup() throws Exception {
OneInputStreamOperatorTestHarness<Integer, Integer> harness =
createTestHarness(
new MyAsyncFunction(), TIMEOUT, 1, AsyncDataStream.OutputMode.UNORDERED);
harness.open();
synchronized (harness.getCheckpointLock()) {
harness.processElement(42, 1L);
}
synchronized (harness.getCheckpointLock()) {
harness.endInput();
harness.close();
}
// check that we actually outputted the result of the single input
assertThat(harness.getOutput()).containsOnly(new StreamRecord<>(42 * 2, 1L));
// check that we have cancelled our registered timeout
assertThat(harness.getProcessingTimeService().getNumActiveTimers()).isZero();
} | FLINK-5652 Tests that registered timers are properly canceled upon completion of a {@link
StreamElement} in order to avoid resource leaks because TriggerTasks hold a reference on the
StreamRecordQueueEntry. | testTimeoutCleanup | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Override
public void asyncInvoke(Integer input, ResultFuture<Integer> resultFuture)
throws Exception {
resultFuture.completeExceptionally(new Exception("Test exception"));
} | AsyncFunction which completes the result with an {@link Exception}. | asyncInvoke | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Test
void testProcessingTimeRepeatedCompleteUnorderedWithRetry() throws Exception {
testProcessingTimeWithRetry(
AsyncDataStream.OutputMode.UNORDERED,
new IllWrittenOddInputEmptyResultAsyncFunction());
} | Test the AsyncWaitOperator with an ill-written async function under unordered mode and
processing time. | testProcessingTimeRepeatedCompleteUnorderedWithRetry | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Test
void testProcessingTimeWithTimeoutFunctionUnorderedWithRetry() throws Exception {
testProcessingTimeAlwaysTimeoutFunctionWithRetry(AsyncDataStream.OutputMode.UNORDERED);
} | Test the AsyncWaitOperator with an always-timeout async function under unordered mode and
processing time. | testProcessingTimeWithTimeoutFunctionUnorderedWithRetry | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java | Apache-2.0 |
@Test
void testCompletionOrder() {
final OrderedStreamElementQueue<Integer> queue = new OrderedStreamElementQueue<>(4);
ResultFuture<Integer> entry1 = putSuccessfully(queue, new StreamRecord<>(1, 0L));
ResultFuture<Integer> entry2 = putSuccessfully(queue, new StreamRecord<>(2, 1L));
putSuccessfully(queue, new Watermark(2L));
ResultFuture<Integer> entry4 = putSuccessfully(queue, new StreamRecord<>(3, 3L));
assertThat(popCompleted(queue)).isEmpty();
assertThat(queue.size()).isEqualTo(4L);
assertThat(queue.isEmpty()).isFalse();
entry2.complete(Collections.singleton(11));
entry4.complete(Collections.singleton(13));
assertThat(popCompleted(queue)).isEmpty();
assertThat(queue.size()).isEqualTo(4L);
assertThat(queue.isEmpty()).isFalse();
entry1.complete(Collections.singleton(10));
List<StreamElement> expected =
Arrays.asList(
new StreamRecord<>(10, 0L),
new StreamRecord<>(11, 1L),
new Watermark(2L),
new StreamRecord<>(13, 3L));
assertThat(popCompleted(queue)).isEqualTo(expected);
assertThat(queue.size()).isZero();
assertThat(queue.isEmpty()).isTrue();
} | Tests that only the head element is pulled from the ordered queue if it has been completed. | testCompletionOrder | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/OrderedStreamElementQueueTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/OrderedStreamElementQueueTest.java | Apache-2.0 |
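The ordered-queue contract exercised above — nothing is emittable until the head entry completes, and emission then proceeds strictly in insertion order — can be sketched independently of Flink. `OrderedQueueSketch` is a hypothetical stand-in, not the real `OrderedStreamElementQueue`:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch of an ordered completion queue: entries leave the queue in the order
// they were inserted, and only while the current head is already completed.
class OrderedQueueSketch {
    private final ArrayDeque<CompletableFuture<Integer>> queue = new ArrayDeque<>();

    CompletableFuture<Integer> put() {
        CompletableFuture<Integer> entry = new CompletableFuture<>();
        queue.add(entry);
        return entry;
    }

    // Pop results from the head while the head entry is completed.
    List<Integer> popCompleted() {
        List<Integer> out = new ArrayList<>();
        while (!queue.isEmpty() && queue.peek().isDone()) {
            out.add(queue.poll().join());
        }
        return out;
    }
}
```

Completing a later entry first (as the test does with `entry2` and `entry4`) yields nothing until the head completes.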
static List<StreamElement> popCompleted(StreamElementQueue<Integer> queue) {
final List<StreamElement> completed = new ArrayList<>();
TimestampedCollector<Integer> collector =
new TimestampedCollector<>(new CollectorOutput<>(completed));
while (queue.hasCompletedElements()) {
queue.emitCompletedElement(collector);
}
collector.close();
return completed;
} | Pops all completed elements from the head of this queue.
@return Completed elements, or an empty list if none exist. | popCompleted | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/QueueUtil.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/QueueUtil.java | Apache-2.0 |
@TestTemplate
void testPutOnFull() {
final StreamElementQueue<Integer> queue = createStreamElementQueue(1);
// fill up queue
ResultFuture<Integer> resultFuture = putSuccessfully(queue, new StreamRecord<>(42, 0L));
assertThat(queue.size()).isOne();
// cannot add more
putUnsuccessfully(queue, new StreamRecord<>(43, 1L));
// popping the completed element frees the queue again
resultFuture.complete(Collections.singleton(42 * 42));
assertThat(popCompleted(queue)).containsExactly(new StreamRecord<>(42 * 42, 0L));
// now the put operation should complete
putSuccessfully(queue, new StreamRecord<>(43, 1L));
} | Tests that a put operation fails if the queue is full. | testPutOnFull | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/StreamElementQueueTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/StreamElementQueueTest.java | Apache-2.0 |
@TestTemplate
void testWatermarkOnly() {
final StreamElementQueue<Integer> queue = createStreamElementQueue(2);
putSuccessfully(queue, new Watermark(2L));
putSuccessfully(queue, new Watermark(5L));
assertThat(queue.size()).isEqualTo(2);
assertThat(queue.isEmpty()).isFalse();
assertThat(popCompleted(queue)).containsExactly(new Watermark(2L), new Watermark(5L));
assertThat(queue.size()).isZero();
assertThat(popCompleted(queue)).isEmpty();
} | Tests two adjacent watermarks can be processed successfully. | testWatermarkOnly | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/StreamElementQueueTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/StreamElementQueueTest.java | Apache-2.0 |
@Test
void testCompletionOrder() {
final UnorderedStreamElementQueue<Integer> queue = new UnorderedStreamElementQueue<>(8);
ResultFuture<Integer> record1 = putSuccessfully(queue, new StreamRecord<>(1, 0L));
ResultFuture<Integer> record2 = putSuccessfully(queue, new StreamRecord<>(2, 1L));
putSuccessfully(queue, new Watermark(2L));
ResultFuture<Integer> record3 = putSuccessfully(queue, new StreamRecord<>(3, 3L));
ResultFuture<Integer> record4 = putSuccessfully(queue, new StreamRecord<>(4, 4L));
putSuccessfully(queue, new Watermark(5L));
ResultFuture<Integer> record5 = putSuccessfully(queue, new StreamRecord<>(5, 6L));
ResultFuture<Integer> record6 = putSuccessfully(queue, new StreamRecord<>(6, 7L));
assertThat(popCompleted(queue)).isEmpty();
assertThat(queue.size()).isEqualTo(8);
assertThat(queue.isEmpty()).isFalse();
// this should not make any item completed, because R3 is behind W1
record3.complete(Arrays.asList(13));
assertThat(popCompleted(queue)).isEmpty();
assertThat(queue.size()).isEqualTo(8);
assertThat(queue.isEmpty()).isFalse();
record2.complete(Arrays.asList(12));
assertThat(popCompleted(queue)).containsExactly(new StreamRecord<>(12, 1L));
assertThat(queue.size()).isEqualTo(7);
assertThat(queue.isEmpty()).isFalse();
// Should not be completed because R1 has not been completed yet
record6.complete(Arrays.asList(16));
record4.complete(Arrays.asList(14));
assertThat(popCompleted(queue)).isEmpty();
assertThat(queue.size()).isEqualTo(7);
assertThat(queue.isEmpty()).isFalse();
// Now W1, R3, R4 and W2 are completed and should be pollable
record1.complete(Arrays.asList(11));
assertThat(popCompleted(queue))
.containsExactly(
new StreamRecord<>(11, 0L),
new Watermark(2L),
new StreamRecord<>(13, 3L),
new StreamRecord<>(14, 4L),
new Watermark(5L),
new StreamRecord<>(16, 7L));
assertThat(queue.size()).isOne();
assertThat(queue.isEmpty()).isFalse();
// only R5 left in the queue
record5.complete(Arrays.asList(15));
assertThat(popCompleted(queue)).containsExactly(new StreamRecord<>(15, 6L));
assertThat(queue.size()).isZero();
assertThat(queue.isEmpty()).isTrue();
assertThat(popCompleted(queue)).isEmpty();
} | Tests that only elements before the oldest watermark are returned if they are completed. | testCompletionOrder | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueueTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/queue/UnorderedStreamElementQueueTest.java | Apache-2.0 |
@Override
public void run() throws Exception {
Random random = new Random();
functionWrapper.openFunction();
while (data.size() > 0) {
// feed some data to the function
int size = Math.min(data.size(), random.nextInt(MAX_RESULTS_PER_BATCH * 3) + 1);
for (int i = 0; i < size; i++) {
functionWrapper.invoke(data.removeFirst());
}
if (!failedBefore && data.size() < originalData.size() / 2) {
if (random.nextBoolean()) {
// with 50% chance we fail half-way
// we shuffle the data to simulate jobs whose result order is undetermined
data = new LinkedList<>(originalData);
Collections.shuffle(data);
functionWrapper.closeFunctionAbnormally();
functionWrapper.openFunction();
}
failedBefore = true;
}
if (random.nextBoolean()) {
Thread.sleep(random.nextInt(10));
}
}
functionWrapper.closeFunctionNormally();
jobFinished = true;
} | A {@link RunnableWithException} feeding data to the function. With 50% probability, it fails
once after half of the data has been fed and replays the shuffled data. | run | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | Apache-2.0 |
@Override
public void run() throws Exception {
Random random = new Random();
functionWrapper.openFunctionWithState();
while (data.size() > 0) {
// countdown each on-going checkpoint
ListIterator<CheckpointCountdown> iterator = checkpointCountdowns.listIterator();
while (iterator.hasNext()) {
CheckpointCountdown countdown = iterator.next();
if (countdown.id < lastSuccessCheckpointId) {
// this checkpoint is stale, throw it away
iterator.remove();
} else if (countdown.tick()) {
// complete a checkpoint
checkpointedData = countdown.data;
functionWrapper.checkpointComplete(countdown.id);
lastSuccessCheckpointId = countdown.id;
iterator.remove();
}
}
int r = random.nextInt(10);
if (r < 6) {
// with 60% chance we add some data
int size = Math.min(data.size(), random.nextInt(MAX_RESULTS_PER_BATCH * 3) + 1);
for (int i = 0; i < size; i++) {
functionWrapper.invoke(data.removeFirst());
}
} else if (r < 9) {
// with 30% chance we make a checkpoint
checkpointId++;
if (random.nextBoolean()) {
// with 50% chance this checkpoint will succeed in the future
checkpointCountdowns.add(
new CheckpointCountdown(checkpointId, data, random.nextInt(3) + 1));
}
functionWrapper.checkpointFunction(checkpointId);
} else {
// with 10% chance we fail
checkpointCountdowns.clear();
// we shuffle data to simulate jobs whose result order is undetermined
Collections.shuffle(checkpointedData);
data = new LinkedList<>(checkpointedData);
functionWrapper.closeFunctionAbnormally();
functionWrapper.openFunctionWithState();
}
if (random.nextBoolean()) {
Thread.sleep(random.nextInt(10));
}
}
functionWrapper.closeFunctionNormally();
jobFinished = true;
} | A {@link RunnableWithException} feeding data to the function. It randomly triggers checkpoints
or fails. | run | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | Apache-2.0 |
private boolean tick() {
if (countdown > 0) {
countdown--;
return countdown == 0;
}
return false;
} | Countdown for a checkpoint which will succeed in the future. | tick | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | Apache-2.0 |
@Override
public void run() {
try {
runnable.run();
} catch (Exception e) {
throw new RuntimeException(e);
}
} | A subclass of thread which wraps a {@link RunnableWithException} for the ease of tests. | run | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunctionRandomITCase.java | Apache-2.0 |
@Test
void testNodeHashIsDeterministic() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
DataStream<String> src0 =
env.addSource(new NoOpSourceFunction(), "src0")
.map(new NoOpMapFunction())
.filter(new NoOpFilterFunction())
.keyBy(new NoOpKeySelector())
.reduce(new NoOpReduceFunction())
.name("reduce");
DataStream<String> src1 =
env.addSource(new NoOpSourceFunction(), "src1").filter(new NoOpFilterFunction());
DataStream<String> src2 =
env.addSource(new NoOpSourceFunction(), "src2").filter(new NoOpFilterFunction());
src0.map(new NoOpMapFunction())
.union(src1, src2)
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink");
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
final Map<JobVertexID, String> ids = rememberIds(jobGraph);
// Do it again and verify
env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
src0 =
env.addSource(new NoOpSourceFunction(), "src0")
.map(new NoOpMapFunction())
.filter(new NoOpFilterFunction())
.keyBy(new NoOpKeySelector())
.reduce(new NoOpReduceFunction())
.name("reduce");
src1 = env.addSource(new NoOpSourceFunction(), "src1").filter(new NoOpFilterFunction());
src2 = env.addSource(new NoOpSourceFunction(), "src2").filter(new NoOpFilterFunction());
src0.map(new NoOpMapFunction())
.union(src1, src2)
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink");
jobGraph = env.getStreamGraph().getJobGraph();
verifyIdsEqual(jobGraph, ids);
} | Creates the same flow twice and checks that all IDs are the same.
<pre>
[ (src) -> (map) -> (filter) -> (reduce) -> (map) -> (sink) ]
//
[ (src) -> (filter) ] -------------------------------//
/
[ (src) -> (filter) ] ------------------------------/
</pre> | testNodeHashIsDeterministic | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testNodeHashIdenticalSources() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.disableOperatorChaining();
DataStream<String> src0 = env.addSource(new NoOpSourceFunction());
DataStream<String> src1 = env.addSource(new NoOpSourceFunction());
src0.union(src1)
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
List<JobVertex> vertices = jobGraph.getVerticesSortedTopologicallyFromSources();
assertThat(vertices.get(0).isInputVertex()).isTrue();
assertThat(vertices.get(1).isInputVertex()).isTrue();
assertThat(vertices.get(0).getID()).isNotNull();
assertThat(vertices.get(1).getID()).isNotNull();
assertThat(vertices.get(0).getID()).isNotEqualTo(vertices.get(1).getID());
} | Tests that there are no collisions with two identical sources.
<pre>
[ (src0) ] --\
+--> [ (sink) ]
[ (src1) ] --/
</pre> | testNodeHashIdenticalSources | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testNodeHashAfterSourceUnchaining() throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.addSource(new NoOpSourceFunction())
.map(new NoOpMapFunction())
.filter(new NoOpFilterFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
JobVertexID sourceId = jobGraph.getVerticesSortedTopologicallyFromSources().get(0).getID();
env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.addSource(new NoOpSourceFunction())
.map(new NoOpMapFunction())
.startNewChain()
.filter(new NoOpFilterFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
jobGraph = env.getStreamGraph().getJobGraph();
JobVertexID unchainedSourceId =
jobGraph.getVerticesSortedTopologicallyFromSources().get(0).getID();
assertThat(unchainedSourceId).isNotEqualTo(sourceId);
} | Tests that (un)chaining affects the node hash (for sources).
<pre>
A (chained): [ (src0) -> (map) -> (filter) -> (sink) ]
B (unchained): [ (src0) ] -> [ (map) -> (filter) -> (sink) ]
</pre>
<p>The hashes for the single vertex in A and the source vertex in B need to be different. | testNodeHashAfterSourceUnchaining | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testNodeHashAfterIntermediateUnchaining() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.addSource(new NoOpSourceFunction())
.map(new NoOpMapFunction())
.name("map")
.startNewChain()
.filter(new NoOpFilterFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
JobVertex chainedMap = jobGraph.getVerticesSortedTopologicallyFromSources().get(1);
assertThat(chainedMap.getName()).startsWith("map");
JobVertexID chainedMapId = chainedMap.getID();
env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.addSource(new NoOpSourceFunction())
.map(new NoOpMapFunction())
.name("map")
.startNewChain()
.filter(new NoOpFilterFunction())
.startNewChain()
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
jobGraph = env.getStreamGraph().getJobGraph();
JobVertex unchainedMap = jobGraph.getVerticesSortedTopologicallyFromSources().get(1);
assertThat(unchainedMap.getName()).isEqualTo("map");
JobVertexID unchainedMapId = unchainedMap.getID();
assertThat(unchainedMapId).isNotEqualTo(chainedMapId);
} | Tests that (un)chaining affects the node hash (for intermediate nodes).
<pre>
A (chained): [ (src0) ] -> [ (map) -> (filter) -> (sink) ]
B (unchained): [ (src0) ] -> [ (map) ] -> [ (filter) -> (sink) ]
</pre>
<p>The hashes for the map vertex in A and in B need to be different. | testNodeHashAfterIntermediateUnchaining | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testNodeHashIdenticalNodes() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.disableOperatorChaining();
DataStream<String> src = env.addSource(new NoOpSourceFunction());
src.map(new NoOpMapFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
src.map(new NoOpMapFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
Set<JobVertexID> vertexIds = new HashSet<>();
for (JobVertex vertex : jobGraph.getVertices()) {
assertThat(vertexIds.add(vertex.getID())).isTrue();
}
} | Tests that there are no collisions with two identical intermediate nodes connected to the
same predecessor.
<pre>
/-> [ (map) ] -> [ (sink) ]
[ (src) ] -+
\-> [ (map) ] -> [ (sink) ]
</pre> | testNodeHashIdenticalNodes | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testChangedOperatorName() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.addSource(new NoOpSourceFunction(), "A").map(new NoOpMapFunction());
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
JobVertexID expected = jobGraph.getVerticesAsArray()[0].getID();
env = StreamExecutionEnvironment.createLocalEnvironment();
env.addSource(new NoOpSourceFunction(), "B").map(new NoOpMapFunction());
jobGraph = env.getStreamGraph().getJobGraph();
JobVertexID actual = jobGraph.getVerticesAsArray()[0].getID();
assertThat(actual).isEqualTo(expected);
} | Tests that a changed operator name does not affect the hash. | testChangedOperatorName | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
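One way to see why renaming an operator cannot change its ID while re-chaining can: if the node hash is derived only from structural properties of the graph, the display name never enters the digest. The sketch below is illustrative only and is not Flink's actual hashing algorithm:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative node-hash sketch: digest structural properties (position in the
// topology, chaining) and deliberately leave the operator name out.
class NodeHashSketch {
    static String hash(int topologicalIndex, boolean chainedToPredecessor) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            String structure = topologicalIndex + "|" + chainedToPredecessor;
            byte[] digest = md.digest(structure.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Two builds of the same topology then produce the same IDs, while breaking a chain changes the ID of the affected vertex.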
@Test
void testManualHashAssignment() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.disableOperatorChaining();
DataStream<String> src =
env.addSource(new NoOpSourceFunction()).name("source").uid("source");
src.map(new NoOpMapFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink0")
.uid("sink0");
src.map(new NoOpMapFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink1")
.uid("sink1");
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
Set<JobVertexID> ids = new HashSet<>();
for (JobVertex vertex : jobGraph.getVertices()) {
assertThat(ids.add(vertex.getID())).isTrue();
}
// Resubmit a slightly different program
env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.disableOperatorChaining();
src =
env.addSource(new NoOpSourceFunction())
// New map function, should be mapped to the source state
.map(new NoOpMapFunction())
.name("source")
.uid("source");
src.map(new NoOpMapFunction())
.keyBy(new NoOpKeySelector())
.reduce(new NoOpReduceFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink0")
.uid("sink0");
src.map(new NoOpMapFunction())
.keyBy(new NoOpKeySelector())
.reduce(new NoOpReduceFunction())
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>())
.name("sink1")
.uid("sink1");
JobGraph newJobGraph = env.getStreamGraph().getJobGraph();
assertThat(newJobGraph.getJobID()).isNotEqualTo(jobGraph.getJobID());
for (JobVertex vertex : newJobGraph.getVertices()) {
// Verify that the expected IDs are the same
if (vertex.getName().endsWith("source")
|| vertex.getName().endsWith("sink0")
|| vertex.getName().endsWith("sink1")) {
assertThat(vertex.getID()).isIn(ids);
}
}
} | Tests that manual hash assignments are mapped to the same operator ID.
<pre>
/-> [ (map) ] -> [ (sink)@sink0 ]
[ (src)@source ] -+
\-> [ (map) ] -> [ (sink)@sink1 ]
</pre>
<pre>
/-> [ (map) ] -> [ (reduce) ] -> [ (sink)@sink0 ]
[ (src)@source ] -+
\-> [ (map) ] -> [ (reduce) ] -> [ (sink)@sink1 ]
</pre> | testManualHashAssignment | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testManualHashAssignmentCollisionThrowsException() {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.disableOperatorChaining();
env.addSource(new NoOpSourceFunction())
.uid("source")
.map(new NoOpMapFunction())
.uid("source") // Collision
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
// This call is necessary to generate the job graph
assertThatThrownBy(() -> env.getStreamGraph().getJobGraph())
.isInstanceOf(IllegalArgumentException.class);
} | Tests that a collision on the manual hash throws an Exception. | testManualHashAssignmentCollisionThrowsException | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testManualHashAssignmentForIntermediateNodeInChain() throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
env.setParallelism(4);
env.addSource(new NoOpSourceFunction())
// Intermediate chained node
.map(new NoOpMapFunction())
.uid("map")
.sinkTo(new org.apache.flink.streaming.api.functions.sink.v2.DiscardingSink<>());
env.getStreamGraph().getJobGraph();
} | Tests that a manual hash for an intermediate chain node is accepted. | testManualHashAssignmentForIntermediateNodeInChain | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
private Map<JobVertexID, String> rememberIds(JobGraph jobGraph) {
final Map<JobVertexID, String> ids = new HashMap<>();
for (JobVertex vertex : jobGraph.getVertices()) {
ids.put(vertex.getID(), vertex.getName());
}
return ids;
} | Returns a {@link JobVertexID} to vertex name mapping for the given graph. | rememberIds | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
private void verifyIdsEqual(JobGraph jobGraph, Map<JobVertexID, String> ids) {
// Verify same number of vertices
assertThat(ids).hasSize(jobGraph.getNumberOfVertices());
// Verify that all IDs->name mappings are identical
for (JobVertex vertex : jobGraph.getVertices()) {
String expectedName = ids.get(vertex.getID());
assertThat(vertex.getName()).isNotNull().isEqualTo(expectedName);
}
} | Verifies that each {@link JobVertexID} of the {@link JobGraph} is contained in the given map
and mapped to the same vertex name. | verifyIdsEqual | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/graph/StreamingJobGraphGeneratorNodeHashTest.java | Apache-2.0 |
@Test
void testTriggerHandlesAllOnTimerCalls() throws Exception {
TriggerTestHarness<Object, TimeWindow> testHarness =
new TriggerTestHarness<>(
ContinuousEventTimeTrigger.<TimeWindow>of(Duration.ofMillis(5)),
new TimeWindow.Serializer());
assertThat(testHarness.numStateEntries()).isZero();
assertThat(testHarness.numProcessingTimeTimers()).isZero();
assertThat(testHarness.numEventTimeTimers()).isZero();
// this will make the elements we now process fall into late windows, i.e. no trigger state
// will be created
testHarness.advanceWatermark(10);
// late fires immediately
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(0, 2)))
.isEqualTo(TriggerResult.FIRE);
// simulate a GC timer firing
testHarness.invokeOnEventTime(20, new TimeWindow(0, 2));
} | Verify that the trigger doesn't fail with an NPE if we insert a timer firing when there is no
trigger state. | testTriggerHandlesAllOnTimerCalls | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/ContinuousEventTimeTriggerTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/ContinuousEventTimeTriggerTest.java | Apache-2.0 |
@Test
void testWindowSeparationAndFiring() throws Exception {
TriggerTestHarness<Object, TimeWindow> testHarness =
new TriggerTestHarness<>(
ContinuousEventTimeTrigger.<TimeWindow>of(Duration.ofHours(1)),
new TimeWindow.Serializer());
// inject several elements
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(0, 2)))
.isEqualTo(TriggerResult.CONTINUE);
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(0, 2)))
.isEqualTo(TriggerResult.CONTINUE);
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(0, 2)))
.isEqualTo(TriggerResult.CONTINUE);
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(2, 4)))
.isEqualTo(TriggerResult.CONTINUE);
assertThat(testHarness.processElement(new StreamRecord<>(1), new TimeWindow(2, 4)))
.isEqualTo(TriggerResult.CONTINUE);
assertThat(testHarness.numStateEntries()).isEqualTo(2);
assertThat(testHarness.numProcessingTimeTimers()).isZero();
assertThat(testHarness.numEventTimeTimers()).isEqualTo(4);
assertThat(testHarness.numEventTimeTimers(new TimeWindow(0, 2))).isEqualTo(2);
assertThat(testHarness.numEventTimeTimers(new TimeWindow(2, 4))).isEqualTo(2);
Collection<Tuple2<TimeWindow, TriggerResult>> triggerResults =
testHarness.advanceWatermark(2);
boolean sawFiring = false;
for (Tuple2<TimeWindow, TriggerResult> r : triggerResults) {
if (r.f0.equals(new TimeWindow(0, 2))) {
sawFiring = true;
assertThat(r.f1).isEqualTo(TriggerResult.FIRE);
}
}
assertThat(sawFiring).isTrue();
assertThat(testHarness.numStateEntries()).isEqualTo(2);
assertThat(testHarness.numProcessingTimeTimers()).isZero();
assertThat(testHarness.numEventTimeTimers()).isEqualTo(3);
assertThat(testHarness.numEventTimeTimers(new TimeWindow(0, 2))).isOne();
assertThat(testHarness.numEventTimeTimers(new TimeWindow(2, 4))).isEqualTo(2);
triggerResults = testHarness.advanceWatermark(4);
sawFiring = false;
for (Tuple2<TimeWindow, TriggerResult> r : triggerResults) {
if (r.f0.equals(new TimeWindow(2, 4))) {
sawFiring = true;
assertThat(r.f1).isEqualTo(TriggerResult.FIRE);
}
}
assertThat(sawFiring).isTrue();
assertThat(testHarness.numStateEntries()).isEqualTo(2);
assertThat(testHarness.numProcessingTimeTimers()).isZero();
assertThat(testHarness.numEventTimeTimers()).isEqualTo(2);
} | Verify that state of separate windows does not leak into other windows. | testWindowSeparationAndFiring | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/ContinuousEventTimeTriggerTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/ContinuousEventTimeTriggerTest.java | Apache-2.0 |
@ParameterizedTest(name = "Enable async state = {0}")
@ValueSource(booleans = {false, true})
void testTimeEvictorNoTimestamp(boolean enableAsyncState) throws Exception {
AtomicInteger closeCalled = new AtomicInteger(0);
final int triggerCount = 2;
final boolean evictAfter = true;
@SuppressWarnings({"unchecked", "rawtypes"})
TypeSerializer<StreamRecord<Tuple2<String, Integer>>> streamRecordSerializer =
(TypeSerializer<StreamRecord<Tuple2<String, Integer>>>)
new StreamElementSerializer(
STRING_INT_TUPLE.createSerializer(new SerializerConfigImpl()));
ListStateDescriptor<StreamRecord<Tuple2<String, Integer>>> stateDesc =
new ListStateDescriptor<>("window-contents", streamRecordSerializer);
EvictingWindowOperatorFactory<
String, Tuple2<String, Integer>, Tuple2<String, Integer>, GlobalWindow>
operator =
new EvictingWindowOperatorFactory<>(
GlobalWindows.create(),
new GlobalWindow.Serializer(),
new TupleKeySelector(),
BasicTypeInfo.STRING_TYPE_INFO.createSerializer(
new SerializerConfigImpl()),
stateDesc,
new InternalIterableWindowFunction<>(
new RichSumReducer<GlobalWindow>(closeCalled)),
CountTrigger.of(triggerCount),
TimeEvictor.of(Duration.ofSeconds(2), evictAfter),
0,
null /* late data output tag */);
WindowOperatorBuilder<Tuple2<String, Integer>, String, GlobalWindow> builder =
new WindowOperatorBuilder<>(
GlobalWindows.create(),
null,
new ExecutionConfig(),
STRING_INT_TUPLE,
new TupleKeySelector(),
TypeInformation.of(String.class))
.asyncTrigger(AsyncCountTrigger.of(triggerCount));
builder.evictor(TimeEvictor.of(Duration.ofSeconds(2), evictAfter));
OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple2<String, Integer>>
testHarness =
enableAsyncState
? createAsyncTestHarness(
builder.asyncApply(new RichSumReducer<>(closeCalled)))
: new KeyedOneInputStreamOperatorTestHarness<>(
operator,
new TupleKeySelector(),
BasicTypeInfo.STRING_TYPE_INFO);
ConcurrentLinkedQueue<Object> expectedOutput = new ConcurrentLinkedQueue<>();
testHarness.open();
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
expectedOutput.add(new StreamRecord<>(new Tuple2<>("key2", 2), Long.MAX_VALUE));
expectedOutput.add(new StreamRecord<>(new Tuple2<>("key1", 2), Long.MAX_VALUE));
expectedOutput.add(new StreamRecord<>(new Tuple2<>("key2", 4), Long.MAX_VALUE));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new ResultSortComparator());
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1)));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1)));
expectedOutput.add(new StreamRecord<>(new Tuple2<>("key1", 4), Long.MAX_VALUE));
expectedOutput.add(new StreamRecord<>(new Tuple2<>("key2", 6), Long.MAX_VALUE));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new ResultSortComparator());
testHarness.close();
assertThat(closeCalled).as("Close was not called.").hasValue(1);
} | Tests the time evictor when there is no timestamp information in the StreamRecord. No element will be
evicted from the window. | testTimeEvictorNoTimestamp | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/EvictingWindowOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/EvictingWindowOperatorTest.java | Apache-2.0 |
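The no-eviction behavior the test relies on can be sketched as follows, under the assumption that the evictor's cutoff is derived from the maximum observed timestamp and that a window whose records carry no timestamps evicts nothing (class and method names are illustrative, not Flink's):

```java
import java.util.ArrayList;
import java.util.List;

public class TimeEvictorSketch {
    static final long NO_TIMESTAMP = Long.MIN_VALUE;

    /**
     * Keeps only timestamps newer than (max - windowSizeMillis). If no element
     * carries a timestamp, nothing is evicted -- which is why the counts in the
     * test above keep accumulating (2, 4, 6, ...) instead of being trimmed.
     */
    static List<Long> evict(List<Long> timestamps, long windowSizeMillis) {
        long max = NO_TIMESTAMP;
        for (long t : timestamps) {
            max = Math.max(max, t);
        }
        if (max == NO_TIMESTAMP) {
            return new ArrayList<>(timestamps); // no timestamp information: evict nothing
        }
        long cutoff = max - windowSizeMillis;
        List<Long> kept = new ArrayList<>();
        for (long t : timestamps) {
            if (t > cutoff) {
                kept.add(t);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(evict(List.of(1000L, 3000L, 5000L), 2000)); // [5000]
    }
}
```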
@Test
void testSessionWindowsWithCountTrigger() throws Exception {
closeCalled.set(0);
final int sessionSize = 3;
ListStateDescriptor<Tuple2<String, Integer>> stateDesc =
new ListStateDescriptor<>(
"window-contents",
STRING_INT_TUPLE.createSerializer(new SerializerConfigImpl()));
WindowOperatorFactory<
String,
Tuple2<String, Integer>,
Iterable<Tuple2<String, Integer>>,
Tuple3<String, Long, Long>,
TimeWindow>
operator =
new WindowOperatorFactory<>(
EventTimeSessionWindows.withGap(Duration.ofSeconds(sessionSize)),
new TimeWindow.Serializer(),
new TupleKeySelector(),
BasicTypeInfo.STRING_TYPE_INFO.createSerializer(
new SerializerConfigImpl()),
stateDesc,
new InternalIterableWindowFunction<>(new SessionWindowFunction()),
PurgingTrigger.of(CountTrigger.of(4)),
0,
null /* late data output tag */);
OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple3<String, Long, Long>>
testHarness = createTestHarness(operator);
ConcurrentLinkedQueue<Object> expectedOutput = new ConcurrentLinkedQueue<>();
testHarness.open();
// add elements out-of-order
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 0));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 2), 1000));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 3), 2500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 4), 3500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 10));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 2), 1000));
// do a snapshot, close and restore again
OperatorSubtaskState snapshot = testHarness.snapshot(0L, 0L);
testHarness.close();
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key2-10", 0L, 6500L), 6499));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new Tuple3ResultSortComparator());
expectedOutput.clear();
testHarness = createTestHarness(operator);
testHarness.setup();
testHarness.initializeState(snapshot);
testHarness.open();
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 3), 2500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 6000));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 2), 6500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 3), 7000));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new Tuple3ResultSortComparator());
// add an element that merges the two "key1" sessions, they should now have count 6, and
// therefore fire
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 10), 4500));
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key1-22", 10L, 10000L), 9999L));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new Tuple3ResultSortComparator());
testHarness.close();
} | This tests whether merging works correctly with the CountTrigger. | testSessionWindowsWithCountTrigger | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java | Apache-2.0 |
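The merge step in the test — the element at 4500 joining the two "key1" sessions into one window `(10, 10000)` — follows standard interval merging. A sketch with the test's own numbers (method names are illustrative):

```java
public class SessionMergeSketch {

    /** True if session windows [s1, e1) and [s2, e2) overlap and should merge. */
    static boolean intersects(long s1, long e1, long s2, long e2) {
        return s1 < e2 && s2 < e1;
    }

    /** Covering window of two mergeable sessions. */
    static long[] cover(long s1, long e1, long s2, long e2) {
        return new long[] {Math.min(s1, s2), Math.max(e1, e2)};
    }

    public static void main(String[] args) {
        long gap = 3000; // the 3-second session gap used by the test
        // key1 sessions before the merge element: [10, 5500) and [6000, 10000).
        // The element at 4500 opens [4500, 4500 + gap) = [4500, 7500), touching both:
        long[] merged = cover(10, 5500, 4500, 4500 + gap);
        merged = cover(merged[0], merged[1], 6000, 10000);
        System.out.println(merged[0] + ".." + merged[1]); // 10..10000
    }
}
```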
@Test
void testSessionWindowsWithContinuousEventTimeTrigger() throws Exception {
closeCalled.set(0);
final int sessionSize = 3;
ListStateDescriptor<Tuple2<String, Integer>> stateDesc =
new ListStateDescriptor<>(
"window-contents",
STRING_INT_TUPLE.createSerializer(new SerializerConfigImpl()));
WindowOperatorFactory<
String,
Tuple2<String, Integer>,
Iterable<Tuple2<String, Integer>>,
Tuple3<String, Long, Long>,
TimeWindow>
operator =
new WindowOperatorFactory<>(
EventTimeSessionWindows.withGap(Duration.ofSeconds(sessionSize)),
new TimeWindow.Serializer(),
new TupleKeySelector(),
BasicTypeInfo.STRING_TYPE_INFO.createSerializer(
new SerializerConfigImpl()),
stateDesc,
new InternalIterableWindowFunction<>(new SessionWindowFunction()),
ContinuousEventTimeTrigger.of(Duration.ofSeconds(2)),
0,
null /* late data output tag */);
OneInputStreamOperatorTestHarness<Tuple2<String, Integer>, Tuple3<String, Long, Long>>
testHarness = createTestHarness(operator);
ConcurrentLinkedQueue<Object> expectedOutput = new ConcurrentLinkedQueue<>();
testHarness.open();
// add elements out-of-order and first trigger time is 2000
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 1), 1500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 0));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 3), 2500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 2), 1000));
// triggers emit and next trigger time is 4000
testHarness.processWatermark(new Watermark(2500));
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key1-1", 1500L, 4500L), 4499));
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key2-6", 0L, 5500L), 5499));
expectedOutput.add(new Watermark(2500));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 5), 4000));
testHarness.processWatermark(new Watermark(3000));
expectedOutput.add(new Watermark(3000));
// do a snapshot, close and restore again
OperatorSubtaskState snapshot = testHarness.snapshot(0L, 0L);
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new Tuple3ResultSortComparator());
testHarness.close();
expectedOutput.clear();
testHarness = createTestHarness(operator);
testHarness.setup();
testHarness.initializeState(snapshot);
testHarness.open();
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key1", 2), 4000));
testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 4), 3500));
// triggers emit and next trigger time is 6000
testHarness.processWatermark(new Watermark(4000));
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key1-3", 1500L, 7000L), 6999));
expectedOutput.add(new StreamRecord<>(new Tuple3<>("key2-15", 0L, 7000L), 6999));
expectedOutput.add(new Watermark(4000));
TestHarnessUtil.assertOutputEqualsSorted(
"Output was not correct.",
expectedOutput,
testHarness.getOutput(),
new Tuple3ResultSortComparator());
testHarness.close();
} | This tests whether merging works correctly with the ContinuousEventTimeTrigger. | testSessionWindowsWithContinuousEventTimeTrigger | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java | Apache-2.0 |
@Override
public <T extends StreamOperator<OUT>> T createStreamOperator(
StreamOperatorParameters<OUT> parameters) {
EventReceivingOperator<OUT> operator =
new EventReceivingOperatorWithFailure<>(parameters, name, numEvents);
operator.setup(
parameters.getContainingTask(),
parameters.getStreamConfig(),
parameters.getOutput());
parameters
.getOperatorEventDispatcher()
.registerEventHandler(parameters.getStreamConfig().getOperatorID(), operator);
return (T) operator;
} | A wrapper operator factory for {@link EventSendingCoordinatorWithGuaranteedCheckpoint} and
{@link EventReceivingOperatorWithFailure}. | createStreamOperator | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/CoordinatorEventsToStreamOperatorRecipientExactlyOnceITCase.java | Apache-2.0 |
@TestTemplate
void testSourceCheckpointFirst() throws Exception {
try (StreamTaskMailboxTestHarness<String> testHarness = buildTestHarness(objectReuse)) {
testHarness.setAutoProcess(false);
ArrayDeque<Object> expectedOutput = new ArrayDeque<>();
CheckpointBarrier barrier = createBarrier(testHarness);
addRecordsAndBarriers(testHarness, barrier);
Future<Boolean> checkpointFuture =
testHarness
.getStreamTask()
.triggerCheckpointAsync(metaData, barrier.getCheckpointOptions());
processSingleStepUntil(testHarness, checkpointFuture::isDone);
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
ArrayList<Object> actualOutput = new ArrayList<>(testHarness.getOutput());
assertThat(actualOutput.subList(0, expectedOutput.size()))
.containsExactlyInAnyOrderElementsOf(expectedOutput);
assertThat(actualOutput.get(expectedOutput.size())).isEqualTo(barrier);
}
} | In this scenario: 1. checkpoint is triggered via RPC and source is blocked 2. network inputs
are processed until CheckpointBarriers are processed 3. aligned checkpoint is performed | testSourceCheckpointFirst | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | Apache-2.0 |
@TestTemplate
void testSourceCheckpointFirstUnaligned() throws Exception {
try (StreamTaskMailboxTestHarness<String> testHarness =
buildTestHarness(true, objectReuse)) {
testHarness.setAutoProcess(false);
ArrayDeque<Object> expectedOutput = new ArrayDeque<>();
addRecords(testHarness);
CheckpointBarrier barrier = createBarrier(testHarness);
Future<Boolean> checkpointFuture =
testHarness
.getStreamTask()
.triggerCheckpointAsync(metaData, barrier.getCheckpointOptions());
processSingleStepUntil(testHarness, checkpointFuture::isDone);
assertThat(testHarness.getOutput()).containsExactly(barrier);
testHarness.processAll();
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
ArrayList<Object> actualOutput = new ArrayList<>(testHarness.getOutput());
assertThat(actualOutput.subList(1, expectedOutput.size() + 1))
.containsExactlyInAnyOrderElementsOf(expectedOutput);
}
} | In this scenario: 1. checkpoint is triggered via RPC and source is blocked 2. unaligned
checkpoint is performed 3. all data from network inputs are processed | testSourceCheckpointFirstUnaligned | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | Apache-2.0 |
@TestTemplate
void testSourceCheckpointLast() throws Exception {
try (StreamTaskMailboxTestHarness<String> testHarness = buildTestHarness(objectReuse)) {
testHarness.setAutoProcess(false);
ArrayDeque<Object> expectedOutput = new ArrayDeque<>();
CheckpointBarrier barrier = createBarrier(testHarness);
addRecordsAndBarriers(testHarness, barrier);
testHarness.processAll();
Future<Boolean> checkpointFuture =
testHarness
.getStreamTask()
.triggerCheckpointAsync(metaData, barrier.getCheckpointOptions());
processSingleStepUntil(testHarness, checkpointFuture::isDone);
expectedOutput.add(new StreamRecord<>("42", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("42", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("42", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
ArrayList<Object> actualOutput = new ArrayList<>(testHarness.getOutput());
assertThat(actualOutput.subList(0, expectedOutput.size()))
.containsExactlyInAnyOrderElementsOf(expectedOutput);
assertThat(actualOutput.get(expectedOutput.size())).isEqualTo(barrier);
}
} | In this scenario: 1a. network inputs are processed until CheckpointBarriers are processed 1b.
source records are processed at the same time 2. checkpoint is triggered via RPC 3. aligned
checkpoint is performed | testSourceCheckpointLast | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | Apache-2.0 |
@TestTemplate
void testSourceCheckpointLastUnaligned() throws Exception {
boolean unaligned = true;
try (StreamTaskMailboxTestHarness<String> testHarness =
buildTestHarness(unaligned, objectReuse)) {
testHarness.setAutoProcess(false);
ArrayDeque<Object> expectedOutput = new ArrayDeque<>();
addNetworkRecords(testHarness);
CheckpointBarrier barrier = createBarrier(testHarness);
addBarriers(testHarness, barrier);
testHarness.processAll();
addSourceRecords(testHarness, 1, 1337, 1337, 1337);
testHarness.processAll();
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("44", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(new StreamRecord<>("47.0", TimestampAssigner.NO_TIMESTAMP));
expectedOutput.add(barrier);
assertThat(testHarness.getOutput()).containsExactlyInAnyOrderElementsOf(expectedOutput);
}
} | In this scenario: 1. network inputs are processed until CheckpointBarriers are processed 2.
there are no source records to be processed 3. checkpoint is triggered on first received
CheckpointBarrier 4. unaligned checkpoint is performed at some point in time, blocking the
source 5. more source records are added that should not be processed | testSourceCheckpointLastUnaligned | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/MultipleInputStreamTaskChainedSourcesCheckpointingTest.java | Apache-2.0 |
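The four checkpoint scenarios above differ mainly in where the barrier lands in the output relative to buffered records. A toy model of that ordering — not Flink code; channel-state persistence and alignment bookkeeping are deliberately elided:

```java
import java.util.ArrayList;
import java.util.List;

public class BarrierOrderingSketch {

    /** Output order for one channel's buffered records once a barrier arrives. */
    static List<String> emit(List<String> buffered, boolean unaligned) {
        List<String> out = new ArrayList<>();
        if (unaligned) {
            out.add("BARRIER");   // the barrier overtakes in-flight records...
            out.addAll(buffered); // ...which are snapshotted as channel state, then drained
        } else {
            out.addAll(buffered); // aligned: everything before the barrier is processed first
            out.add("BARRIER");
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(emit(List.of("r1", "r2"), true));  // [BARRIER, r1, r2]
        System.out.println(emit(List.of("r1", "r2"), false)); // [r1, r2, BARRIER]
    }
}
```

This mirrors the assertions above: the unaligned tests expect the barrier as the first output element, the aligned tests expect it only after the records.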
@Test
void testWatermarksNotForwardedWithinChainWhenIdle() throws Exception {
final OneInputStreamTaskTestHarness<String, String> testHarness =
new OneInputStreamTaskTestHarness<>(
OneInputStreamTask::new,
1,
1,
BasicTypeInfo.STRING_TYPE_INFO,
BasicTypeInfo.STRING_TYPE_INFO);
TriggerableFailOnWatermarkTestOperator headOperator =
new TriggerableFailOnWatermarkTestOperator();
WatermarkGeneratingTestOperator watermarkOperator = new WatermarkGeneratingTestOperator();
TriggerableFailOnWatermarkTestOperator tailOperator =
new TriggerableFailOnWatermarkTestOperator();
testHarness
.setupOperatorChain(new OperatorID(42L, 42L), headOperator)
.chain(new OperatorID(4711L, 42L), watermarkOperator, StringSerializer.INSTANCE)
.chain(new OperatorID(123L, 123L), tailOperator, StringSerializer.INSTANCE)
.finish();
// --------------------- begin test ---------------------
ConcurrentLinkedQueue<Object> expectedOutput = new ConcurrentLinkedQueue<>();
testHarness.invoke();
testHarness.waitForTaskRunning();
// the task starts as active, so all generated watermarks should be forwarded
testHarness.processElement(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.EXPECT_FORWARDED_WATERMARKS_MARKER));
testHarness.processElement(new StreamRecord<>("10"), 0, 0);
// this watermark will be forwarded since the task is currently active,
// but should not be in the final output because it should be blocked by the watermark
// generator in the chain
testHarness.processElement(new Watermark(15));
testHarness.processElement(new StreamRecord<>("20"), 0, 0);
testHarness.processElement(new StreamRecord<>("30"), 0, 0);
testHarness.waitForInputProcessing();
expectedOutput.add(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.EXPECT_FORWARDED_WATERMARKS_MARKER));
expectedOutput.add(new StreamRecord<>("10"));
expectedOutput.add(new Watermark(10));
expectedOutput.add(new StreamRecord<>("20"));
expectedOutput.add(new Watermark(20));
expectedOutput.add(new StreamRecord<>("30"));
expectedOutput.add(new Watermark(30));
TestHarnessUtil.assertOutputEquals(
"Output was not correct.", expectedOutput, testHarness.getOutput());
// now, toggle the task to be idle, and let the watermark generator produce some watermarks
testHarness.processElement(WatermarkStatus.IDLE);
// after this, the operators will throw an exception if they are forwarded watermarks
// anywhere in the chain
testHarness.processElement(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.NO_FORWARDED_WATERMARKS_MARKER));
// NOTE: normally, tasks will not have records to process while idle;
// we're doing this here only to mimic watermark generating in operators
testHarness.processElement(new StreamRecord<>("40"), 0, 0);
testHarness.processElement(new StreamRecord<>("50"), 0, 0);
testHarness.processElement(new StreamRecord<>("60"), 0, 0);
testHarness.processElement(
new Watermark(
65)); // the test will fail if any of the operators is forwarded this watermark
testHarness.waitForInputProcessing();
// the 40 - 60 watermarks should not be forwarded, only the watermark status toggle element and records
expectedOutput.add(WatermarkStatus.IDLE);
expectedOutput.add(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.NO_FORWARDED_WATERMARKS_MARKER));
expectedOutput.add(new StreamRecord<>("40"));
expectedOutput.add(new StreamRecord<>("50"));
expectedOutput.add(new StreamRecord<>("60"));
TestHarnessUtil.assertOutputEquals(
"Output was not correct.", expectedOutput, testHarness.getOutput());
// re-toggle the task to be active and see if new watermarks are correctly forwarded again
testHarness.processElement(WatermarkStatus.ACTIVE);
testHarness.processElement(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.EXPECT_FORWARDED_WATERMARKS_MARKER));
testHarness.processElement(new StreamRecord<>("70"), 0, 0);
testHarness.processElement(new StreamRecord<>("80"), 0, 0);
testHarness.processElement(new StreamRecord<>("90"), 0, 0);
testHarness.waitForInputProcessing();
expectedOutput.add(WatermarkStatus.ACTIVE);
expectedOutput.add(
new StreamRecord<>(
TriggerableFailOnWatermarkTestOperator.EXPECT_FORWARDED_WATERMARKS_MARKER));
expectedOutput.add(new StreamRecord<>("70"));
expectedOutput.add(new Watermark(70));
expectedOutput.add(new StreamRecord<>("80"));
expectedOutput.add(new Watermark(80));
expectedOutput.add(new StreamRecord<>("90"));
expectedOutput.add(new Watermark(90));
TestHarnessUtil.assertOutputEquals(
"Output was not correct.", expectedOutput, testHarness.getOutput());
testHarness.endInput();
testHarness.waitForTaskCompletion();
List<String> resultElements =
TestHarnessUtil.getRawElementsFromOutput(testHarness.getOutput());
assertThat(resultElements).hasSize(12);
} | This test verifies that watermarks are not forwarded when the task is idle. It also verifies
that when the task is idle, watermarks generated in the middle of chains are also blocked and
never forwarded.
<p>The tested chain will be: (HEAD: normal operator) --> (watermark generating operator) -->
(normal operator). The operators will throw an exception and fail the test if any of them
is forwarded a watermark while the task is idle. | testWatermarksNotForwardedWithinChainWhenIdle | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/OneInputStreamTaskTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/OneInputStreamTaskTest.java | Apache-2.0 |
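The gating behavior the test verifies can be sketched as a small state machine: watermarks pass downstream only while the status is ACTIVE, and only if they advance on the last emitted one (class and method names are illustrative, not Flink's):

```java
public class WatermarkGateSketch {
    private boolean active = true;
    private long lastEmitted = Long.MIN_VALUE;

    void setActive(boolean active) {
        this.active = active;
    }

    /** Returns true if the watermark may be forwarded downstream. */
    boolean tryForward(long watermark) {
        if (!active || watermark <= lastEmitted) {
            return false; // blocked while idle, or not an advance
        }
        lastEmitted = watermark;
        return true;
    }

    public static void main(String[] args) {
        WatermarkGateSketch gate = new WatermarkGateSketch();
        System.out.println(gate.tryForward(10)); // true: task is active
        gate.setActive(false);
        System.out.println(gate.tryForward(40)); // false: idle, watermark is swallowed
        gate.setActive(true);
        System.out.println(gate.tryForward(70)); // true: active again
    }
}
```

This matches the three phases of the test: watermarks 10-30 pass, 40-60 are swallowed while IDLE, and 70-90 pass again after the status toggles back to ACTIVE.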
void testCancellationWithSourceBlockedOnLock(boolean withPendingMail, boolean throwInCancel)
throws Exception {
final StreamTaskTestHarness<String> testHarness =
new StreamTaskTestHarness<>(SourceStreamTask::new, STRING_TYPE_INFO);
CancelLockingSource.reset();
testHarness
.setupOperatorChain(
new OperatorID(),
new StreamSource<>(new CancelLockingSource(throwInCancel)))
.chain(
new OperatorID(),
new TestBoundedOneInputStreamOperator("Operator1"),
STRING_TYPE_INFO.createSerializer(new SerializerConfigImpl()))
.finish();
StreamConfig streamConfig = testHarness.getStreamConfig();
testHarness.invoke();
CancelLockingSource.awaitRunning();
if (withPendingMail) {
// This pending mail should be blocked on checkpointLock acquisition, blocking the
// mailbox (task) thread.
testHarness
.getTask()
.getMailboxExecutorFactory()
.createExecutor(0)
.execute(
() ->
assertThat(testHarness.getTask().isRunning())
.as("This should never execute before task cancellation")
.isFalse(),
"Test");
}
try {
testHarness.getTask().cancel();
} catch (ExpectedTestException e) {
checkState(throwInCancel);
}
try {
testHarness.waitForTaskCompletion();
} catch (Throwable t) {
if (!ExceptionUtils.findThrowable(t, InterruptedException.class).isPresent()
&& !ExceptionUtils.findThrowable(t, CancelTaskException.class).isPresent()) {
throw t;
}
}
} | Note that this test also covers the shared cancellation logic inside {@link
StreamTask} which, as of the time this test is being written, is not tested anywhere else
(like {@link StreamTaskTest} or {@link OneInputStreamTaskTest}). | testCancellationWithSourceBlockedOnLock | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java | Apache-2.0 |
public void testMetrics(
FunctionWithException<Environment, ? extends StreamTask<Integer, ?>, Exception>
taskFactory,
StreamOperatorFactory<?> operatorFactory,
Consumer<AbstractDoubleAssert<?>> busyTimeMatcher)
throws Exception {
long sleepTime = 42;
StreamTaskMailboxTestHarnessBuilder<Integer> builder =
new StreamTaskMailboxTestHarnessBuilder<>(taskFactory, INT_TYPE_INFO);
final Map<String, Metric> metrics = new ConcurrentHashMap<>();
final TaskMetricGroup taskMetricGroup =
StreamTaskTestHarness.createTaskMetricGroup(metrics);
try (StreamTaskMailboxTestHarness<Integer> harness =
builder.setupOutputForSingletonOperatorChain(operatorFactory)
.setTaskMetricGroup(taskMetricGroup)
.build()) {
Future<Boolean> triggerFuture =
harness.streamTask.triggerCheckpointAsync(
new CheckpointMetaData(1L, System.currentTimeMillis()),
CheckpointOptions.forCheckpointWithDefaultLocation());
OneShotLatch checkpointAcknowledgeLatch = new OneShotLatch();
harness.getCheckpointResponder().setAcknowledgeLatch(checkpointAcknowledgeLatch);
assertThat(triggerFuture).isNotDone();
Thread.sleep(sleepTime);
while (!triggerFuture.isDone()) {
harness.streamTask.runMailboxStep();
}
Gauge<Long> checkpointStartDelayGauge =
(Gauge<Long>) metrics.get(MetricNames.CHECKPOINT_START_DELAY_TIME);
assertThat(checkpointStartDelayGauge.getValue())
.isGreaterThanOrEqualTo(sleepTime * 1_000_000);
Gauge<Double> busyTimeGauge = (Gauge<Double>) metrics.get(MetricNames.TASK_BUSY_TIME);
busyTimeMatcher.accept(assertThat(busyTimeGauge.getValue()));
checkpointAcknowledgeLatch.await();
TestCheckpointResponder.AcknowledgeReport acknowledgeReport =
Iterables.getOnlyElement(
harness.getCheckpointResponder().getAcknowledgeReports());
assertThat(acknowledgeReport.getCheckpointMetrics().getCheckpointStartDelayNanos())
.isGreaterThanOrEqualTo(sleepTime * 1_000_000);
}
} | Common base class for testing source tasks. | testMetrics | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTestBase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTestBase.java | Apache-2.0 |
private void outputEdgeConfiguration(Configuration taskConfiguration) {
StreamConfig streamConfig = new StreamConfig(taskConfiguration);
streamConfig.setStreamOperatorFactory(new UnusedOperatorFactory());
StreamConfigChainer cfg =
new StreamConfigChainer(new OperatorID(42, 42), streamConfig, this, 1);
// The OutputFlusher thread is started only if the buffer timeout is more than 0 (the default
// value is 0).
cfg.setBufferTimeout(1);
cfg.chain(
new OperatorID(44, 44),
new UnusedOperatorFactory(),
StringSerializer.INSTANCE,
StringSerializer.INSTANCE,
false);
cfg.finish();
} | Make sure that there is some output edge in the config so that some RecordWriter is created. | outputEdgeConfiguration | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskITCase.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskITCase.java | Apache-2.0 |
@Test
void testAsyncExceptionHandlerHandleExceptionForwardsMessageProperly() {
MockEnvironment mockEnvironment = MockEnvironment.builder().build();
RuntimeException expectedException = new RuntimeException("RUNTIME EXCEPTION");
final StreamTask.StreamTaskAsyncExceptionHandler asyncExceptionHandler =
new StreamTask.StreamTaskAsyncExceptionHandler(mockEnvironment);
mockEnvironment.setExpectedExternalFailureCause(AsynchronousException.class);
final String expectedErrorMessage = "EXPECTED_ERROR MESSAGE";
asyncExceptionHandler.handleAsyncException(expectedErrorMessage, expectedException);
// expect an AsynchronousException containing the supplied error details
Optional<? extends Throwable> actualExternalFailureCause =
mockEnvironment.getActualExternalFailureCause();
final Throwable actualException =
actualExternalFailureCause.orElseThrow(
() -> new AssertionError("Expected exceptional completion"));
assertThat(actualException)
.isInstanceOf(AsynchronousException.class)
.hasMessage(expectedErrorMessage)
.hasCause(expectedException);
} | This test checks that the async exception handler wraps the message and cause in an
AsynchronousException and propagates it to the environment. | testAsyncExceptionHandlerHandleExceptionForwardsMessageProperly | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | Apache-2.0 |
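The wrap-and-forward contract asserted above can be sketched with a plain callback standing in for the environment's failure reporting; `RuntimeException` substitutes for Flink's `AsynchronousException` here, and all names are illustrative:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AsyncExceptionSketch {
    /** Stand-in for the environment's external-failure callback. */
    static final AtomicReference<Throwable> reported = new AtomicReference<>();

    /** Wraps message and cause, then forwards to the environment, as the handler does. */
    static void handleAsyncException(String message, Throwable cause) {
        reported.set(new RuntimeException(message, cause));
    }

    public static void main(String[] args) {
        handleAsyncException("EXPECTED_ERROR MESSAGE", new RuntimeException("RUNTIME EXCEPTION"));
        Throwable t = reported.get();
        System.out.println(t.getMessage());            // EXPECTED_ERROR MESSAGE
        System.out.println(t.getCause().getMessage()); // RUNTIME EXCEPTION
    }
}
```

The test's assertions are exactly this shape: the reported throwable carries the supplied message, and its cause is the original exception.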
@Test
void testFailingAsyncCheckpointRunnable() throws Exception {
// mock the new state operator snapshots
OperatorSnapshotFutures operatorSnapshotResult1 = mock(OperatorSnapshotFutures.class);
OperatorSnapshotFutures operatorSnapshotResult2 = mock(OperatorSnapshotFutures.class);
OperatorSnapshotFutures operatorSnapshotResult3 = mock(OperatorSnapshotFutures.class);
RunnableFuture<SnapshotResult<OperatorStateHandle>> failingFuture =
mock(RunnableFuture.class);
when(failingFuture.get())
.thenThrow(new ExecutionException(new Exception("Test exception")));
when(operatorSnapshotResult3.getOperatorStateRawFuture()).thenReturn(failingFuture);
try (MockEnvironment mockEnvironment = new MockEnvironmentBuilder().build()) {
RunningTask<MockStreamTask> task =
runTask(
() ->
createMockStreamTask(
mockEnvironment,
operatorChain(
streamOperatorWithSnapshot(
operatorSnapshotResult1),
streamOperatorWithSnapshot(
operatorSnapshotResult2),
streamOperatorWithSnapshot(
operatorSnapshotResult3))));
MockStreamTask streamTask = task.streamTask;
waitTaskIsRunning(streamTask, task.invocationFuture);
mockEnvironment.setExpectedExternalFailureCause(Throwable.class);
streamTask
.triggerCheckpointAsync(
new CheckpointMetaData(42L, 1L),
CheckpointOptions.forCheckpointWithDefaultLocation())
.get();
// wait for the completion of the async task
ExecutorService executor = streamTask.getAsyncOperationsThreadPool();
executor.shutdown();
if (!executor.awaitTermination(10000L, TimeUnit.MILLISECONDS)) {
fail(
"Executor did not shut down within the given timeout. This indicates that the "
+ "checkpointing did not resume.");
}
assertThat(mockEnvironment.getActualExternalFailureCause()).isPresent();
verify(operatorSnapshotResult1).cancel();
verify(operatorSnapshotResult2).cancel();
verify(operatorSnapshotResult3).cancel();
streamTask.finishInput();
task.waitForTaskCompletion(false);
}
} | Tests that in case of a failing AsyncCheckpointRunnable all operator snapshot results are
cancelled and all non-partitioned state handles are discarded. | testFailingAsyncCheckpointRunnable | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | Apache-2.0 |
@Test
void testAsyncCheckpointingConcurrentCloseBeforeAcknowledge() throws Exception {
final TestingKeyedStateHandle managedKeyedStateHandle = new TestingKeyedStateHandle();
final TestingKeyedStateHandle rawKeyedStateHandle = new TestingKeyedStateHandle();
final TestingOperatorStateHandle managedOperatorStateHandle =
new TestingOperatorStateHandle();
final TestingOperatorStateHandle rawOperatorStateHandle = new TestingOperatorStateHandle();
final BlockingRunnableFuture<SnapshotResult<KeyedStateHandle>> rawKeyedStateHandleFuture =
new BlockingRunnableFuture<>(2, SnapshotResult.of(rawKeyedStateHandle));
OperatorSnapshotFutures operatorSnapshotResult =
new OperatorSnapshotFutures(
DoneFuture.of(SnapshotResult.of(managedKeyedStateHandle)),
rawKeyedStateHandleFuture,
DoneFuture.of(SnapshotResult.of(managedOperatorStateHandle)),
DoneFuture.of(SnapshotResult.of(rawOperatorStateHandle)),
DoneFuture.of(SnapshotResult.empty()),
DoneFuture.of(SnapshotResult.empty()));
final OneInputStreamOperator<String, String> streamOperator =
streamOperatorWithSnapshot(operatorSnapshotResult);
final AcknowledgeDummyEnvironment mockEnvironment = new AcknowledgeDummyEnvironment();
RunningTask<MockStreamTask> task =
runTask(() -> createMockStreamTask(mockEnvironment, operatorChain(streamOperator)));
waitTaskIsRunning(task.streamTask, task.invocationFuture);
final long checkpointId = 42L;
task.streamTask.triggerCheckpointAsync(
new CheckpointMetaData(checkpointId, 1L),
CheckpointOptions.forCheckpointWithDefaultLocation());
rawKeyedStateHandleFuture.awaitRun();
task.streamTask.cancel();
final FutureUtils.ConjunctFuture<Void> discardFuture =
FutureUtils.waitForAll(
asList(
managedKeyedStateHandle.getDiscardFuture(),
rawKeyedStateHandle.getDiscardFuture(),
managedOperatorStateHandle.getDiscardFuture(),
rawOperatorStateHandle.getDiscardFuture()));
// make sure that all state handles have been discarded
discardFuture.get();
assertThatThrownBy(
() -> {
// future should not be completed
mockEnvironment
.getAcknowledgeCheckpointFuture()
.get(10L, TimeUnit.MILLISECONDS);
})
.isInstanceOf(TimeoutException.class);
task.waitForTaskCompletion(true);
} | FLINK-5667
<p>Tests that a concurrent cancel operation discards the state handles of a not yet
acknowledged checkpoint and prevents sending an acknowledge message to the
CheckpointCoordinator. The situation can only happen if the cancel call is executed before
Environment.acknowledgeCheckpoint(). | testAsyncCheckpointingConcurrentCloseBeforeAcknowledge | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTest.java | Apache-2.0 |
public void waitForTaskCompletion(
long timeout, boolean ignoreCancellationOrInterruptedException) throws Exception {
Preconditions.checkState(taskThread != null, "Task thread was not started.");
taskThread.join(timeout);
if (taskThread.getError() != null) {
boolean errorIsCancellationOrInterrupted =
ExceptionUtils.findThrowable(taskThread.getError(), CancelTaskException.class)
.isPresent()
|| ExceptionUtils.findThrowable(
taskThread.getError(), InterruptedException.class)
.isPresent();
if (ignoreCancellationOrInterruptedException && errorIsCancellationOrInterrupted) {
return;
}
throw new Exception("error in task", taskThread.getError());
}
} | Waits up to the given timeout for the task to complete. If the task thread recorded an
error, it is rethrown; cancellation or interruption errors can optionally be ignored.
@param timeout Timeout for the task completion | waitForTaskCompletion | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
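The join-with-timeout-and-rethrow pattern used by `waitForTaskCompletion` can be sketched without the Flink harness. `CapturingThread` and `waitForCompletion` below are illustrative names, and the interruption check is simplified to the top-level throwable only (the real code uses `ExceptionUtils.findThrowable`, which walks the cause chain).

```java
// A worker thread records its failure; the caller joins with a timeout and
// rethrows the captured error, optionally tolerating interruption.
class CapturingThread extends Thread {
    private volatile Throwable error;
    private final Runnable body;

    CapturingThread(Runnable body) {
        this.body = body;
    }

    @Override
    public void run() {
        try {
            body.run();
        } catch (Throwable t) {
            error = t; // recorded before the thread terminates
        }
    }

    Throwable getError() {
        return error;
    }
}

public class JoinWithTimeoutDemo {
    static void waitForCompletion(CapturingThread t, long timeoutMs, boolean ignoreInterrupted)
            throws Exception {
        t.join(timeoutMs); // join(...) guarantees visibility of the error field
        Throwable err = t.getError();
        if (err != null) {
            // simplified: only checks the top-level throwable, not the cause chain
            if (ignoreInterrupted && err instanceof InterruptedException) {
                return;
            }
            throw new Exception("error in task", err);
        }
    }

    public static void main(String[] args) throws Exception {
        CapturingThread ok = new CapturingThread(() -> {});
        ok.start();
        waitForCompletion(ok, 1000L, false); // completes without throwing

        CapturingThread failing =
                new CapturingThread(() -> { throw new IllegalStateException("boom"); });
        failing.start();
        try {
            waitForCompletion(failing, 1000L, false);
        } catch (Exception e) {
            System.out.println(e.getCause().getMessage()); // boom
        }
    }
}
```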
public void processElement(Object element) {
inputGates[0].sendElement(element, 0);
} | Sends the element to input gate 0 on channel 0. | processElement | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
public void processElement(Object element, int inputGate, int channel) {
inputGates[inputGate].sendElement(element, channel);
} | Sends the element to the specified channel on the specified input gate. | processElement | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
public void processEvent(AbstractEvent event) {
inputGates[0].sendEvent(event, 0);
} | Sends the event to input gate 0 on channel 0. | processEvent | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
public void processEvent(AbstractEvent event, int inputGate, int channel) {
inputGates[inputGate].sendEvent(event, channel);
} | Sends the event to the specified channel on the specified input gate. | processEvent | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
public void waitForInputProcessing() throws Exception {
while (taskThread.isAlive()) {
boolean allEmpty = true;
for (int i = 0; i < numInputGates; i++) {
if (!inputGates[i].allQueuesEmpty()) {
allEmpty = false;
}
}
if (allEmpty) {
break;
}
}
// Wait until all currently available input has been processed.
final AtomicBoolean allInputProcessed = new AtomicBoolean();
final MailboxProcessor mailboxProcessor = taskThread.task.mailboxProcessor;
final MailboxExecutor mailboxExecutor = mailboxProcessor.getMainMailboxExecutor();
while (taskThread.isAlive()) {
try {
final CountDownLatch latch = new CountDownLatch(1);
mailboxExecutor.execute(
() -> {
allInputProcessed.set(!mailboxProcessor.isDefaultActionAvailable());
latch.countDown();
},
"query-whether-processInput-has-suspend-itself");
// Mail could be dropped due to task exception, so we do timed-await here.
latch.await(1, TimeUnit.SECONDS);
} catch (RejectedExecutionException ex) {
// Loop until task thread exit for possible task exception.
}
if (allInputProcessed.get()) {
break;
}
try {
Thread.sleep(1);
} catch (InterruptedException ignored) {
}
}
Throwable error = taskThread.getError();
if (error != null) {
throw new Exception("Exception in the task thread", error);
}
} | This only returns after all input queues are empty. | waitForInputProcessing | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
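The latch-based query inside `waitForInputProcessing` — posting a question to the single-threaded mailbox and reading the answer after a timed await — can be sketched in isolation. `queryIdle` and the `idleFlag` below are hypothetical stand-ins for the mailbox executor and `mailboxProcessor.isDefaultActionAvailable()`.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class MailboxQueryDemo {
    // Asks the single mailbox thread whether processing is idle, from the
    // outside, using a latch so the answer is read only after it was written.
    static boolean queryIdle(ExecutorService mailbox, AtomicBoolean idleFlag)
            throws InterruptedException {
        AtomicBoolean answer = new AtomicBoolean();
        CountDownLatch latch = new CountDownLatch(1);
        mailbox.execute(
                () -> {
                    answer.set(idleFlag.get()); // read on the mailbox thread itself
                    latch.countDown();
                });
        // timed await mirrors the harness: the mail could be dropped on failure
        latch.await(1, TimeUnit.SECONDS);
        return answer.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService mailbox = Executors.newSingleThreadExecutor();
        AtomicBoolean idle = new AtomicBoolean(false);
        System.out.println(queryIdle(mailbox, idle)); // false
        idle.set(true);
        System.out.println(queryIdle(mailbox, idle)); // true
        mailbox.shutdown();
    }
}
```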
public void endInput() {
for (int i = 0; i < numInputGates; i++) {
inputGates[i].endInput();
}
} | Notifies all input channels on all input gates that no more input will arrive. This will
usually make the Task exit from its internal loop. | endInput | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/StreamTaskTestHarness.java | Apache-2.0 |
@Override
public void processElement(StreamRecord<T> element) {
output.collect(element.replace(String.format("[%d]: %s", inputId, element.getValue())));
} | {@link AbstractInput} that converts its argument to a string, prepending the input id. | processElement | java | apache/flink | flink-streaming-java/src/test/java/org/apache/flink/streaming/util/TestAnyModeMultipleInputStreamOperator.java | https://github.com/apache/flink/blob/master/flink-streaming-java/src/test/java/org/apache/flink/streaming/util/TestAnyModeMultipleInputStreamOperator.java | Apache-2.0 |
public void executeInNonInteractiveMode(URI uri) {
try {
terminal = terminalFactory.get();
if (isApplicationMode(executor.getSessionConfig())) {
String scheme = StringUtils.lowerCase(uri.getScheme());
String clusterId;
// local files
if (scheme == null || scheme.equals("file")) {
clusterId = executor.deployScript(readFile(uri), null);
} else {
clusterId = executor.deployScript(null, uri);
}
terminal.writer().println(messageInfo(MESSAGE_DEPLOY_SCRIPT).toAnsi());
terminal.writer().println(String.format("Cluster ID: %s\n", clusterId));
terminal.flush();
} else {
executeFile(
readFile(uri), terminal.output(), ExecutionMode.NON_INTERACTIVE_EXECUTION);
}
} finally {
closeTerminal();
}
} | Opens the non-interactive CLI shell. | executeInNonInteractiveMode | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java | Apache-2.0 |
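The scheme check above — treating a `null` or `file` scheme as a local script to be read client-side, and anything else as a remote URI to pass through — can be isolated as a small sketch (`isLocalFile` is an illustrative name):

```java
import java.net.URI;
import java.util.Locale;

public class SchemeCheckSketch {
    // Mirrors the decision in executeInNonInteractiveMode: local "file" URIs
    // (or URIs with no scheme at all) are read on the client; anything else
    // is handed over as a remote URI.
    static boolean isLocalFile(URI uri) {
        String scheme =
                uri.getScheme() == null ? null : uri.getScheme().toLowerCase(Locale.ROOT);
        return scheme == null || scheme.equals("file");
    }

    public static void main(String[] args) {
        System.out.println(isLocalFile(URI.create("file:///tmp/script.sql"))); // true
        System.out.println(isLocalFile(URI.create("/tmp/script.sql")));        // true (no scheme)
        System.out.println(isLocalFile(URI.create("hdfs:///scripts/a.sql")));  // false
    }
}
```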
public boolean executeInitialization(URI file) {
try {
OutputStream outputStream = new ByteArrayOutputStream(256);
terminal = TerminalUtils.createDumbTerminal(outputStream);
boolean success =
executeFile(readFile(file), outputStream, ExecutionMode.INITIALIZATION);
LOG.info(outputStream.toString());
return success;
} finally {
closeTerminal();
}
} | Initializes the CLI client by executing the content of the given file. | executeInitialization | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java | Apache-2.0 |
public Optional<URL> getGatewayAddress() {
return Optional.ofNullable(gatewayAddress);
} | Command option lines to configure SQL Client in the gateway mode. | getGatewayAddress | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliOptions.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliOptions.java | Apache-2.0 |
protected void resetAllParts() {
titleLine = null;
headerLines = null;
mainHeaderLines = null;
mainLines = null;
footerLines = null;
totalMainWidth = 0;
} | Must be called when values in one or more parts have changed. | resetAllParts | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliView.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliView.java | Apache-2.0 |
public AttributedStyle getKeywordStyle() {
return keywordStyle;
} | Returns the style for a SQL keyword such as {@code SELECT} or {@code ON}.
@return Style for SQL keywords | getKeywordStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
public AttributedStyle getQuotedStyle() {
return singleQuotedStyle;
} | Returns the style for a SQL character literal, such as {@code 'Hello, world!'}.
@return Style for SQL character literals | getQuotedStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
public AttributedStyle getSqlIdentifierStyle() {
return sqlIdentifierStyle;
} | Returns the style for a SQL identifier, such as {@code `My_table`} or {@code `My table`}.
@return Style for SQL identifiers | getSqlIdentifierStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
public AttributedStyle getCommentStyle() {
return commentStyle;
} | Returns the style for a SQL comments, such as {@literal /* This is a comment *}{@literal /}
or {@literal -- End of line comment}.
@return Style for SQL comments | getCommentStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
public AttributedStyle getHintStyle() {
return hintStyle;
} | Returns the style for a SQL hint, such as {@literal /*+ This is a hint *}{@literal /}.
@return Style for SQL hints | getHintStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
public AttributedStyle getDefaultStyle() {
return defaultStyle;
} | Returns the style for text that does not match any other style.
@return Default style | getDefaultStyle | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/SyntaxHighlightStyle.java | Apache-2.0 |
@Nullable
public URL unregisterJarResource(String jarPath) {
Path path = new Path(jarPath);
try {
checkPath(path, ResourceType.JAR);
return resourceInfos.remove(
new ResourceUri(ResourceType.JAR, getURLFromPath(path).getPath()));
} catch (IOException e) {
throw new SqlExecutionException(
String.format("Failed to unregister the jar resource [%s]", jarPath), e);
}
} | The {@link ClientResourceManager} is able to remove a registered JAR resource at the
specified jar path.
<p>After removing the JAR resource, the {@link ResourceManager} is able to register a JAR
resource with the same JAR path. Note that removal does not guarantee that a {@link
Class} already loaded from the removed jar becomes inaccessible. | unregisterJarResource | java | apache/flink | flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/resource/ClientResourceManager.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/resource/ClientResourceManager.java | Apache-2.0 |
public static SqlGatewayOptions parseSqlGatewayOptions(String[] args) {
try {
DefaultParser parser = new DefaultParser();
CommandLine line = parser.parse(getSqlGatewayOptions(), args, true);
return new SqlGatewayOptions(
line.hasOption(SqlGatewayOptionsParser.OPTION_HELP.getOpt()),
line.getOptionProperties(DYNAMIC_PROPERTY_OPTION.getOpt()));
} catch (ParseException e) {
throw new SqlGatewayException(e.getMessage());
}
} | Parser to parse the command line options. | parseSqlGatewayOptions | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/cli/SqlGatewayOptionsParser.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/cli/SqlGatewayOptionsParser.java | Apache-2.0 |
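A dependency-free sketch of the parsing above (the real code uses Apache Commons CLI's `DefaultParser` against predefined `Options`): recognize a help flag and `-Dkey=value` dynamic properties, and reject anything else. The class and flag names here are illustrative.

```java
import java.util.Properties;

public class GatewayOptionsSketch {
    // Hypothetical result type mirroring SqlGatewayOptions' two fields.
    static final class ParsedOptions {
        final boolean help;
        final Properties dynamicProperties;

        ParsedOptions(boolean help, Properties dynamicProperties) {
            this.help = help;
            this.dynamicProperties = dynamicProperties;
        }
    }

    static ParsedOptions parse(String[] args) {
        boolean help = false;
        Properties props = new Properties();
        for (String arg : args) {
            if ("-h".equals(arg) || "--help".equals(arg)) {
                help = true;
            } else if (arg.startsWith("-D") && arg.indexOf('=') > 2) {
                int eq = arg.indexOf('=');
                props.setProperty(arg.substring(2, eq), arg.substring(eq + 1));
            } else {
                // mirrors the SqlGatewayException thrown on a ParseException
                throw new IllegalArgumentException("Unrecognized option: " + arg);
            }
        }
        return new ParsedOptions(help, props);
    }

    public static void main(String[] args) {
        ParsedOptions opts =
                parse(new String[] {"-Dsql-gateway.endpoint.rest.port=8083", "-h"});
        System.out.println(opts.help); // true
        System.out.println(
                opts.dynamicProperties.getProperty("sql-gateway.endpoint.rest.port")); // 8083
    }
}
```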
public static void setAsContext(ClassLoader classLoader) {
final StreamExecutionEnvironmentFactory factory =
conf -> new StreamExecutionEnvironment(conf, classLoader);
initializeContextEnvironment(factory);
} | The SqlGatewayStreamExecutionEnvironment is a {@link StreamExecutionEnvironment} that runs
programs through the SQL Gateway. | setAsContext | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/environment/SqlGatewayStreamExecutionEnvironment.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/environment/SqlGatewayStreamExecutionEnvironment.java | Apache-2.0 |
@Override
protected CompletableFuture<Void> respondToRequest(
ChannelHandlerContext ctx,
HttpRequest httpRequest,
HandlerRequest<R> handlerRequest,
NonLeaderRetrievalRestfulGateway gateway) {
CompletableFuture<P> response;
try {
response =
handleRequest(
SqlGatewayRestAPIVersion.fromURIToVersion(httpRequest.uri()),
handlerRequest);
} catch (RestHandlerException e) {
response = FutureUtils.completedExceptionally(e);
}
return response.thenAccept(
resp ->
HandlerUtils.sendResponse(
ctx,
httpRequest,
resp,
messageHeaders.getResponseStatusCode(),
responseHeaders));
} | Super class for sql gateway handlers that work with {@link RequestBody}s and {@link
ResponseBody}s.
@param <R> type of incoming requests
@param <P> type of outgoing responses | respondToRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/AbstractSqlGatewayRestHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/AbstractSqlGatewayRestHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<DeployScriptResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<DeployScriptRequestBody> request)
throws RestHandlerException {
return CompletableFuture.completedFuture(
new DeployScriptResponseBody(
service.deployScript(
request.getPathParameter(
SessionHandleIdPathParameter.class),
request.getRequestBody().getScriptUri() == null
? null
: URI.create(
request.getRequestBody().getScriptUri()),
request.getRequestBody().getScript(),
Configuration.fromMap(
request.getRequestBody().getExecutionConfig()))
.toString()));
} | Handler to deploy the script in application mode. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/application/DeployScriptHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/application/DeployScriptHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<RefreshMaterializedTableResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<RefreshMaterializedTableRequestBody> request)
throws RestHandlerException {
try {
SessionHandle sessionHandle =
request.getPathParameter(SessionHandleIdPathParameter.class);
String materializedTableIdentifier =
request.getPathParameter(MaterializedTableIdentifierPathParameter.class);
boolean isPeriodic = request.getRequestBody().isPeriodic();
String scheduleTime = request.getRequestBody().getScheduleTime();
Map<String, String> dynamicOptions = request.getRequestBody().getDynamicOptions();
Map<String, String> staticPartitions = request.getRequestBody().getStaticPartitions();
Map<String, String> executionConfig = request.getRequestBody().getExecutionConfig();
OperationHandle operationHandle =
service.refreshMaterializedTable(
sessionHandle,
materializedTableIdentifier,
isPeriodic,
scheduleTime,
dynamicOptions == null ? Collections.emptyMap() : dynamicOptions,
staticPartitions == null ? Collections.emptyMap() : staticPartitions,
executionConfig == null ? Collections.emptyMap() : executionConfig);
return CompletableFuture.completedFuture(
new RefreshMaterializedTableResponseBody(
operationHandle.getIdentifier().toString()));
} catch (Exception e) {
throw new RestHandlerException(
e.getMessage(), HttpResponseStatus.INTERNAL_SERVER_ERROR, e);
}
} | Handler to execute materialized table refresh operation. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/RefreshMaterializedTableHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/RefreshMaterializedTableHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<CreateEmbeddedSchedulerWorkflowResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<CreateEmbeddedSchedulerWorkflowRequestBody> request)
throws RestHandlerException {
String materializedTableIdentifier =
request.getRequestBody().getMaterializedTableIdentifier();
String cronExpression = request.getRequestBody().getCronExpression();
Map<String, String> initConfig = request.getRequestBody().getInitConfig();
Map<String, String> executionConfig = request.getRequestBody().getExecutionConfig();
String restEndpointURL = request.getRequestBody().getRestEndpointUrl();
WorkflowInfo workflowInfo =
new WorkflowInfo(
materializedTableIdentifier,
Collections.emptyMap(),
initConfig == null ? Collections.emptyMap() : initConfig,
executionConfig == null ? Collections.emptyMap() : executionConfig,
restEndpointURL);
try {
JobDetail jobDetail =
quartzScheduler.createScheduleWorkflow(workflowInfo, cronExpression);
JobKey jobKey = jobDetail.getKey();
return CompletableFuture.completedFuture(
new CreateEmbeddedSchedulerWorkflowResponseBody(
jobKey.getName(), jobKey.getGroup()));
} catch (Exception e) {
throw new RestHandlerException(
e.getMessage(), HttpResponseStatus.INTERNAL_SERVER_ERROR, e);
}
} | Handler to create workflow in embedded scheduler. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/CreateEmbeddedSchedulerWorkflowHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<EmptyResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<EmbeddedSchedulerWorkflowRequestBody> request)
throws RestHandlerException {
String workflowName = request.getRequestBody().getWorkflowName();
String workflowGroup = request.getRequestBody().getWorkflowGroup();
try {
quartzScheduler.deleteScheduleWorkflow(workflowName, workflowGroup);
return CompletableFuture.completedFuture(EmptyResponseBody.getInstance());
} catch (Exception e) {
throw new RestHandlerException(
e.getMessage(), HttpResponseStatus.INTERNAL_SERVER_ERROR, e);
}
} | Handler to delete workflow in embedded scheduler. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/DeleteEmbeddedSchedulerWorkflowHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<EmptyResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<ResumeEmbeddedSchedulerWorkflowRequestBody> request)
throws RestHandlerException {
String workflowName = request.getRequestBody().getWorkflowName();
String workflowGroup = request.getRequestBody().getWorkflowGroup();
Map<String, String> dynamicOptions = request.getRequestBody().getDynamicOptions();
try {
quartzScheduler.resumeScheduleWorkflow(
workflowName,
workflowGroup,
dynamicOptions == null ? Collections.emptyMap() : dynamicOptions);
return CompletableFuture.completedFuture(EmptyResponseBody.getInstance());
} catch (Exception e) {
throw new RestHandlerException(
e.getMessage(), HttpResponseStatus.INTERNAL_SERVER_ERROR, e);
}
} | Handler to resume workflow in embedded scheduler. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/ResumeEmbeddedSchedulerWorkflowHandler.java | Apache-2.0 |
@Override
protected CompletableFuture<EmptyResponseBody> handleRequest(
@Nullable SqlGatewayRestAPIVersion version,
@Nonnull HandlerRequest<EmbeddedSchedulerWorkflowRequestBody> request)
throws RestHandlerException {
String workflowName = request.getRequestBody().getWorkflowName();
String workflowGroup = request.getRequestBody().getWorkflowGroup();
try {
quartzScheduler.suspendScheduleWorkflow(workflowName, workflowGroup);
return CompletableFuture.completedFuture(EmptyResponseBody.getInstance());
} catch (Exception e) {
throw new RestHandlerException(
e.getMessage(), HttpResponseStatus.INTERNAL_SERVER_ERROR, e);
}
} | Handler to suspend workflow in embedded scheduler. | handleRequest | java | apache/flink | flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHandler.java | https://github.com/apache/flink/blob/master/flink-table/flink-sql-gateway/src/main/java/org/apache/flink/table/gateway/rest/handler/materializedtable/scheduler/SuspendEmbeddedSchedulerWorkflowHandler.java | Apache-2.0 |