Dataset schema (column name, type, observed value range):
code: string, 25 to 201k characters
docstring: string, 19 to 96.2k characters
func_name: string, 0 to 235 characters
language: string, 1 class
repo: string, 8 to 51 characters
path: string, 11 to 314 characters
url: string, 62 to 377 characters
license: string, 7 classes
private static PatternProcessFunction<Event, String> extractCurrentProcessingTimeAndNames( int stateNumber) { return new AccessContextWithNames( stateNumber, context -> String.valueOf(context.currentProcessingTime())); }
Creates a {@link PatternProcessFunction} that produces Strings of the form {@code [currentProcessingTime]:[Event.getName]...}. Event.getName occurs stateNumber times. If the match does not contain the n-th pattern, that position is replaced with "null". @param stateNumber The number of states in the pattern. @return The created PatternProcessFunction.
extractCurrentProcessingTimeAndNames
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
Apache-2.0
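The output contract described in the docstring above (the processing time, then one name per matched state, with "null" for absent positions) can be sketched without any Flink dependency as a plain string-formatting helper. The class and method names below are hypothetical, chosen only for illustration.

```java
public class MatchFormatter {
    // Builds "[time]:[name1]:[name2]..." with "null" filling any position
    // for which the match produced no event, mirroring the contract of
    // extractCurrentProcessingTimeAndNames.
    public static String format(long processingTime, String[] names, int stateNumber) {
        StringBuilder sb = new StringBuilder(String.valueOf(processingTime));
        for (int i = 0; i < stateNumber; i++) {
            sb.append(':').append(i < names.length && names[i] != null ? names[i] : "null");
        }
        return sb.toString();
    }
}
```

For example, a match that filled only the first of two states would format as `time:name:null`.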
@Override public NFA<Event> createNFA() { Pattern<Event, ?> pattern = Pattern.begin("1"); return NFACompiler.compileFactory(pattern, false).createNFA(); }
This NFA consists of one state accepting any element.
createNFA
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
Apache-2.0
@Override public NFA<Event> createNFA() { Pattern<Event, ?> pattern = Pattern.<Event>begin("1").next("2").within(Duration.ofMillis(10)); return NFACompiler.compileFactory(pattern, true).createNFA(); }
This NFA consists of two states accepting any element. It times out after 10 milliseconds.
createNFA
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CepProcessFunctionContextTest.java
Apache-2.0
@Override public Integer getKey(Event value) throws Exception { return value.getId(); }
A simple {@link KeySelector} that returns the id of the {@link Event} passed to {@link #getKey(Event)} as its key.
getKey
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CEPRescalingTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CEPRescalingTest.java
Apache-2.0
public static NFATestHarnessBuilderPattern forPattern(Pattern<Event, ?> pattern) { return new NFATestHarnessBuilderPattern(pattern); }
Constructs a test harness starting from a given {@link Pattern}.
forPattern
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/utils/NFATestHarness.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/utils/NFATestHarness.java
Apache-2.0
public static <T> NFA<T> compile(Pattern<T, ?> pattern, boolean timeoutHandling) { NFACompiler.NFAFactory<T> factory = compileFactory(pattern, timeoutHandling); return factory.createNFA(); }
Compiles the given pattern into an {@link NFA}. @param pattern Definition of sequence pattern @param timeoutHandling True if the NFA shall return timed out event patterns @param <T> Type of the input events @return Non-deterministic finite automaton representing the given pattern
compile
java
apache/flink
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/utils/NFAUtils.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/utils/NFAUtils.java
Apache-2.0
public <K, T, ACC, R> DataStream<R> aggregate( String uid, AggregateFunction<T, ACC, R> aggregateFunction, TypeInformation<K> keyType, TypeInformation<T> inputType, TypeInformation<R> outputType) throws IOException { return aggregate( uid, aggregateFunction, new PassThroughReader<>(), keyType, inputType, outputType); }
Reads window state generated using an {@link AggregateFunction}. @param uid The uid of the operator. @param aggregateFunction The aggregate function used to create the window. @param keyType The key type of the window. @param inputType The type information of the values in the window. @param outputType The output type of the reader function. @param <K> The type of the key. @param <T> The type of the values that are aggregated. @param <ACC> The type of the accumulator (intermediate aggregate state). @param <R> The type of the aggregated result. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain the specified uid.
aggregate
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/EvictingWindowSavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/EvictingWindowSavepointReader.java
Apache-2.0
public <K, T, ACC, R, OUT> DataStream<OUT> aggregate( String uid, AggregateFunction<T, ACC, R> aggregateFunction, WindowReaderFunction<R, OUT, K, W> readerFunction, TypeInformation<K> keyType, TypeInformation<T> inputType, TypeInformation<OUT> outputType) throws IOException { WindowReaderOperator<?, K, StreamRecord<T>, W, OUT> operator = WindowReaderOperator.evictingWindow( new AggregateEvictingWindowReaderFunction<>( readerFunction, aggregateFunction), keyType, windowSerializer, inputType, env.getConfig()); return readWindowOperator(uid, outputType, operator); }
Reads window state generated using an {@link AggregateFunction}. @param uid The uid of the operator. @param aggregateFunction The aggregate function used to create the window. @param readerFunction The window reader function. @param keyType The key type of the window. @param inputType The type information of the values in the window. @param outputType The output type of the reader function. @param <K> The type of the key. @param <T> The type of the values that are aggregated. @param <ACC> The type of the accumulator (intermediate aggregate state). @param <R> The type of the aggregated result. @param <OUT> The output type of the reader function. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain the specified uid.
aggregate
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/EvictingWindowSavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/EvictingWindowSavepointReader.java
Apache-2.0
public StateBootstrapTransformation<T> transform( KeyedStateBootstrapFunction<K, T> processFunction) { SavepointWriterOperatorFactory factory = (timestamp, path) -> SimpleOperatorFactory.of( new KeyedStateBootstrapOperator<>( checkpointId, timestamp, path, processFunction)); return transform(factory); }
Applies the given {@link KeyedStateBootstrapFunction} on the keyed input. <p>The function will be called for every element in the input and can be used for writing both keyed and operator state into a {@link Savepoint}. @param processFunction The {@link KeyedStateBootstrapFunction} that is called for each element. @return A {@link StateBootstrapTransformation} that can be added to a {@link Savepoint}.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/KeyedStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/KeyedStateTransformation.java
Apache-2.0
public StateBootstrapTransformation<T> transform(StateBootstrapFunction<T> processFunction) { SavepointWriterOperatorFactory factory = (timestamp, path) -> SimpleOperatorFactory.of( new StateBootstrapOperator<>( checkpointId, timestamp, path, processFunction)); return transform(factory); }
Applies the given {@link StateBootstrapFunction} on the non-keyed input. <p>The function will be called for every element in the input and can be used for writing operator state into a {@link Savepoint}. @param processFunction The {@link StateBootstrapFunction} that is called for each element. @return A {@link StateBootstrapTransformation} that can be added to a {@link Savepoint}.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
Apache-2.0
public StateBootstrapTransformation<T> transform( BroadcastStateBootstrapFunction<T> processFunction) { SavepointWriterOperatorFactory factory = (timestamp, path) -> SimpleOperatorFactory.of( new BroadcastStateBootstrapOperator<>( checkpointId, timestamp, path, processFunction)); return transform(factory); }
Applies the given {@link BroadcastStateBootstrapFunction} on the non-keyed input. <p>The function will be called for every element in the input and can be used for writing broadcast state into a {@link Savepoint}. @param processFunction The {@link BroadcastStateBootstrapFunction} that is called for each element. @return A {@link StateBootstrapTransformation} that can be added to a {@link Savepoint}.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
Apache-2.0
public <K> KeyedStateTransformation<K, T> keyBy(KeySelector<T, K> keySelector) { TypeInformation<K> keyType = TypeExtractor.getKeySelectorTypes(keySelector, stream.getType()); return new KeyedStateTransformation<>( stream, checkpointId, operatorMaxParallelism, keySelector, keyType); }
Creates a new {@link KeyedStateTransformation} that uses the provided key for partitioning its operator state. @param keySelector The KeySelector to be used for extracting the key for partitioning. @return The {@code KeyedStateTransformation} with partitioned state.
keyBy
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
Apache-2.0
public <K> KeyedStateTransformation<K, T> keyBy( KeySelector<T, K> keySelector, TypeInformation<K> keyType) { return new KeyedStateTransformation<>( stream, checkpointId, operatorMaxParallelism, keySelector, keyType); }
Creates a new {@link KeyedStateTransformation} that uses the provided key with explicit type information for partitioning its operator state. @param keySelector The KeySelector to be used for extracting the key for partitioning. @param keyType The type information describing the key type. @return The {@code KeyedStateTransformation} with partitioned state.
keyBy
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
Apache-2.0
public KeyedStateTransformation<Tuple, T> keyBy(int... fields) { if (stream.getType() instanceof BasicArrayTypeInfo || stream.getType() instanceof PrimitiveArrayTypeInfo) { return keyBy(KeySelectorUtil.getSelectorForArray(fields, stream.getType())); } else { return keyBy(new Keys.ExpressionKeys<>(fields, stream.getType())); } }
Partitions the operator state of an {@link OperatorTransformation} by the given key positions. @param fields The positions of the fields on which the {@code OperatorTransformation} will be grouped. @return The {@code OperatorTransformation} with partitioned state.
keyBy
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OneInputStateTransformation.java
Apache-2.0
public static <T> OneInputStateTransformation<T> bootstrapWith(DataStream<T> stream) { return new OneInputStateTransformation<>(stream, 0L); }
Creates a new {@link OneInputStateTransformation} from a {@link DataStream}. @param stream A data stream of elements. @param <T> The type of the input. @return A {@link OneInputStateTransformation}.
bootstrapWith
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OperatorTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OperatorTransformation.java
Apache-2.0
public static <T> OneInputStateTransformation<T> bootstrapWith( DataStream<T> stream, long checkpointId) { return new OneInputStateTransformation<>(stream, checkpointId); }
Creates a new {@link OneInputStateTransformation} from a {@link DataStream}. @param stream A data stream of elements. @param checkpointId The checkpoint ID to associate with the bootstrapped state. @param <T> The type of the input. @return A {@link OneInputStateTransformation}.
bootstrapWith
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OperatorTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/OperatorTransformation.java
Apache-2.0
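The two bootstrapWith overloads above implement a common Java idiom: the short overload supplies a default (checkpoint id 0) by delegating to the explicit one. A minimal stand-alone sketch of that pattern, using a hypothetical Transformation class rather than the real Flink types:

```java
public class Transformation {
    private final long checkpointId;

    private Transformation(long checkpointId) {
        this.checkpointId = checkpointId;
    }

    // The one-argument-free factory delegates to the explicit version with
    // checkpoint id 0L, mirroring OperatorTransformation.bootstrapWith.
    public static Transformation bootstrapWith() {
        return bootstrapWith(0L);
    }

    public static Transformation bootstrapWith(long checkpointId) {
        return new Transformation(checkpointId);
    }

    public long checkpointId() {
        return checkpointId;
    }
}
```

Keeping the default in a delegating overload, rather than duplicating the constructor call, means the default value lives in exactly one place.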
public static SavepointReader read(StreamExecutionEnvironment env, String path) throws IOException { CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path); int maxParallelism = metadata.getOperatorStates().stream() .map(OperatorState::getMaxParallelism) .max(Comparator.naturalOrder()) .orElseThrow( () -> new RuntimeException( "Savepoint must contain at least one operator state.")); SavepointMetadataV2 savepointMetadata = new SavepointMetadataV2( metadata.getCheckpointId(), maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates()); return new SavepointReader(env, savepointMetadata, null); }
Loads an existing savepoint. Useful if you want to query the state of an existing application. The savepoint will be read using the state backend defined via the cluster's configuration. @param env The execution environment used to transform the savepoint. @param path The path to an existing savepoint on disk. @return A {@link SavepointReader}.
read
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public static SavepointReader read( StreamExecutionEnvironment env, String path, StateBackend stateBackend) throws IOException { CheckpointMetadata metadata = SavepointLoader.loadSavepointMetadata(path); int maxParallelism = metadata.getOperatorStates().stream() .map(OperatorState::getMaxParallelism) .max(Comparator.naturalOrder()) .orElseThrow( () -> new RuntimeException( "Savepoint must contain at least one operator state.")); SavepointMetadataV2 savepointMetadata = new SavepointMetadataV2( metadata.getCheckpointId(), maxParallelism, metadata.getMasterStates(), metadata.getOperatorStates()); return new SavepointReader(env, savepointMetadata, stateBackend); }
Loads an existing savepoint. Useful if you want to query the state of an existing application. @param env The execution environment used to transform the savepoint. @param path The path to an existing savepoint on disk. @param stateBackend The state backend of the savepoint. @return A {@link SavepointReader}.
read
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
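The max-parallelism lookup inside both read overloads above reduces to a plain stream reduction: take the largest per-operator value and fail on empty input. That core step can be exercised without Flink; the class name below is hypothetical and the list of ints stands in for the operator states.

```java
import java.util.Comparator;
import java.util.List;

public class MaxParallelism {
    // Mirrors the stream pipeline in SavepointReader.read: pick the largest
    // per-operator max parallelism, failing if there are no operator states.
    public static int of(List<Integer> operatorMaxParallelisms) {
        return operatorMaxParallelisms.stream()
                .max(Comparator.naturalOrder())
                .orElseThrow(() -> new RuntimeException(
                        "Savepoint must contain at least one operator state."));
    }
}
```

Note the orElseThrow: an empty savepoint is a programming error here, so the code fails loudly rather than defaulting.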
public <T> DataStream<T> readListState( OperatorIdentifier identifier, String name, TypeInformation<T> typeInfo) throws IOException { return readListState(identifier, typeInfo, new ListStateDescriptor<>(name, typeInfo)); }
Reads operator {@code ListState} from a {@code Savepoint}. @param identifier The identifier of the operator. @param name The (unique) name for the state. @param typeInfo The type of the elements in the state. @param <T> The type of the values that are in the list state. @return A {@code DataStream} representing the elements in state. @throws IOException If the savepoint path is invalid or the uid does not exist.
readListState
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <T> DataStream<T> readUnionState( OperatorIdentifier identifier, String name, TypeInformation<T> typeInfo) throws IOException { return readUnionState(identifier, typeInfo, new ListStateDescriptor<>(name, typeInfo)); }
Reads operator {@code UnionState} from a {@code Savepoint}. @param identifier The identifier of the operator. @param name The (unique) name for the state. @param typeInfo The type of the elements in the state. @param <T> The type of the values that are in the union state. @return A {@code DataStream} representing the elements in state. @throws IOException If the savepoint path is invalid or the uid does not exist.
readUnionState
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <K, V> DataStream<Tuple2<K, V>> readBroadcastState( OperatorIdentifier identifier, String name, TypeInformation<K> keyTypeInfo, TypeInformation<V> valueTypeInfo) throws IOException { return readBroadcastState( identifier, keyTypeInfo, valueTypeInfo, new MapStateDescriptor<>(name, keyTypeInfo, valueTypeInfo)); }
Reads operator {@code BroadcastState} from a {@code Savepoint}. @param identifier The identifier of the operator. @param name The (unique) name for the state. @param keyTypeInfo The type information for the keys in the state. @param valueTypeInfo The type information for the values in the state. @param <K> The type of keys in state. @param <V> The type of values in state. @return A {@code DataStream} of key-value pairs from state. @throws IOException If the savepoint does not contain the specified uid.
readBroadcastState
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <K, OUT> DataStream<OUT> readKeyedState( OperatorIdentifier identifier, KeyedStateReaderFunction<K, OUT> function) throws IOException { TypeInformation<K> keyTypeInfo; TypeInformation<OUT> outType; try { keyTypeInfo = TypeExtractor.createTypeInfo( KeyedStateReaderFunction.class, function.getClass(), 0, null, null); } catch (InvalidTypesException e) { throw new InvalidProgramException( "The key type of the KeyedStateReaderFunction could not be automatically determined. Please use " + "Savepoint#readKeyedState(String, KeyedStateReaderFunction, TypeInformation, TypeInformation) instead.", e); } try { outType = TypeExtractor.getUnaryOperatorReturnType( function, KeyedStateReaderFunction.class, 0, 1, TypeExtractor.NO_INDEX, keyTypeInfo, Utils.getCallLocationName(), false); } catch (InvalidTypesException e) { throw new InvalidProgramException( "The output type of the KeyedStateReaderFunction could not be automatically determined. Please use " + "Savepoint#readKeyedState(String, KeyedStateReaderFunction, TypeInformation, TypeInformation) instead.", e); } return readKeyedState(identifier, function, keyTypeInfo, outType); }
Reads keyed state from an operator in a {@code Savepoint}. @param identifier The identifier of the operator. @param function The {@link KeyedStateReaderFunction} that is called for each key in state. @param <K> The type of the key in state. @param <OUT> The output type of the transform function. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain operator state with the given uid.
readKeyedState
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <K, OUT> DataStream<OUT> readKeyedState( OperatorIdentifier identifier, KeyedStateReaderFunction<K, OUT> function, TypeInformation<K> keyTypeInfo, TypeInformation<OUT> outTypeInfo) throws IOException { OperatorState operatorState = metadata.getOperatorState(identifier); KeyedStateInputFormat<K, VoidNamespace, OUT> inputFormat = new KeyedStateInputFormat<>( operatorState, stateBackend, MutableConfig.of(env.getConfiguration()), new KeyedStateReaderOperator<>(function, keyTypeInfo), env.getConfig()); return SourceBuilder.fromFormat(env, inputFormat, outTypeInfo); }
Reads keyed state from an operator in a {@code Savepoint}. @param identifier The identifier of the operator. @param function The {@link KeyedStateReaderFunction} that is called for each key in state. @param keyTypeInfo The type information of the key in state. @param outTypeInfo The type information of the output of the transform reader function. @param <K> The type of the key in state. @param <OUT> The output type of the transform function. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain operator state with the given uid.
readKeyedState
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <W extends Window> WindowSavepointReader<W> window(WindowAssigner<?, W> assigner) { Preconditions.checkNotNull(assigner, "The window assigner must not be null"); TypeSerializer<W> windowSerializer = assigner.getWindowSerializer(env.getConfig()); return window(windowSerializer); }
Reads window state from an operator in a {@code Savepoint}. This method supports reading from any type of window. @param assigner The {@link WindowAssigner} used to write out the operator. @return A {@link WindowSavepointReader}.
window
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public <W extends Window> WindowSavepointReader<W> window(TypeSerializer<W> windowSerializer) { Preconditions.checkNotNull(windowSerializer, "The window serializer must not be null"); return new WindowSavepointReader<>(env, metadata, stateBackend, windowSerializer); }
Reads window state from an operator in a {@code Savepoint}. This method supports reading from any type of window. @param windowSerializer The serializer used for the window type. @return A {@link WindowSavepointReader}.
window
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointReader.java
Apache-2.0
public static SavepointWriter fromExistingSavepoint( StreamExecutionEnvironment executionEnvironment, String path) throws IOException { return new SavepointWriter(readSavepointMetadata(path), null, executionEnvironment); }
Loads an existing savepoint. Useful if you want to modify or extend the state of an existing application. The savepoint will be written using the state backend defined via the cluster's configuration. @param executionEnvironment The execution environment used to transform the savepoint. @param path The path to an existing savepoint on disk. @return A {@link SavepointWriter}. @see #fromExistingSavepoint(StreamExecutionEnvironment, String, StateBackend) @see #withConfiguration(ConfigOption, Object)
fromExistingSavepoint
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public static SavepointWriter newSavepoint( StreamExecutionEnvironment executionEnvironment, int maxParallelism) { return new SavepointWriter( createSavepointMetadata(0L, maxParallelism), null, executionEnvironment); }
Creates a new savepoint. The savepoint will be written using the state backend defined via the cluster's configuration. @param executionEnvironment The execution environment used to transform the savepoint. @param maxParallelism The max parallelism of the savepoint. @return A {@link SavepointWriter}. @see #newSavepoint(StreamExecutionEnvironment, StateBackend, int) @see #withConfiguration(ConfigOption, Object)
newSavepoint
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public static SavepointWriter newSavepoint( StreamExecutionEnvironment executionEnvironment, long checkpointId, int maxParallelism) { return new SavepointWriter( createSavepointMetadata(checkpointId, maxParallelism), null, executionEnvironment); }
Creates a new savepoint. The savepoint will be written using the state backend defined via the cluster's configuration. @param executionEnvironment The execution environment used to transform the savepoint. @param checkpointId The checkpoint ID of the savepoint. @param maxParallelism The max parallelism of the savepoint. @return A {@link SavepointWriter}. @see #newSavepoint(StreamExecutionEnvironment, StateBackend, int) @see #withConfiguration(ConfigOption, Object)
newSavepoint
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public static SavepointWriter newSavepoint( StreamExecutionEnvironment executionEnvironment, StateBackend stateBackend, int maxParallelism) { return new SavepointWriter( createSavepointMetadata(0L, maxParallelism), stateBackend, executionEnvironment); }
Creates a new savepoint. @param stateBackend The state backend of the savepoint used for keyed state. @param maxParallelism The max parallelism of the savepoint. @return A {@link SavepointWriter}. @see #newSavepoint(StreamExecutionEnvironment, int)
newSavepoint
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public SavepointWriter removeOperator(OperatorIdentifier identifier) { metadata.removeOperator(identifier); return this; }
Drops an existing operator from the savepoint. @param identifier The identifier of the operator. @return A modified savepoint.
removeOperator
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public <T> SavepointWriter withOperator( OperatorIdentifier identifier, StateBootstrapTransformation<T> transformation) { metadata.addOperator(identifier, transformation); return this; }
Adds a new operator to the savepoint. @param identifier The identifier of the operator. @param transformation The operator to be included. @return The modified savepoint.
withOperator
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
public <T> SavepointWriter withConfiguration(ConfigOption<T> option, T value) { configuration.set(option, value); return this; }
Sets a configuration value that will be applied to the stream operators used to bootstrap a new savepoint. @param option The configuration option to set. @param value The value to be stored. @param <T> The type of the value to be stored. @return The modified savepoint.
withConfiguration
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/SavepointWriter.java
Apache-2.0
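removeOperator, withOperator, and withConfiguration above all follow the same fluent-mutator pattern: mutate internal state and return this so calls chain. A minimal stand-alone sketch of that design, using a hypothetical Writer class with string-keyed configuration instead of Flink's ConfigOption:

```java
import java.util.HashMap;
import java.util.Map;

public class Writer {
    private final Map<String, String> config = new HashMap<>();

    // Mutates the builder's state and returns this, so calls can be
    // chained the same way SavepointWriter#withConfiguration is used.
    public Writer withConfiguration(String option, String value) {
        config.put(option, value);
        return this;
    }

    public String get(String option) {
        return config.get(option);
    }
}
```

Unlike an immutable builder, each call here modifies the same instance; the return value exists purely for chaining, which matches how SavepointWriter's with-methods are documented above ("The modified savepoint").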
int getMaxParallelism(int globalMaxParallelism) { return operatorMaxParallelism.orElse(globalMaxParallelism); }
Returns the max parallelism for this operator, falling back to the global max parallelism when no operator-specific value is set. @param globalMaxParallelism The job-wide max parallelism used as the fallback. @return The max parallelism for this operator.
getMaxParallelism
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/StateBootstrapTransformation.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/StateBootstrapTransformation.java
Apache-2.0
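The getMaxParallelism body above is a one-liner over Optional: prefer the operator-level override, otherwise fall back to the global value. The same fallback can be shown self-contained with OptionalInt (the class name below is hypothetical):

```java
import java.util.OptionalInt;

public class Fallback {
    // Mirrors StateBootstrapTransformation#getMaxParallelism: use the
    // operator-level override when present, else the global value.
    public static int maxParallelism(OptionalInt operatorMax, int globalMax) {
        return operatorMax.orElse(globalMax);
    }
}
```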
public EvictingWindowSavepointReader<W> evictor() { return new EvictingWindowSavepointReader<>(env, metadata, stateBackend, windowSerializer); }
Reads from a window that uses an evictor.
evictor
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
Apache-2.0
public <K, T, ACC, R> DataStream<R> aggregate( String uid, AggregateFunction<T, ACC, R> aggregateFunction, TypeInformation<K> keyType, TypeInformation<ACC> accType, TypeInformation<R> outputType) throws IOException { return aggregate( uid, aggregateFunction, new PassThroughReader<>(), keyType, accType, outputType); }
Reads window state generated using an {@link AggregateFunction}. @param uid The uid of the operator. @param aggregateFunction The aggregate function used to create the window. @param keyType The key type of the window. @param accType The type information of the accumulator. @param outputType The output type of the reader function. @param <K> The type of the key. @param <T> The type of the values that are aggregated. @param <ACC> The type of the accumulator (intermediate aggregate state). @param <R> The type of the aggregated result. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain the specified uid.
aggregate
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
Apache-2.0
public <K, T, ACC, R, OUT> DataStream<OUT> aggregate( String uid, AggregateFunction<T, ACC, R> aggregateFunction, WindowReaderFunction<R, OUT, K, W> readerFunction, TypeInformation<K> keyType, TypeInformation<ACC> accType, TypeInformation<OUT> outputType) throws IOException { WindowReaderOperator<?, K, R, W, OUT> operator = WindowReaderOperator.aggregate( aggregateFunction, readerFunction, keyType, windowSerializer, accType); return readWindowOperator(uid, outputType, operator); }
Reads window state generated using an {@link AggregateFunction}. @param uid The uid of the operator. @param aggregateFunction The aggregate function used to create the window. @param readerFunction The window reader function. @param keyType The key type of the window. @param accType The type information of the accumulator function. @param outputType The output type of the reader function. @param <K> The type of the key. @param <T> The type of the values that are aggregated. @param <ACC> The type of the accumulator (intermediate aggregate state). @param <R> The type of the aggregated result. @param <OUT> The output type of the reader function. @return A {@code DataStream} of objects read from keyed state. @throws IOException If the savepoint does not contain the specified uid.
aggregate
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/WindowSavepointReader.java
Apache-2.0
@Override protected final Iterable<Tuple2<K, V>> getElements(OperatorStateBackend restoredBackend) throws Exception { Iterable<Map.Entry<K, V>> entries = restoredBackend.getBroadcastState(descriptor).entries(); return () -> StreamSupport.stream(entries.spliterator(), false) .map(entry -> Tuple2.of(entry.getKey(), entry.getValue())) .iterator(); }
Creates an input format for reading broadcast state from an operator in a savepoint. @param operatorState The state to be queried. @param configuration The cluster configuration for restoring the backend. @param backend The state backend used to restore the state. @param descriptor The descriptor for this state, providing a name and serializer.
getElements
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/BroadcastStateInputFormat.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/BroadcastStateInputFormat.java
Apache-2.0
@Override protected final Iterable<OT> getElements(OperatorStateBackend restoredBackend) throws Exception { return restoredBackend.getListState(descriptor).get(); }
Creates an input format for reading list state from an operator in a savepoint. @param operatorState The state to be queried. @param configuration The cluster configuration for restoring the backend. @param backend The state backend used to restore the state. @param descriptor The descriptor for this state, providing a name and serializer.
getElements
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/ListStateInputFormat.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/ListStateInputFormat.java
Apache-2.0
public static <OUT> DataStreamSource<OUT> fromFormat( StreamExecutionEnvironment env, InputFormat<OUT, ?> inputFormat, TypeInformation<OUT> typeInfo) { InputFormatSourceFunction<OUT> function = new InputFormatSourceFunction<>(inputFormat, typeInfo); env.clean(function); final StreamSource<OUT, ?> sourceOperator = new StreamSource<>(function); return new DataStreamSource<>( env, typeInfo, sourceOperator, true, SOURCE_NAME, Boundedness.BOUNDED); }
Creates a new source that is bounded. @param env The stream execution environment. @param inputFormat The input source to consume. @param typeInfo The type of the output. @param <OUT> The output type. @return A source that is bounded.
fromFormat
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/SourceBuilder.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/SourceBuilder.java
Apache-2.0
@Override protected final Iterable<OT> getElements(OperatorStateBackend restoredBackend) throws Exception { return restoredBackend.getUnionListState(descriptor).get(); }
Creates an input format for reading union state from an operator in a savepoint. @param operatorState The state to be queried. @param configuration The cluster configuration for restoring the backend. @param backend The state backend used to restore the state. @param descriptor The descriptor for this state, providing a name and serializer.
getElements
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/UnionStateInputFormat.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/UnionStateInputFormat.java
Apache-2.0
@Override public Iterable<R> transform(Iterable<StreamRecord<IN>> elements) throws Exception { ACC acc = aggFunction.createAccumulator(); for (StreamRecord<IN> element : elements) { acc = aggFunction.add(element.getValue(), acc); } R result = aggFunction.getResult(acc); return Collections.singletonList(result); }
A wrapper for reading an evicting window operator with an aggregate function.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/AggregateEvictingWindowReaderFunction.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/AggregateEvictingWindowReaderFunction.java
Apache-2.0
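The transform above is the canonical accumulator fold: create an accumulator, add every buffered record, then extract one result. A minimal standalone sketch of that pattern follows; the `AggregateFunction` stand-in, the class name `AggregateSketch`, and the `AVG` example are all hypothetical, not Flink's actual API.

```java
// Hypothetical standalone sketch of the accumulator fold used above:
// createAccumulator -> add each element -> getResult.
public class AggregateSketch {

    // Minimal stand-in for Flink's AggregateFunction<IN, ACC, OUT> contract.
    interface AggregateFunction<IN, ACC, OUT> {
        ACC createAccumulator();
        ACC add(IN value, ACC accumulator);
        OUT getResult(ACC accumulator);
    }

    public static <IN, ACC, OUT> OUT aggregateAll(
            Iterable<IN> elements, AggregateFunction<IN, ACC, OUT> fn) {
        ACC acc = fn.createAccumulator();
        for (IN element : elements) {
            acc = fn.add(element, acc);
        }
        return fn.getResult(acc);
    }

    // Example: average of integers, accumulating (sum, count) in a long[2].
    public static final AggregateFunction<Integer, long[], Double> AVG =
            new AggregateFunction<Integer, long[], Double>() {
                public long[] createAccumulator() { return new long[] {0L, 0L}; }
                public long[] add(Integer value, long[] acc) {
                    acc[0] += value;
                    acc[1] += 1;
                    return acc;
                }
                public Double getResult(long[] acc) {
                    return acc[1] == 0 ? Double.NaN : (double) acc[0] / acc[1];
                }
            };
}
```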
@Override public Iterable<IN> transform(Iterable<StreamRecord<IN>> elements) throws Exception { return () -> StreamSupport.stream(elements.spliterator(), false) .map(StreamRecord::getValue) .iterator(); }
A wrapper function for reading an evicting window with no pre-aggregation.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/ProcessEvictingWindowReader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/ProcessEvictingWindowReader.java
Apache-2.0
@Override public Iterable<IN> transform(Iterable<StreamRecord<IN>> elements) throws Exception { IN curr = null; for (StreamRecord<IN> element : elements) { if (curr == null) { curr = element.getValue(); } else { curr = reduceFunction.reduce(curr, element.getValue()); } } return Collections.singletonList(curr); }
A wrapper function for reading state from an evicting window operator with a reduce function.
transform
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/ReduceEvictingWindowReaderFunction.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/operator/window/ReduceEvictingWindowReaderFunction.java
Apache-2.0
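Unlike the accumulator-based variant, the reduce loop above has no identity value: the first element seeds the fold and later elements are combined pairwise. A generic sketch of the same null-seeded fold (class and method names hypothetical):

```java
import java.util.function.BinaryOperator;

// Hypothetical sketch of the null-seeded reduce loop above: the first element
// becomes the initial value, subsequent elements are folded in pairwise.
public class ReduceSketch {
    public static <T> T reduceAll(Iterable<T> elements, BinaryOperator<T> reduceFunction) {
        T curr = null;
        for (T element : elements) {
            curr = (curr == null) ? element : reduceFunction.apply(curr, element);
        }
        return curr; // null when the input is empty
    }
}
```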
@Override public void reduce(Iterable<OperatorState> values, Collector<CheckpointMetadata> out) { CheckpointMetadata metadata = new CheckpointMetadata( checkpointId, StreamSupport.stream(values.spliterator(), false) .collect(Collectors.toList()), masterStates); out.collect(metadata); }
A reducer that aggregates multiple {@link OperatorState}s into a single {@link CheckpointMetadata}.
reduce
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/MergeOperatorStates.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/MergeOperatorStates.java
Apache-2.0
@Override public void flatMap(OperatorState operatorState, Collector<Path> out) throws Exception { for (OperatorSubtaskState subTaskState : operatorState.getSubtaskStates().values()) { // managed operator state for (OperatorStateHandle operatorStateHandle : subTaskState.getManagedOperatorState()) { Path path = getStateFilePathFromStreamStateHandle(operatorStateHandle); if (path != null) { out.collect(path); } } // managed keyed state for (KeyedStateHandle keyedStateHandle : subTaskState.getManagedKeyedState()) { if (keyedStateHandle instanceof KeyGroupsStateHandle) { Path path = getStateFilePathFromStreamStateHandle( (KeyGroupsStateHandle) keyedStateHandle); if (path != null) { out.collect(path); } } } // raw operator state for (OperatorStateHandle operatorStateHandle : subTaskState.getRawOperatorState()) { Path path = getStateFilePathFromStreamStateHandle(operatorStateHandle); if (path != null) { out.collect(path); } } // raw keyed state for (KeyedStateHandle keyedStateHandle : subTaskState.getRawKeyedState()) { if (keyedStateHandle instanceof KeyGroupsStateHandle) { Path path = getStateFilePathFromStreamStateHandle( (KeyGroupsStateHandle) keyedStateHandle); if (path != null) { out.collect(path); } } } } }
Extracts all file paths that are part of the provided {@link OperatorState}.
flatMap
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/StatePathExtractor.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/StatePathExtractor.java
Apache-2.0
private @Nullable Path getStateFilePathFromStreamStateHandle(StreamStateHandle handle) { if (handle instanceof FileStateHandle) { return ((FileStateHandle) handle).getFilePath(); } else if (handle instanceof OperatorStateHandle) { return getStateFilePathFromStreamStateHandle( ((OperatorStateHandle) handle).getDelegateStateHandle()); } else if (handle instanceof KeyedStateHandle) { if (handle instanceof KeyGroupsStateHandle) { return getStateFilePathFromStreamStateHandle( ((KeyGroupsStateHandle) handle).getDelegateStateHandle()); } // other KeyedStateHandles either do not contain a FileStateHandle, or are not part of a // savepoint } return null; }
This method recursively looks for the contained {@link FileStateHandle}s in a given {@link StreamStateHandle}. @param handle the {@code StreamStateHandle} to check for a contained {@code FileStateHandle} @return the file path if the given {@code StreamStateHandle} contains a {@code FileStateHandle} object, null otherwise
getStateFilePathFromStreamStateHandle
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/StatePathExtractor.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/StatePathExtractor.java
Apache-2.0
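The resolver above keeps unwrapping delegate handles until it reaches a file-backed one, and returns null for anything else. The same unwrap-the-delegate recursion can be sketched over a toy handle hierarchy; every type name below is hypothetical, not Flink's.

```java
// Toy model of the recursive delegate unwrapping above (all types hypothetical):
// file handles carry a path, wrapper handles delegate to an inner handle, and
// everything else resolves to null.
public class HandleSketch {

    interface Handle {}

    static final class FileHandle implements Handle {
        final String path;
        FileHandle(String path) { this.path = path; }
    }

    static final class WrapperHandle implements Handle {
        final Handle delegate;
        WrapperHandle(Handle delegate) { this.delegate = delegate; }
    }

    static String resolvePath(Handle handle) {
        if (handle instanceof FileHandle) {
            return ((FileHandle) handle).path;
        }
        if (handle instanceof WrapperHandle) {
            // Recurse into the delegate, like unwrapping an OperatorStateHandle
            // or KeyGroupsStateHandle in the method above.
            return resolvePath(((WrapperHandle) handle).delegate);
        }
        return null; // handle is not file-backed
    }
}
```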
public static Configuration of(ReadableConfig config) { if (!(config instanceof Configuration)) { throw new IllegalStateException( "Unexpected implementation of ReadableConfig: " + config.getClass()); } return new Configuration((Configuration) config); }
Creates a new {@link Configuration}. @param config A readable configuration. @return A mutable Configuration.
of
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/MutableConfig.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/MutableConfig.java
Apache-2.0
public static CheckpointMetadata loadSavepointMetadata(String savepointPath) throws IOException { CompletedCheckpointStorageLocation location = AbstractFsCheckpointStorageAccess.resolveCheckpointPointer(savepointPath); try (DataInputStream stream = new DataInputStream(location.getMetadataHandle().openInputStream())) { return Checkpoints.loadCheckpointMetadata( stream, Thread.currentThread().getContextClassLoader(), savepointPath); } }
Takes the given string (representing a pointer to a checkpoint) and resolves it to a file status for the checkpoint's metadata file. <p>This should only be used when the user code class loader is the current classloader for the thread. @param savepointPath The path to an external savepoint. @return A state handle to the savepoint's metadata. @throws IOException Thrown if the path cannot be resolved, the file system cannot be accessed, or the path points to a location that does not seem to be a savepoint.
loadSavepointMetadata
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/SavepointLoader.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/SavepointLoader.java
Apache-2.0
public List<OperatorState> getExistingOperators() { return operatorStateIndex.values().stream() .filter(OperatorStateSpecV2::isExistingState) .map(OperatorStateSpecV2::asExistingState) .collect(Collectors.toList()); }
@return List of {@link OperatorState} that already exist within the savepoint.
getExistingOperators
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/metadata/SavepointMetadataV2.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/metadata/SavepointMetadataV2.java
Apache-2.0
public List<StateBootstrapTransformationWithID<?>> getNewOperators() { return operatorStateIndex.values().stream() .filter(OperatorStateSpecV2::isNewStateTransformation) .map(OperatorStateSpecV2::asNewStateTransformation) .collect(Collectors.toList()); }
@return List of new operator states for the savepoint, represented by their target {@link OperatorID} and {@link StateBootstrapTransformation}.
getNewOperators
java
apache/flink
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/metadata/SavepointMetadataV2.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/runtime/metadata/SavepointMetadataV2.java
Apache-2.0
@Override protected Tuple2<Configuration, EmbeddedRocksDBStateBackend> getStateBackendTuple() { return Tuple2.of( new Configuration().set(StateBackendOptions.STATE_BACKEND, "rocksdb"), new EmbeddedRocksDBStateBackend()); }
IT Case for reading window state with the embedded rocksdb state backend.
getStateBackendTuple
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/EmbeddedRocksDBStateBackendWindowITCase.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/EmbeddedRocksDBStateBackendWindowITCase.java
Apache-2.0
@Override protected Tuple2<Configuration, HashMapStateBackend> getStateBackendTuple() { return Tuple2.of( new Configuration().set(StateBackendOptions.STATE_BACKEND, "hashmap"), new HashMapStateBackend()); }
IT Case for reading window state with the hashmap state backend.
getStateBackendTuple
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/HashMapStateBackendWindowITCase.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/HashMapStateBackendWindowITCase.java
Apache-2.0
@Test public void testIteratorPullsSingleKeyFromAllDescriptors() throws AssertionError { CountingKeysKeyedStateBackend keyedStateBackend = createCountingKeysKeyedStateBackend(100_000_000); MultiStateKeyIterator<Integer> testedIterator = new MultiStateKeyIterator<>(descriptors, keyedStateBackend); testedIterator.hasNext(); Assert.assertEquals( "Unexpected number of keys enumerated", 1, keyedStateBackend.numberOfKeysEnumerated); }
Test for lazy enumeration of inner iterators.
testIteratorPullsSingleKeyFromAllDescriptors
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/input/MultiStateKeyIteratorTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/input/MultiStateKeyIteratorTest.java
Apache-2.0
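The test above pins down a laziness contract: a single hasNext() call must enumerate at most one key, even when the backend holds a hundred million. A toy illustration of the same contract, using a counting peek() on a lazily consumed stream; the class and field names are hypothetical, and this relies on the JDK's stream-iterator adapter buffering exactly one element per hasNext().

```java
import java.util.Iterator;
import java.util.stream.IntStream;

// Hypothetical illustration (not Flink's MultiStateKeyIterator) of the laziness
// contract tested above: the key stream is consumed one element at a time, so
// a single hasNext() pulls only one key out of a huge range.
public class LazyKeyIteratorSketch {

    public static int keysEnumerated = 0;

    public static Iterator<Integer> hugeKeyStream(int n) {
        return IntStream.range(0, n)
                .peek(k -> keysEnumerated++) // counts every key actually pulled
                .iterator();
    }
}
```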
@Override public StateBackend createFromConfig(ReadableConfig config, ClassLoader classLoader) throws IllegalConfigurationException, IOException { throw new ExpectedException(); }
A simple custom {@link StateBackendFactory} that throws an exception.
createFromConfig
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/CustomStateBackendFactory.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/utils/CustomStateBackendFactory.java
Apache-2.0
@Override public TypeInformation<?> getTypeInformation() { return new GenericRecordAvroTypeInfo(AvroRecord.getClassSchema()); }
{@link SavepointTypeInformationFactory} for generic avro record.
getTypeInformation
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/GenericAvroSavepointTypeInformationFactory.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/GenericAvroSavepointTypeInformationFactory.java
Apache-2.0
@Test public void testReadMetadata() throws Exception { Configuration config = new Configuration(); config.set(RUNTIME_MODE, RuntimeExecutionMode.BATCH); StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config); StreamTableEnvironment tEnv = StreamTableEnvironment.create(env); tEnv.executeSql("LOAD MODULE state"); Table table = tEnv.sqlQuery("SELECT * FROM savepoint_metadata('src/test/resources/table-state')"); List<Row> result = tEnv.toDataStream(table).executeAndCollect(100); result.sort(Comparator.comparing(a -> ((String) a.getField("operator-uid-hash")))); assertThat(result.size()).isEqualTo(7); Iterator<Row> it = result.iterator(); assertThat(it.next().toString()) .isEqualTo( "+I[2, Source: broadcast-source, broadcast-source-uid, 3a6f51704798c4f418be51bfb6813b77, 1, 128, 0, 0, 0]"); assertThat(it.next().toString()) .isEqualTo( "+I[2, keyed-broadcast-process, keyed-broadcast-process-uid, 413c1d6f88ee8627fe4b8bc533b4cf1b, 2, 128, 2, 0, 4548]"); }
Unit tests for the savepoint metadata SQL reader.
testReadMetadata
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/SavepointMetadataDynamicTableSourceTest.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/SavepointMetadataDynamicTableSourceTest.java
Apache-2.0
@Override public TypeInformation<?> getTypeInformation() { return new AvroTypeInfo<>(AvroRecord.class); }
{@link SavepointTypeInformationFactory} for specific avro record.
getTypeInformation
java
apache/flink
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/SpecificAvroSavepointTypeInformationFactory.java
https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/table/SpecificAvroSavepointTypeInformationFactory.java
Apache-2.0
static EventBuilder builder(Class<?> classScope, String name) { return new EventBuilder(classScope, name); }
{@link Event} represents an event that happened in Flink for reporting, e.g. a completed checkpoint or a restart.
builder
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/Event.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/Event.java
Apache-2.0
public EventBuilder setObservedTsMillis(long observedTsMillis) { this.observedTsMillis = observedTsMillis; return this; }
Sets the timestamp for when the event happened or was observed, in milliseconds.
setObservedTsMillis
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
Apache-2.0
public EventBuilder setName(String name) { this.name = name; return this; }
Sets the name of the event.
setName
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
Apache-2.0
public EventBuilder setBody(String body) { this.body = body; return this; }
Sets the textual description of the event.
setBody
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
Apache-2.0
public EventBuilder setSeverity(String severity) { this.severity = severity; return this; }
Sets the severity of the event, e.g. DEBUG, INFO, ...
setSeverity
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/events/EventBuilder.java
Apache-2.0
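The setters above (together with setObservedTsMillis) all follow the standard fluent-builder shape: mutate one field and return `this` so calls chain. A toy sketch of that shape; the class name, fields, and string produced by `build()` are hypothetical, not Flink's Event API.

```java
// Hypothetical fluent builder mirroring the setters above: each setter mutates
// one field and returns `this` so calls can be chained.
public class EventSketchBuilder {
    private String name;
    private String body;
    private String severity = "INFO"; // assumed default, for illustration only

    public EventSketchBuilder setName(String name) { this.name = name; return this; }
    public EventSketchBuilder setBody(String body) { this.body = body; return this; }
    public EventSketchBuilder setSeverity(String severity) { this.severity = severity; return this; }

    public String build() {
        return severity + ": " + name + " - " + body;
    }
}
```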
@Override default MetricType getMetricType() { return MetricType.GAUGE; }
Calculates and returns the measured value. @return calculated value
getMetricType
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Gauge.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Gauge.java
Apache-2.0
@Override default MetricType getMetricType() { return MetricType.HISTOGRAM; }
Create statistics for the currently recorded elements. @return Statistics about the currently recorded elements
getMetricType
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Histogram.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Histogram.java
Apache-2.0
static LogicalScopeProvider castFrom(MetricGroup metricGroup) throws IllegalStateException { if (metricGroup instanceof LogicalScopeProvider) { return (LogicalScopeProvider) metricGroup; } else { throw new IllegalStateException( "The given metric group does not implement the LogicalScopeProvider interface."); } }
Casts the given metric group to a {@link LogicalScopeProvider}, if it implements the interface. @param metricGroup metric group to cast @return cast metric group @throws IllegalStateException if the metric group does not implement the LogicalScopeProvider interface
castFrom
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/LogicalScopeProvider.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/LogicalScopeProvider.java
Apache-2.0
@Override default MetricType getMetricType() { return MetricType.METER; }
Get number of events marked on the meter. @return number of events marked on the meter
getMetricType
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Meter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Meter.java
Apache-2.0
default MetricType getMetricType() { throw new UnsupportedOperationException("Custom metric types are not supported."); }
Common super interface for all metrics.
getMetricType
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Metric.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/Metric.java
Apache-2.0
public int getInteger(String key, int defaultValue) { String argument = getProperty(key, null); return argument == null ? defaultValue : Integer.parseInt(argument); }
Searches for the property with the specified key in this property list. If the key is not found in this property list, the default property list, and its defaults, recursively, are then checked. The method returns the default value argument if the property is not found. @param key the hashtable key. @param defaultValue a default value. @return the value in this property list with the specified key, parsed as an int.
getInteger
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
Apache-2.0
public long getLong(String key, long defaultValue) { String argument = getProperty(key, null); return argument == null ? defaultValue : Long.parseLong(argument); }
Searches for the property with the specified key in this property list. If the key is not found in this property list, the default property list, and its defaults, recursively, are then checked. The method returns the default value argument if the property is not found. @param key the hashtable key. @param defaultValue a default value. @return the value in this property list with the specified key, parsed as a long.
getLong
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
Apache-2.0
public float getFloat(String key, float defaultValue) { String argument = getProperty(key, null); return argument == null ? defaultValue : Float.parseFloat(argument); }
Searches for the property with the specified key in this property list. If the key is not found in this property list, the default property list, and its defaults, recursively, are then checked. The method returns the default value argument if the property is not found. @param key the hashtable key. @param defaultValue a default value. @return the value in this property list with the specified key, parsed as a float.
getFloat
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
Apache-2.0
public double getDouble(String key, double defaultValue) { String argument = getProperty(key, null); return argument == null ? defaultValue : Double.parseDouble(argument); }
Searches for the property with the specified key in this property list. If the key is not found in this property list, the default property list, and its defaults, recursively, are then checked. The method returns the default value argument if the property is not found. @param key the hashtable key. @param defaultValue a default value. @return the value in this property list with the specified key, parsed as a double.
getDouble
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
Apache-2.0
public boolean getBoolean(String key, boolean defaultValue) { String argument = getProperty(key, null); return argument == null ? defaultValue : Boolean.parseBoolean(argument); }
Searches for the property with the specified key in this property list. If the key is not found in this property list, the default property list, and its defaults, recursively, are then checked. The method returns the default value argument if the property is not found. @param key the hashtable key. @param defaultValue a default value. @return the value in this property list with the specified key, parsed as a boolean.
getBoolean
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricConfig.java
Apache-2.0
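All five MetricConfig getters above share one shape: fetch the raw property string, fall back to the default when the key is absent, otherwise parse it into the requested primitive. A standalone sketch of two of them; the class name is hypothetical, and extending java.util.Properties is an assumption suggested by the use of getProperty.

```java
import java.util.Properties;

// Hypothetical sketch of the lookup-with-default pattern used by the getters
// above: missing key -> default value, present key -> parse the raw string.
public class MetricConfigSketch extends Properties {

    public int getInteger(String key, int defaultValue) {
        String argument = getProperty(key, null);
        return argument == null ? defaultValue : Integer.parseInt(argument);
    }

    public boolean getBoolean(String key, boolean defaultValue) {
        String argument = getProperty(key, null);
        return argument == null ? defaultValue : Boolean.parseBoolean(argument);
    }
}
```

Note that an unparsable value (e.g. "abc" for an int) propagates a NumberFormatException rather than falling back to the default.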
default <H extends Histogram> H histogram(int name, H histogram) { return histogram(String.valueOf(name), histogram); }
Registers a new {@link Histogram} with Flink. @param name name of the histogram @param histogram histogram to register @param <H> histogram type @return the registered histogram
histogram
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
Apache-2.0
default <M extends Meter> M meter(int name, M meter) { return meter(String.valueOf(name), meter); }
Registers a new {@link Meter} with Flink. @param name name of the meter @param meter meter to register @param <M> meter type @return the registered meter
meter
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
Apache-2.0
default MetricGroup addGroup(int name) { return addGroup(String.valueOf(name)); }
Creates a new MetricGroup and adds it to this group's sub-groups. @param name name of the group @return the created group
addGroup
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/MetricGroup.java
Apache-2.0
public SpanBuilder setStartTsMillis(long startTsMillis) { this.startTsMillis = startTsMillis; return this; }
Optionally you can manually set the {@link Span}'s startTs. If not specified, {@code System.currentTimeMillis()} will be used.
setStartTsMillis
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/traces/SpanBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/traces/SpanBuilder.java
Apache-2.0
public SpanBuilder setEndTsMillis(long endTsMillis) { this.endTsMillis = endTsMillis; return this; }
Optionally you can manually set the {@link Span}'s endTs. If not specified, {@code startTsMillis} will be used (see {@link #setStartTsMillis(long)}).
setEndTsMillis
java
apache/flink
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/traces/SpanBuilder.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/traces/SpanBuilder.java
Apache-2.0
protected void testHistogram(int size, Histogram histogram) { HistogramStatistics statistics; for (int i = 0; i < size; i++) { histogram.update(i); statistics = histogram.getStatistics(); assertThat(histogram.getCount()).isEqualTo(i + 1); assertThat(statistics.size()).isEqualTo(histogram.getCount()); assertThat(statistics.getMax()).isEqualTo(i); assertThat(statistics.getMin()).isEqualTo(0); } statistics = histogram.getStatistics(); assertThat(statistics.size()).isEqualTo(size); assertThat(statistics.getQuantile(0.5)).isCloseTo((size - 1) / 2.0, offset(0.001)); for (int i = size; i < 2 * size; i++) { histogram.update(i); statistics = histogram.getStatistics(); assertThat(histogram.getCount()).isEqualTo(i + 1); assertThat(statistics.size()).isEqualTo(size); assertThat(statistics.getMax()).isEqualTo(i); assertThat(statistics.getMin()).isEqualTo(i + 1 - size); } statistics = histogram.getStatistics(); assertThat(statistics.size()).isEqualTo(size); assertThat(statistics.getQuantile(0.5)).isCloseTo(size + (size - 1) / 2.0, offset(0.001)); }
Abstract base class for testing {@link Histogram} and {@link HistogramStatistics} implementations.
testHistogram
java
apache/flink
flink-metrics/flink-metrics-core/src/test/java/org/apache/flink/metrics/AbstractHistogramTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-core/src/test/java/org/apache/flink/metrics/AbstractHistogramTest.java
Apache-2.0
public String getDomain() { return domain; }
The data center to connect to.
getDomain
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DataCenter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DataCenter.java
Apache-2.0
private List<String> getTagsFromConfig(String str) { return Arrays.asList(str.split(",")); }
Gets the configured tags from the 'metrics.reporter.dghttp.tags' setting.
getTagsFromConfig
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
Apache-2.0
private String getVariableName(String str) { return str.substring(1, str.length() - 1); }
Removes leading and trailing angle brackets.
getVariableName
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
Apache-2.0
@Override public Number getMetricValue() { return gauge.getValue(); }
Mapping of gauge between Flink and Datadog.
getMetricValue
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DGauge.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DGauge.java
Apache-2.0
public void addTo(DSeries series) { final HistogramStatistics statistics = histogram.getStatistics(); // this selection is based on // https://docs.datadoghq.com/developers/metrics/types/?tab=histogram // we only exclude 'sum' (which is optional), because we cannot compute it // the semantics for count are also slightly different, because we don't reset it after a // report series.add(new StaticDMetric(statistics.getMean(), metaDataAvg)); series.add(new StaticDMetric(histogram.getCount(), metaDataCount)); series.add(new StaticDMetric(statistics.getQuantile(.5), metaDataMedian)); series.add(new StaticDMetric(statistics.getQuantile(.95), metaData95Percentile)); series.add(new StaticDMetric(statistics.getMin(), metaDataMin)); series.add(new StaticDMetric(statistics.getMax(), metaDataMax)); }
Maps histograms to Datadog gauges. <p>Note: We cannot map them to Datadog histograms because the HTTP API does not support them.
addTo
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DHistogram.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DHistogram.java
Apache-2.0
@JsonGetter(FIELD_NAME_SERIES) public List<DMetric> getSeries() { return series; }
Json serialization between Flink and Datadog.
getSeries
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DSeries.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DSeries.java
Apache-2.0
public Number getMetricValue() { return value; }
A {@link DMetric} that returns a fixed value.
getMetricValue
java
apache/flink
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/StaticDMetric.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/StaticDMetric.java
Apache-2.0
@Test void testMetricCleanup() { TestingScheduledDropwizardReporter rep = new TestingScheduledDropwizardReporter(); MetricGroup mp = new UnregisteredMetricsGroup(); Counter c = new SimpleCounter(); Meter m = new TestMeter(); Histogram h = new TestHistogram(); Gauge<?> g = () -> null; rep.notifyOfAddedMetric(c, "counter", mp); assertThat(rep.getCounters()).hasSize(1); assertThat(rep.registry.getCounters()).hasSize(1); rep.notifyOfAddedMetric(m, "meter", mp); assertThat(rep.getMeters()).hasSize(1); assertThat(rep.registry.getMeters()).hasSize(1); rep.notifyOfAddedMetric(h, "histogram", mp); assertThat(rep.getHistograms()).hasSize(1); assertThat(rep.registry.getHistograms()).hasSize(1); rep.notifyOfAddedMetric(g, "gauge", mp); assertThat(rep.getGauges()).hasSize(1); assertThat(rep.registry.getGauges()).hasSize(1); rep.notifyOfRemovedMetric(c, "counter", mp); assertThat(rep.getCounters()).hasSize(0); assertThat(rep.registry.getCounters()).hasSize(0); rep.notifyOfRemovedMetric(m, "meter", mp); assertThat(rep.getMeters()).hasSize(0); assertThat(rep.registry.getMeters()).hasSize(0); rep.notifyOfRemovedMetric(h, "histogram", mp); assertThat(rep.getHistograms()).hasSize(0); assertThat(rep.registry.getHistograms()).hasSize(0); rep.notifyOfRemovedMetric(g, "gauge", mp); assertThat(rep.getGauges()).hasSize(0); assertThat(rep.registry.getGauges()).hasSize(0); }
This test verifies that metrics are properly added and removed to/from the ScheduledDropwizardReporter and the underlying Dropwizard MetricRegistry.
testMetricCleanup
java
apache/flink
flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/ScheduledDropwizardReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/ScheduledDropwizardReporterTest.java
Apache-2.0
@Test void testDropwizardHistogramWrapperReporting() throws Exception { int size = 10; String histogramMetricName = "histogram"; final TestingReporter testingReporter = new TestingReporter(); testingReporter.open(new MetricConfig()); DropwizardHistogramWrapper histogramWrapper = new DropwizardHistogramWrapper( new com.codahale.metrics.Histogram(new SlidingWindowReservoir(size))); final MetricGroup metricGroup = TestMetricGroup.newBuilder().build(); testingReporter.notifyOfAddedMetric(histogramWrapper, histogramMetricName, metricGroup); // check that the metric has been registered assertThat(testingReporter.getMetrics()).hasSize(1); for (int i = 0; i < size; i++) { histogramWrapper.update(i); } testingReporter.report(); String fullMetricName = metricGroup.getMetricIdentifier(histogramMetricName); Snapshot snapshot = testingReporter.getNextHistogramSnapshot(fullMetricName); assertThat(snapshot.getMin()).isEqualTo(0); assertThat(snapshot.getMedian()).isCloseTo((size - 1) / 2.0, offset(0.001)); assertThat(snapshot.getMax()).isEqualTo(size - 1); assertThat(snapshot.size()).isEqualTo(size); testingReporter.notifyOfRemovedMetric(histogramWrapper, histogramMetricName, metricGroup); // check that the metric has been de-registered assertThat(testingReporter.getMetrics()).hasSize(0); }
Tests that the DropwizardHistogramWrapper reports correct dropwizard snapshots to the ScheduledReporter.
testDropwizardHistogramWrapperReporting
java
apache/flink
flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/metrics/DropwizardFlinkHistogramWrapperTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-dropwizard/src/test/java/org/apache/flink/dropwizard/metrics/DropwizardFlinkHistogramWrapperTest.java
Apache-2.0
@Override public ScheduledReporter getReporter(MetricConfig config) { String host = config.getString(ARG_HOST, null); int port = config.getInteger(ARG_PORT, -1); if (host == null || host.length() == 0 || port < 1) { throw new IllegalArgumentException( "Invalid host/port configuration. Host: " + host + " Port: " + port); } String prefix = config.getString(ARG_PREFIX, null); String conversionRate = config.getString(ARG_CONVERSION_RATE, null); String conversionDuration = config.getString(ARG_CONVERSION_DURATION, null); String protocol = config.getString(ARG_PROTOCOL, "TCP"); com.codahale.metrics.graphite.GraphiteReporter.Builder builder = com.codahale.metrics.graphite.GraphiteReporter.forRegistry(registry); if (prefix != null) { builder.prefixedWith(prefix); } if (conversionRate != null) { builder.convertRatesTo(TimeUnit.valueOf(conversionRate)); } if (conversionDuration != null) { builder.convertDurationsTo(TimeUnit.valueOf(conversionDuration)); } Protocol prot; try { prot = Protocol.valueOf(protocol); } catch (IllegalArgumentException iae) { log.warn( "Invalid protocol configuration: " + protocol + " Expected: TCP or UDP, defaulting to TCP."); prot = Protocol.TCP; } log.info( "Configured GraphiteReporter with {host:{}, port:{}, protocol:{}}", host, port, prot); switch (prot) { case UDP: return builder.build(new GraphiteUDP(host, port)); case TCP: default: return builder.build(new Graphite(host, port)); } }
This class acts as a factory for the {@link com.codahale.metrics.graphite.GraphiteReporter} and allows using it as a Flink reporter.
getReporter
java
apache/flink
flink-metrics/flink-metrics-graphite/src/main/java/org/apache/flink/metrics/graphite/GraphiteReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-graphite/src/main/java/org/apache/flink/metrics/graphite/GraphiteReporter.java
Apache-2.0
@Override public String toString() { return scheme; }
Supported URL schemes for the {@link InfluxdbReporter}.
toString
java
apache/flink
flink-metrics/flink-metrics-influxdb/src/main/java/org/apache/flink/metrics/influxdb/InfluxdbReporterOptions.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-influxdb/src/main/java/org/apache/flink/metrics/influxdb/InfluxdbReporterOptions.java
Apache-2.0
static String replaceInvalidChars(String str) { char[] chars = null; final int strLen = str.length(); int pos = 0; for (int i = 0; i < strLen; i++) { final char c = str.charAt(i); switch (c) { case '>': case '<': case '"': // remove character by not moving cursor if (chars == null) { chars = str.toCharArray(); } break; case ' ': if (chars == null) { chars = str.toCharArray(); } chars[pos++] = '_'; break; case ',': case '=': case ';': case ':': case '?': case '\'': case '*': if (chars == null) { chars = str.toCharArray(); } chars[pos++] = '-'; break; default: if (chars != null) { chars[pos] = c; } pos++; } } return chars == null ? str : new String(chars, 0, pos); }
Lightweight method to replace unsupported characters. If the string does not contain any unsupported characters, this method creates no new string (and in fact no new objects at all). <p>Replacements: <ul> <li>{@code "} is removed <li>{@code space} is replaced by {@code _} (underscore) <li>{@code , = ; : ? ' *} are replaced by {@code -} (hyphen) </ul>
replaceInvalidChars
java
apache/flink
flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
Apache-2.0
@Override public long getCount() { return counter.getCount(); }
The MBean interface for an exposed counter.
getCount
java
apache/flink
flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
Apache-2.0
@Override public Object getValue() { return gauge.getValue(); }
The MBean interface for an exposed gauge.
getValue
java
apache/flink
flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/main/java/org/apache/flink/metrics/jmx/JMXReporter.java
Apache-2.0
@Test void testGenerateTable() { Map<String, String> vars = new HashMap<>(); vars.put("key0", "value0"); vars.put("key1", "value1"); vars.put("\"key2,=;:?'", "\"value2 (test),=;:?'"); Hashtable<String, String> jmxTable = JMXReporter.generateJmxTable(vars); assertThat(jmxTable).containsEntry("key0", "value0"); assertThat(jmxTable).containsEntry("key0", "value0"); assertThat(jmxTable).containsEntry("key1", "value1"); assertThat(jmxTable).containsEntry("key2------", "value2_(test)------"); }
Verifies that the JMXReporter properly generates the JMX table.
testGenerateTable
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
Apache-2.0
@Test void testPortConflictHandling() throws Exception { final MetricReporter rep1 = new JMXReporter("9020-9035"); final MetricReporter rep2 = new JMXReporter("9020-9035"); Gauge<Integer> g1 = () -> 1; Gauge<Integer> g2 = () -> 2; rep1.notifyOfAddedMetric(g1, "rep1", metricGroup); rep2.notifyOfAddedMetric(g2, "rep2", metricGroup); MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer(); ObjectName objectName1 = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager.rep1", JMXReporter.generateJmxTable(metricGroup.getAllVariables())); ObjectName objectName2 = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager.rep2", JMXReporter.generateJmxTable(metricGroup.getAllVariables())); assertThat(mBeanServer.getAttribute(objectName1, "Value")).isEqualTo(1); assertThat(mBeanServer.getAttribute(objectName2, "Value")).isEqualTo(2); rep1.notifyOfRemovedMetric(g1, "rep1", null); rep1.notifyOfRemovedMetric(g2, "rep2", null); }
Verifies that multiple JMXReporters can be started on the same machine and register metrics at the MBeanServer. @throws Exception if the attribute/mbean could not be found or the test is broken
testPortConflictHandling
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
Apache-2.0
@Test void testJMXAvailability() throws Exception { final JMXReporter rep1 = new JMXReporter("9040-9055"); final JMXReporter rep2 = new JMXReporter("9040-9055"); Gauge<Integer> g1 = () -> 1; Gauge<Integer> g2 = () -> 2; rep1.notifyOfAddedMetric(g1, "rep1", metricGroup); rep2.notifyOfAddedMetric(g2, "rep2", metricGroup); ObjectName objectName1 = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager.rep1", JMXReporter.generateJmxTable(metricGroup.getAllVariables())); ObjectName objectName2 = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager.rep2", JMXReporter.generateJmxTable(metricGroup.getAllVariables())); JMXServiceURL url1 = new JMXServiceURL( "service:jmx:rmi://localhost:" + rep1.getPort().get() + "/jndi/rmi://localhost:" + rep1.getPort().get() + "/jmxrmi"); JMXConnector jmxCon1 = JMXConnectorFactory.connect(url1); MBeanServerConnection mCon1 = jmxCon1.getMBeanServerConnection(); assertThat(mCon1.getAttribute(objectName1, "Value")).isEqualTo(1); assertThat(mCon1.getAttribute(objectName2, "Value")).isEqualTo(2); jmxCon1.close(); JMXServiceURL url2 = new JMXServiceURL( "service:jmx:rmi://localhost:" + rep2.getPort().get() + "/jndi/rmi://localhost:" + rep2.getPort().get() + "/jmxrmi"); JMXConnector jmxCon2 = JMXConnectorFactory.connect(url2); MBeanServerConnection mCon2 = jmxCon2.getMBeanServerConnection(); assertThat(mCon2.getAttribute(objectName1, "Value")).isEqualTo(1); assertThat(mCon2.getAttribute(objectName2, "Value")).isEqualTo(2); // JMX Server URL should be identical since we made it static. assertThat(url2).isEqualTo(url1); rep1.notifyOfRemovedMetric(g1, "rep1", null); rep1.notifyOfRemovedMetric(g2, "rep2", null); jmxCon2.close(); rep1.close(); rep2.close(); }
Verifies that we can connect to multiple JMXReporters running on the same machine.
testJMXAvailability
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
Apache-2.0
@Test void testHistogramReporting() throws Exception { String histogramName = "histogram"; final JMXReporter reporter = new JMXReporter(null); TestHistogram histogram = new TestHistogram(); reporter.notifyOfAddedMetric(histogram, histogramName, metricGroup); MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer(); ObjectName objectName = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager." + histogramName, JMXReporter.generateJmxTable(metricGroup.getAllVariables())); MBeanInfo info = mBeanServer.getMBeanInfo(objectName); MBeanAttributeInfo[] attributeInfos = info.getAttributes(); assertThat(attributeInfos).hasSize(11); assertThat(mBeanServer.getAttribute(objectName, "Count")).isEqualTo(histogram.getCount()); HistogramStatistics statistics = histogram.getStatistics(); assertThat(mBeanServer.getAttribute(objectName, "Mean")).isEqualTo(statistics.getMean()); assertThat(mBeanServer.getAttribute(objectName, "StdDev")) .isEqualTo(statistics.getStdDev()); assertThat(mBeanServer.getAttribute(objectName, "Max")).isEqualTo(statistics.getMax()); assertThat(mBeanServer.getAttribute(objectName, "Min")).isEqualTo(statistics.getMin()); assertThat(mBeanServer.getAttribute(objectName, "Median")) .isEqualTo(statistics.getQuantile(0.5)); assertThat(mBeanServer.getAttribute(objectName, "75thPercentile")) .isEqualTo(statistics.getQuantile(0.75)); assertThat(mBeanServer.getAttribute(objectName, "95thPercentile")) .isEqualTo(statistics.getQuantile(0.95)); assertThat(mBeanServer.getAttribute(objectName, "98thPercentile")) .isEqualTo(statistics.getQuantile(0.98)); assertThat(mBeanServer.getAttribute(objectName, "99thPercentile")) .isEqualTo(statistics.getQuantile(0.99)); assertThat(mBeanServer.getAttribute(objectName, "999thPercentile")) .isEqualTo(statistics.getQuantile(0.999)); }
Tests that histograms are properly reported via the JMXReporter.
testHistogramReporting
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
Apache-2.0
@Test void testMeterReporting() throws Exception { String meterName = "meter"; final JMXReporter reporter = new JMXReporter(null); TestMeter meter = new TestMeter(); reporter.notifyOfAddedMetric(meter, meterName, metricGroup); MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer(); ObjectName objectName = new ObjectName( JMX_DOMAIN_PREFIX + "taskmanager." + meterName, JMXReporter.generateJmxTable(metricGroup.getAllVariables())); MBeanInfo info = mBeanServer.getMBeanInfo(objectName); MBeanAttributeInfo[] attributeInfos = info.getAttributes(); assertThat(attributeInfos).hasSize(2); assertThat(mBeanServer.getAttribute(objectName, "Rate")).isEqualTo(meter.getRate()); assertThat(mBeanServer.getAttribute(objectName, "Count")).isEqualTo(meter.getCount()); }
Tests that meters are properly reported via the JMXReporter.
testMeterReporting
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/metrics/jmx/JMXReporterTest.java
Apache-2.0
private static Configuration getConfiguration() { Configuration flinkConfiguration = new Configuration(); MetricOptions.forReporter(flinkConfiguration, "test") .set(MetricOptions.REPORTER_FACTORY_CLASS, JMXReporterFactory.class.getName()); flinkConfiguration.set(MetricOptions.SCOPE_NAMING_JM_JOB, "jobmanager.<job_name>"); return flinkConfiguration; }
Tests to verify JMX reporter functionality on the JobManager.
getConfiguration
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/runtime/jobmanager/JMXJobManagerMetricTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/runtime/jobmanager/JMXJobManagerMetricTest.java
Apache-2.0
@Test void testJobManagerJMXMetricAccess(@InjectClusterClient ClusterClient<?> client) throws Exception { Deadline deadline = Deadline.now().plus(Duration.ofMinutes(2)); try { JobVertex sourceJobVertex = new JobVertex("Source"); sourceJobVertex.setInvokableClass(BlockingInvokable.class); sourceJobVertex.setParallelism(1); final JobCheckpointingSettings jobCheckpointingSettings = new JobCheckpointingSettings( CheckpointCoordinatorConfiguration.builder().build(), null); final JobGraph jobGraph = JobGraphBuilder.newStreamingJobGraphBuilder() .setJobName("TestingJob") .addJobVertex(sourceJobVertex) .setJobCheckpointingSettings(jobCheckpointingSettings) .build(); client.submitJob(jobGraph).get(); FutureUtils.retrySuccessfulWithDelay( () -> client.getJobStatus(jobGraph.getJobID()), Duration.ofMillis(10), deadline, status -> status == JobStatus.RUNNING, new ScheduledExecutorServiceAdapter(EXECUTOR_RESOURCE.getExecutor())) .get(deadline.timeLeft().toMillis(), TimeUnit.MILLISECONDS); MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer(); Set<ObjectName> nameSet = mBeanServer.queryNames( new ObjectName( "org.apache.flink.jobmanager.job.lastCheckpointSize:job_name=TestingJob,*"), null); assertThat(nameSet).hasSize(1); assertThat(mBeanServer.getAttribute(nameSet.iterator().next(), "Value")).isEqualTo(-1L); BlockingInvokable.unblock(); } finally { BlockingInvokable.unblock(); } }
Tests that metrics registered on the JobManager are actually accessible via JMX.
testJobManagerJMXMetricAccess
java
apache/flink
flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/runtime/jobmanager/JMXJobManagerMetricTest.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-jmx/src/test/java/org/apache/flink/runtime/jobmanager/JMXJobManagerMetricTest.java
Apache-2.0
public static Optional<MetricData> convertCounter( CollectionMetadata collectionMetadata, Long count, Long previousCount, MetricMetadata metricMetadata) { long delta = count - previousCount; if (delta < 0) { LOG.warn( "Non-monotonic counter {}: current count {} is less than previous count {}", metricMetadata.getName(), count, previousCount); return Optional.empty(); } Boolean isMonotonic = true; return Optional.of( ImmutableMetricData.createLongSum( collectionMetadata.getOtelResource(), INSTRUMENTATION_SCOPE_INFO, metricMetadata.getName(), "", "", ImmutableSumData.create( isMonotonic, AggregationTemporality.DELTA, Collections.singleton( ImmutableLongPointData.create( collectionMetadata.getStartEpochNanos(), collectionMetadata.getEpochNanos(), convertVariables(metricMetadata.getVariables()), delta))))); }
An adapter class which translates Flink metrics to Otel metrics, which can be exported with the standard Otel {@link io.opentelemetry.sdk.metrics.export.MetricExporter}s.
convertCounter
java
apache/flink
flink-metrics/flink-metrics-otel/src/main/java/org/apache/flink/metrics/otel/OpenTelemetryMetricAdapter.java
https://github.com/apache/flink/blob/master/flink-metrics/flink-metrics-otel/src/main/java/org/apache/flink/metrics/otel/OpenTelemetryMetricAdapter.java
Apache-2.0