| name (string, 12-178 chars) | code_snippet (string, 8-36.5k chars) | score (float64, 3.26-3.68) |
|---|---|---|
flink_StreamExecutionEnvironment_enableCheckpointing_rdh | /**
* Enables checkpointing for the streaming job. The distributed state of the streaming dataflow
* will be periodically snapshotted. In case of a failure, the streaming dataflow will be
* restarted from the latest completed checkpoint. This method selects {@link CheckpointingMode#EXACTLY_ONCE} guarantees.
*
* <p... | 3.26 |
flink_StreamExecutionEnvironment_fromSource_rdh | /**
* Adds a data {@link Source} to the environment to get a {@link DataStream}.
*
* <p>The result will be either a bounded data stream (that can be processed in a batch way) or
* an unbounded data stream (that must be processed in a streaming way), based on the
* boundedness property of the source, as defined by ... | 3.26 |
flink_StreamExecutionEnvironment_socketTextStream_rdh | /**
* Creates a new data stream that contains the strings received infinitely from a socket.
 * Received strings are decoded by the system's default character set, using "\n" as the delimiter.
* The reader is terminated immediately when the socket is down.
*
* @param hostname
* The host name which a server socket bin... | 3.26 |
flink_StreamExecutionEnvironment_clean_rdh | /**
* Returns a "closure-cleaned" version of the given function. Cleans only if closure cleaning is
* not disabled in the {@link org.apache.flink.api.common.ExecutionConfig}
*/
@Internal
public <F>... | 3.26 |
flink_StreamExecutionEnvironment_getConfiguration_rdh | /**
* Gives read-only access to the underlying configuration of this environment.
*
* <p>Note that the returned configuration might not be complete. It only contains options that
* have initialized the environment via {@link #StreamExecutionEnvironment(Configuration)} or
* options that are not represented in dedic... | 3.26 |
flink_StreamExecutionEnvironment_setMaxParallelism_rdh | /**
* Sets the maximum degree of parallelism defined for the program. The upper limit (inclusive)
* is Short.MAX_VALUE + 1.
*
* <p>The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also
* defines the number of key groups used for partitioned state.
*
* @param maxParallelism
* ... | 3.26 |
flink_StreamExecutionEnvironment_addOperator_rdh | /**
* Adds an operator to the list of operators that should be executed when calling {@link #execute}.
*
 * <p>When calling {@link #execute()} only the operators that were previously added to the list
* are executed.
*
* <p>This is not meant to be used by users. The API methods that create operators must call
* ... | 3.26 |
flink_StreamExecutionEnvironment_m3_rdh | /**
* Creates a {@link RemoteStreamEnvironment}. The remote environment sends (parts of) the
* program to a cluster for execution. Note that all file paths used in the program must be
* accessible from the cluster. The execution will use the specified parallelism.
*
* @param host
* The host name or address of t... | 3.26 |
flink_StreamExecutionEnvironment_getCachedFiles_rdh | /**
* Get the list of cached files that were registered for distribution among the task managers.
*/
public List<Tuple2<String, DistributedCache.DistributedCacheEntry>> getCachedFiles() {
return cacheFile;
} | 3.26 |
flink_StreamExecutionEnvironment_setDefaultSavepointDirectory_rdh | /**
 * Sets the default savepoint directory, where savepoints will be written to if none is explicitly
 * provided when triggering a savepoint.
*
* @return This StreamExecutionEnvironment itself, to allow chaining of function calls.
* @see #getDefaultSavepointDirectory()
*/
@PublicEvolving
public StreamExecutionEnvironment
setDef... | 3.26 |
flink_StreamExecutionEnvironment_setDefaultLocalParallelism_rdh | /**
* Sets the default parallelism that will be used for the local execution environment created by
* {@link #createLocalEnvironment()}.
*
* @param parallelism
* The parallelism to use as the default local parallelism.
*/
@PublicEvolving
public static void setDefaultLocalParallelism(int parallelism) {
defau... | 3.26 |
flink_StreamExecutionEnvironment_fromData_rdh | /**
 * Creates a new data stream that contains the given elements. The framework will determine the
 * type according to the base type supplied by the user. The elements must be instances of that
 * base type or of one of its subclasses. The sequence of elements must not be empty.
*
* <p>NOTE: This creates a non-parallel data stre... | 3.26 |
flink_StreamExecutionEnvironment_createRemoteEnvironment_rdh | /**
* Creates a {@link RemoteStreamEnvironment}. The remote environment sends (parts of) the
* program to a cluster for execution. Note that all file paths used in the program must be
* accessible from the cluster. The execution will use the specified parallelism.
*
* @param host
* The host name or address of t... | 3.26 |
flink_StreamExecutionEnvironment_setStateBackend_rdh | /**
 * Sets the state backend that describes how to store operator state. It defines the data structures
* that hold state during execution (for example hash tables, RocksDB, or other data stores).
*
* <p>State managed by the state backend includes both keyed state that is accessibl... | 3.26 |
flink_StreamExecutionEnvironment_isUnalignedCheckpointsEnabled_rdh | /**
* Returns whether unaligned checkpoints are enabled.
*/
@PublicEvolving
public boolean isUnalignedCheckpointsEnabled() {
return checkpointCfg.isUnalignedCheckpointsEnabled();
} | 3.26 |
flink_StreamExecutionEnvironment_isChainingEnabled_rdh | /**
* Returns whether operator chaining is enabled.
*
* @return {@code true} if chaining is enabled, false otherwise.
*/
@PublicEvolving
public boolean isChainingEnabled() {
return isChainingEnabled;
} | 3.26 |
flink_StreamExecutionEnvironment_setBufferTimeout_rdh | /**
* Sets the maximum time frequency (milliseconds) for the flushing of the output buffers. By
* default the output buffers flush frequently to provide low latency and to aid smooth
* developer experience. Setting the parameter can result in three logical modes:
*
* <ul>
* <li>A positive integer triggers flush... | 3.26 |
flink_StreamExecutionEnvironment_setParallelism_rdh | /**
* Sets the parallelism for operations executed through this environment. Setting a parallelism
* of x here will cause all operators (such as map, batchReduce) to run with x parallel
* instances. This method overrides the default parallelism for this environment. The {@link LocalStreamEnvironment} uses by default... | 3.26 |
flink_StreamExecutionEnvironment_getStateBackend_rdh | /**
* Gets the state backend that defines how to store and checkpoint state.
*
* @see #setStateBackend(StateBackend)
*/
@PublicEvolving
public StateBackend getStateBackend() {
return defaultStateBackend;
} | 3.26 |
flink_StreamExecutionEnvironment_close_rdh | /**
* Close and clean up the execution environment. All the cached intermediate results will be
* released physically.
*/
@Override
public void close() throws Exception {
for (AbstractID id : cachedTransfo... | 3.26 |
flink_StreamExecutionEnvironment_registerType_rdh | /**
* Registers the given type with the serialization stack. If the type is eventually serialized
* as a POJO, then the type is registered with the POJO serializer. If the type ends up being
* serialized with Kryo, then it will be registered at Kryo to make sure that only tags are
* written.
*
* @param type
* ... | 3.26 |
flink_StreamExecutionEnvironment_registerTypeWithKryoSerializer_rdh | /**
* Registers the given Serializer via its class as a serializer for the given type at the
* KryoSerializer.
*
* @param type
* The class of the types serialized with the given serializer.
* @param serializerClass
* The class of the serializer to use.
*/
@SuppressWarnings("rawtypes")
public void registerTy... | 3.26 |
flink_StreamExecutionEnvironment_createLocalEnvironment_rdh | /**
* Creates a {@link LocalStreamEnvironment}. The local execution environment will run the
* program in a multi-threaded fashion in the same JVM as the environment was created in.
*
* @param configuration
* Pass a custom configuration into the cluster
* @return A local execution environment with the specified... | 3.26 |
flink_StreamExecutionEnvironment_fromParallelCollection_rdh | // private helper for passing different names
private <OUT> DataStreamSource<OUT> fromParallelCollection(SplittableIterator<OUT> iterator, TypeInformation<OUT> typeInfo, String operatorName) {
return addSource(new FromSplittableIteratorFunction<>(iterator), operatorName, typeInfo, Boundedness.BOUNDED);
}
/**
* Re... | 3.26 |
flink_StreamExecutionEnvironment_createLocalEnvironmentWithWebUI_rdh | /**
* Creates a {@link LocalStreamEnvironment} for local program execution that also starts the web
* monitoring UI.
*
* <p>The local execution environment will run the program in a multi-threaded fashion in the
* same JVM as the environment was created in. It will use the parallelism specified in the
* parameter... | 3.26 |
flink_StreamExecutionEnvironment_addDefaultKryoSerializer_rdh | /**
* Adds a new Kryo default serializer to the Runtime.
*
* @param type
* The class of the types serialized with the given serializer.
* @param serializerClass
* The class of the serializer to use.
 */
public void addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass) {
co... | 3.26 |
flink_StreamExecutionEnvironment_registerJobListener_rdh | /**
 * Registers a {@link JobListener} in this environment. The {@link JobListener} will be notified
 * of specific job status changes.
*/
@PublicEvolving
... | 3.26 |
flink_StreamExecutionEnvironment_setRestartStrategy_rdh | /**
* Sets the restart strategy configuration. The configuration specifies which restart strategy
* will be used for the execution graph in case of a restart.
*
* @param restartStrategyConfiguration
* Restart strategy configuration to be set
*/
@PublicEvolving
public void setRestartStrategy(RestartStrategies.Re... | 3.26 |
flink_StreamExecutionEnvironment_enableChangelogStateBackend_rdh | /**
* Enable the change log for current state backend. This change log allows operators to persist
* state changes in a very fine-grained manner. Currently, the change log only applies to keyed
* state, so non-keyed operator state and channel state are persisted as usual. The 'state' here
* refers to 'keyed state'.... | 3.26 |
flink_StreamExecutionEnvironment_execute_rdh | /**
* Triggers the program execution. The environment will execute all parts of the program that
* have resulted in a "sink" operation. Sink operations are for example printing results or
* forwarding them to a message queue.
*
* @param streamGraph
* the stream graph representing the transformations
* @return ... | 3.26 |
flink_StreamExecutionEnvironment_m0_rdh | /**
* Creates a data stream from the given non-empty collection. The type of the data stream is
* that of the elements in the collection.
*
* <p>The framework will try and determine the exact type from the collection elements. In case
* of generic elements, it may be necessary to manually supply the type informati... | 3.26 |
flink_StreamExecutionEnvironment_getStreamGraph_rdh | /**
* Getter of the {@link StreamGraph} of the streaming job with the option to clear previously
* registered {@link Transformation transformations}. Clearing the transformations allows, for
* example, to not re-execute the same operations when calling {@link #execute()} multiple
* times.
*
* @param clearTransfor... | 3.26 |
flink_StreamExecutionEnvironment_configure_rdh | /**
 * Sets all relevant options contained in the {@link ReadableConfig}, such as {@link StreamPipelineOptions#TIME_CHARACTERISTIC}. It will reconfigure {@link StreamExecutionEnvironment}, {@link ExecutionConfig} and {@link CheckpointConfig}.
*
* <p>It will change the value of a setting only if a corresponding op... | 3.26 |
flink_StreamExecutionEnvironment_getRestartStrategy_rdh | /**
* Returns the specified restart strategy configuration.
*
* @return The restart strategy configuration to be used
*/
@PublicEvolving
public RestartStrategyConfiguration getRestartStrategy() {
return config.getRestartStrategy();
}
/**
* Sets the number of times that failed tasks are re-executed. A value of... | 3.26 |
flink_StreamExecutionEnvironment_fromCollection_rdh | /**
* Creates a data stream from the given iterator.
*
* <p>Because the iterator will remain unmodified until the actual execution happens, the type
* of data returned by the iterator must be given explicitly in the form of the type
* information. This method is useful for cases where the type is generic. In that ... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_extractUsedInputFields_rdh | /**
* Extracts input fields which are used in the Calc node and the ChangelogNormalize node.
*/
private int[] extractUsedInputFields(StreamPhysicalCalc calc, Set<Integer> primaryKeyIndices) {
RexProgram program = calc.getProgram();
List<RexNode> projectsAndCondition = program.getProjectList().stream().map(pr... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_adjustInputRef_rdh | /**
 * Adjusts the {@code expr} field indices according to the field index {@code mapping}.
*/
private RexNode adjustInputRef(RexNode expr, Map<Integer, Integer> mapping) {
return expr.accept(new RexShuttle() {
@Override
        public RexNode visitInputRef(RexInputRef inputRef) {
Inte... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_partitionPrimaryKeyPredicates_rdh | /**
 * Separates the given {@code predicates} into filters which affect only the primary key and
 * everything else.
*/
private void partitionPrimaryKeyPredicates(List<RexNode> predicates, Set<Integer> primaryKeyIndices, List<RexNode> primaryKeyPredicates, List<RexNode> remainingPredicates) {
for (RexNode predicat... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_pushCalcThroughChangelogNormalize_rdh | /**
 * Pushes {@code primaryKeyPredicates} and a projection of the used fields into the {@link StreamPhysicalChangelogNormalize}.
 */
private StreamPhysicalChangelogNormalize pushCalcThroughChangelogNormalize(RelOptRuleCall call, List<RexNode> primaryKeyPredicates, int[] usedInputFields) {
final StreamPhysicalChangelogNormaliz... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_buildFieldsMapping_rdh | /**
 * Builds the field reference mapping from old field indices to new field indices after projection.
*/
private Map<Integer, Integer> buildFieldsMapping(int[] projectedInputRefs) {
final Map<Integer, Integer> fieldsOldToNewIndexMapping = new HashMap<>();
for (int i = 0; i < projectedInputRefs.length; i++) {... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_transformWithRemainingPredicates_rdh | /**
 * Transforms the {@link RelOptRuleCall} to use {@code changelogNormalize} as the new input to
 * a {@link StreamPhysicalCalc} which uses {@code predicates} for the condition.
*/
private void transformWithRemainingPredicates(RelOptRuleCall call, StreamPhysicalChangelogNormalize changelogNormalize, List<RexNode> ... | 3.26 |
flink_PushCalcPastChangelogNormalizeRule_projectUsedFieldsWithConditions_rdh | /**
 * Builds a new {@link StreamPhysicalCalc} on the input node with the given {@code conditions}
 * and a projection of the used fields.
*/
private StreamPhysicalCalc projectUsedFieldsWithConditions(RelBuilder relBuilder, RelNode input, List<RexNode> conditions, int[] usedFields) {
final RelDataType inputRowType = inp... | 3.26 |
flink_FlinkConfMountDecorator_getClusterSidePropertiesMap_rdh | /**
* Get properties map for the cluster-side after removal of some keys.
*/
private Map<String, String> getClusterSidePropertiesMap(Configuration flinkConfig) {
        final Configuration clusterSideConfig = flinkConfig.clone();
        // Remove some configuration options that should not be taken to cluster side.
cluster... | 3.26 |
flink_TypeInformation_m0_rdh | /**
* Checks whether this type can be used as a key for sorting. The order produced by sorting this
* type must be meaningful.
*/
@PublicEvolving
public boolean m0() {
return isKeyType();
} | 3.26 |
flink_TypeInformation_of_rdh | /**
* Creates a TypeInformation for a generic type via a utility "type hint". This method can be
* used as follows:
*
* <pre>{@code TypeInformation<Tuple2<String, Long>> info = TypeInformation.of(new TypeHint<Tuple2<String, Long>>(){});}</pre>
*
* @param typeHint
* The hint for the generic type.
* @param <T>
... | 3.26 |
flink_SinkJoinerPlanNode_setCosts_rdh | // --------------------------------------------------------------------------------------------
public void setCosts(Costs nodeCosts) {
// the plan enumeration logic works as for regular two-input-operators, which is important
// because of the branch handling logic. it does pick redistributing network channels... | 3.26 |
flink_SinkJoinerPlanNode_getDataSinks_rdh | // --------------------------------------------------------------------------------------------
public void getDataSinks(List<SinkPlanNode> sinks) {
final PlanNode in1 = this.input1.getSource();
        final PlanNode in2 = this.input2.getSource();
if (in1 instanceof SinkPlanNode) {
sinks.add(((SinkPl... | 3.26 |
flink_EnvironmentInformation_getJvmStartupOptionsArray_rdh | /**
* Gets the system parameters and environment parameters that were passed to the JVM on startup.
*
* @return The options passed to the JVM on startup.
*/
public static String[] getJvmStartupOptionsArray() {
try {
RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean();
List<String> options... | 3.26 |
flink_EnvironmentInformation_getOpenFileHandlesLimit_rdh | /**
* Tries to retrieve the maximum number of open file handles. This method will only work on
* UNIX-based operating systems with Sun/Oracle Java versions.
*
* <p>If the number of max open file handles cannot be determined, this method returns {@code -1}.
*
* @return The limit of open file handles, or {@code -1}... | 3.26 |
flink_EnvironmentInformation_logEnvironmentInfo_rdh | /**
* Logs information about the environment, like code revision, current user, Java version, and
* JVM parameters.
*
* @param log
* The logger to log the information to.
* @param componentName
... | 3.26 |
flink_EnvironmentInformation_getGitCommitId_rdh | /**
*
* @return The last known commit id of this version of the software.
*/
public static String getGitCommitId() {
return getVersionsInstance().gitCommitId;
} | 3.26 |
flink_EnvironmentInformation_getMaxJvmHeapMemory_rdh | /**
* The maximum JVM heap size, in bytes.
*
* <p>This method uses the <i>-Xmx</i> value of the JVM, if set. If not set, it returns (as a
* heuristic) 1/4th of the physical memory size.
*
* @return The maximum JVM heap size, in bytes.
*/
public static long getMaxJvmHeapMemory() {
final long maxMemory = Run... | 3.26 |
flink_EnvironmentInformation_m2_rdh | /**
* Gets an estimate of the size of the free heap memory. The estimate may vary, depending on the
* current level of memory fragmentation and the number of dead objects. For a better (but more
* heavy-weight) estimate, use {@link #getSizeOfFreeHeapMemoryWithDefrag()}.
*
* @return An estimate of the size of the f... | 3.26 |
flink_EnvironmentInformation_getJvmVersion_rdh | /**
* Gets the version of the JVM in the form "VM_Name - Vendor - Spec/Version".
*
* @return The JVM version.
*/
public static String getJvmVersion() {
try {
final RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean();
return (((((bean.getVmName() + " - ") + bean.getVmVendor()) + " - ") + be... | 3.26 |
flink_EnvironmentInformation_getBuildTime_rdh | /**
*
* @return The Instant this version of the software was built.
*/
public static Instant getBuildTime() {
return getVersionsInstance().gitBuildTime;
} | 3.26 |
flink_EnvironmentInformation_getGitCommitTime_rdh | /**
*
* @return The Instant of the last commit of this code.
*/
public static Instant getGitCommitTime() {
return getVersionsInstance().gitCommitTime;
} | 3.26 |
flink_EnvironmentInformation_m1_rdh | /**
* Gets an estimate of the size of the free heap memory.
*
* <p>NOTE: This method is heavy-weight. It triggers a garbage collection to reduce
* fragmentation and get a better estimate at the size of free memory. It is typically more
* accurate than the plain version {@link #getSizeOfFreeHeapMemory()}.
*
* @re... | 3.26 |
flink_EnvironmentInformation_getHadoopUser_rdh | /**
* Gets the name of the user that is running the JVM.
*
* @return The name of the user that is running the JVM.
*/
public static String getHadoopUser() {
try {
Class<?> ugiClass = Class.forName("org.apache.hadoop.security.UserGroupInformation", false, EnvironmentInformation.class.getClassLoader());
... | 3.26 |
flink_EnvironmentInformation_getVersion_rdh | /**
* Returns the version of the code as String.
*
* @return The project version string.
*/
public static String getVersion() {
return getVersionsInstance().projectVersion;
} | 3.26 |
flink_EnvironmentInformation_getGitCommitIdAbbrev_rdh | /**
*
* @return The last known abbreviated commit id of this version of the software.
*/
public static String getGitCommitIdAbbrev() {
return getVersionsInstance().gitCommitIdAbbrev;
} | 3.26 |
flink_CombinedWatermarkStatus_setWatermark_rdh | /**
* Returns true if the watermark was advanced, that is if the new watermark is larger than
* the previous one.
*
* <p>Setting a watermark will clear the idleness flag.
*/
public boolean setWatermark(long watermark) {
    this.idle = false;
final boolean v3 = watermark > this.watermark;
if (v3) {
this... | 3.26 |
flink_CombinedWatermarkStatus_m0_rdh | /**
* Checks whether we need to update the combined watermark.
*
 * <p><b>NOTE:</b> It can update the {@link #isIdle()} status.
*
* @return true, if the combined watermark changed
*/
public boolean m0() {
long minimumOverAllOutputs = Long.MAX_VALUE;
// if we don't have any outputs minimumOverAllOutputs is not v... | 3.26 |
flink_CombinedWatermarkStatus_getWatermark_rdh | /**
* Returns the current watermark timestamp. This will throw {@link IllegalStateException} if
* the output is currently idle.
*/
private long getWatermark() {
    checkState(!idle, "Output is idle.");
return watermark;
... | 3.26 |
flink_AvroDeserializationSchema_forGeneric_rdh | /**
* Creates {@link AvroDeserializationSchema} that produces {@link GenericRecord} using provided
* schema.
*
* @param schema
* schema of produced records
* @param encoding
* Avro serialization approach to use for decoding
* @return deserialized record in form of {@link GenericRecord}
*/
public static Avr... | 3.26 |
flink_AvroDeserializationSchema_m0_rdh | /**
* Creates {@link AvroDeserializationSchema} that produces classes that were generated from avro
* schema.
*
* @param tClass
* class of record to be produced
* @return deserialized record
*/
public static <T extends SpecificRecord> AvroDeserializationSchema<T> m0(Class<T> tClass) {
return forSpecific(tC... | 3.26 |
flink_AvroDeserializationSchema_forSpecific_rdh | /**
* Creates {@link AvroDeserializationSchema} that produces classes that were generated from avro
* schema.
*
* @param tClass
* class of record to be produced
* @param encoding
* Avro serialization approach to use for decoding
* @return deserialized record
*/
public static <T extends SpecificRecord> Avro... | 3.26 |
flink_CharValueComparator_supportsSerializationWithKeyNormalization_rdh | // --------------------------------------------------------------------------------------------
// unsupported normalization
// --------------------------------------------------------------------------------------------
@Override
public boolean supportsSerializationWithKeyNormalization() {
return false;
} | 3.26 |
flink_WorksetNode_setCandidateProperties_rdh | // --------------------------------------------------------------------------------------------
public void setCandidateProperties(GlobalProperties gProps, LocalProperties lProps, Channel initialInput) {
if (this.cachedPlans != null) {
throw new IllegalStateException();
} else {
WorksetPlanNode... | 3.26 |
flink_WorksetNode_getOperator_rdh | // --------------------------------------------------------------------------------------------
/**
* Gets the contract object for this data source node.
*
* @return The contract.
*/
@Override
public WorksetPlaceHolder<?> getOperator() {
return ((WorksetPlaceHolder<?>) (super.getOperator()));
} | 3.26 |
flink_DefaultDelegationTokenManager_obtainDelegationTokens_rdh | /**
* Obtains new tokens in a one-time fashion and leaves it up to the caller to distribute them.
*/
@Override
public void obtainDelegationTokens(DelegationTokenContainer container) throws Exception {
LOG.info("Obtaining delegation tokens");
    obtainDelegationTokensAndGetNextRenewal(container);
    LOG.info("Delegat... | 3.26 |
flink_DefaultDelegationTokenManager_start_rdh | /**
 * Creates a recurring task which obtains new tokens and automatically distributes them to
* task managers.
*/
@Override
public void start(Listener listener)
throws Exception {
checkNotNull(scheduledExecutor, "Scheduled executor must not be nu... | 3.26 |
flink_DefaultDelegationTokenManager_stop_rdh | /**
 * Stops the recurring token obtain task.
*/
@Override
public void stop() {
LOG.info("Stopping credential renewal");
stopTokensUpdate();
LOG.info("Stopped credential renewal");
} | 3.26 |
flink_GroupReduceOperatorBase_setCombinable_rdh | /**
* Marks the group reduce operation as combinable. Combinable operations may pre-reduce the data
* before the actual group reduce operations. Combinable user-defined functions must implement
* the interface {@link GroupCombineFunction}.
*
* @param combinable
* Flag to mark the group reduce operation as combi... | 3.26 |
flink_GroupReduceOperatorBase_setGroupOrder_rdh | // --------------------------------------------------------------------------------------------
/**
* Sets the order of the elements within a reduce group.
*
* @param order
* The order for the elements in a reduce group.
*/
public void setGroupOrder(Ordering order) {
this.groupOrder = order;
} | 3.26 |
flink_GroupReduceOperatorBase_executeOnCollections_rdh | // --------------------------------------------------------------------------------------------
@Override
protected List<OUT> executeOnCollections(List<IN> inputData, RuntimeContext ctx, ExecutionConfig executionConfig) throws Exception {
GroupReduceFunction<IN, OUT> function = this.userFunction.get... | 3.26 |
flink_StreamElementQueueEntry_completeExceptionally_rdh | /**
* Not supported. Exceptions must be handled in the AsyncWaitOperator.
*/
@Override
default void completeExceptionally(Throwable error) {
throw new UnsupportedOperationException("This result future should only be used to set completed results.");
} | 3.26 |
flink_WorksetIterationPlanNode_setCosts_rdh | // --------------------------------------------------------------------------------------------
public void setCosts(Costs nodeCosts) {
// add the costs from the step function
nodeCosts.addCosts(this.solutionSetDeltaPlanNode.getCumulativeCostsShare());
nodeCosts.addCosts(this.nextWorkSetPlanNode.getCumulativeCostsShar... | 3.26 |
flink_WorksetIterationPlanNode_getSerializerForIterationChannel_rdh | // --------------------------------------------------------------------------------------------
public TypeSerializerFactory<?> getSerializerForIterationChannel() {
return serializerForIterationChannel;
} | 3.26 |
flink_WorksetIterationPlanNode_getIterationNode_rdh | // --------------------------------------------------------------------------------------------
public WorksetIterationNode getIterationNode() {
if (this.template instanceof WorksetIterationNode) {
return ((WorksetIterationNode) (this.template));
} else {
throw new RuntimeException();
}
} | 3.26 |
flink_WorksetIterationPlanNode_getWorksetSerializer_rdh | // --------------------------------------------------------------------------------------------
public TypeSerializerFactory<?> getWorksetSerializer() {
return worksetSerializer;
} | 3.26 |
flink_WorksetIterationPlanNode_mergeBranchPlanMaps_rdh | /**
 * Merging can only take place after the solutionSetDelta and nextWorkset PlanNodes have been set,
 * because they can also contain some of the branching nodes.
*/
@Override
protected void mergeBranchPlanMaps(Map<OptimizerNode, PlanNode> branchPlan1, Map<OptimizerNode, PlanNode> branchPlan2) {
} | 3.26 |
flink_PropertiesUtil_getLong_rdh | /**
 * Gets a long from the given properties. If the value is not a valid long, this method only logs
 * the error and returns the default.
*
* @param config
* Properties
* @param key
* key in Properties
* @param defaultValue
* default value if value is not set
* @return default or value of key
*/
public static long getLong(Properties config, String key, long de... | 3.26 |
flink_PropertiesUtil_getInt_rdh | /**
 * Gets an integer from the given properties. This method throws an exception if the value is not
 * a valid integer.
*
* @param config
* Properties
* @param key
* key in Properties
* @param defaultValue
* default value if value is not set
* @return default or value of key
*/
public static int getInt(Properties config, Stri... | 3.26 |
flink_GSChecksumWriteChannel_m0_rdh | /**
* Closes the channel and validates the checksum against the storage. Manually verifying
* checksums for streaming uploads is recommended by Google, see here:
* https://cloud.google.com/storage/docs/streaming
*
* @throws IOException
* On underlying failure or non-matching checksums
*/
public void m0()
thro... | 3.26 |
flink_GSChecksumWriteChannel_write_rdh | /**
* Writes bytes to the underlying channel and updates checksum.
*
* @param content
* The content to write
* @param start
* The start position
* @param length
* The number of bytes to write
* @return The number of bytes written
* @throws IOException
* On underlying failure
*/
public int write(byte... | 3.26 |
flink_HiveParserQBExpr_containsQueryWithoutSourceTable_rdh | /**
 * Returns true if the query block contains any query or subquery without a source table, such as
 * {@code select current_user()} or {@code select current_database()}.
*
* @return true, if the query block contains any query without a source table
*/
public boolean containsQueryWithoutSourceTable() {
if (qb != null) {
return qb... | 3.26 |
flink_TypeHint_hashCode_rdh | // ------------------------------------------------------------------------
@Override
public int hashCode() {
return typeInfo.hashCode();
} | 3.26 |
flink_TypeHint_getTypeInfo_rdh | // ------------------------------------------------------------------------
/**
* Gets the type information described by this TypeHint.
*
* @return The type information described by this TypeHint.
 */
public TypeInformation<T> getTypeInfo() {
    return typeInfo;
} | 3.26 |
flink_FlinkAssertions_chainOfCauses_rdh | /**
* You can use this method in combination with {@link AbstractThrowableAssert#extracting(Function, AssertFactory)} to perform assertions on a chain
* of causes. For example:
*
* <pre>{@code assertThat(throwable)
* .extracting(FlinkAssertions::chainOfCauses, FlinkAssertions.STREAM_THROWABLE)}</pre>
*
* @re... | 3.26 |
flink_FlinkAssertions_anyCauseMatches_rdh | /**
* Shorthand to assert the chain of causes includes a {@link Throwable} matching a specific
* {@link Class} and containing the provided message. Same as:
*
* <pre>{@code assertThatChainOfCauses(throwable)
* .anySatisfy(
* cause ->
* assertThat(cause)
* .hasMessageCo... | 3.26 |
flink_FlinkAssertions_m0_rdh | /**
* Shorthand to assert chain of causes. Same as:
*
* <pre>{@code assertThat(throwable)
* .extracting(FlinkAssertions::chainOfCauses, FlinkAssertions.STREAM_THROWABLE)}</pre>
*/
public static ListAssert<Throwable> m0(Throwable root) {
return assertThat(root).extracting(FlinkAssertions::chainOfCauses,
... | 3.26 |
flink_FlinkAssertions_assertThatFuture_rdh | /**
* Create assertion for {@link java.util.concurrent.CompletionStage}.
*
* @param actual
* the actual value.
* @param <T>
* the type of the value contained in the {@link java.util.concurrent.CompletionStage}.
* @return the created assertion object.
*/
public static <T> FlinkCompletableFutureAssert<T> asse... | 3.26 |
flink_BlobLibraryCacheManager_getNumberOfReferenceHolders_rdh | /**
* Gets the number of tasks holding {@link ClassLoader} references for the given job.
*
* @param jobId
* ID of a job
* @return number of reference holders
*/
int getNumberOfReferenceHolders(JobID jobId) {
synchronized(lockObject) {
LibraryCacheEntry entry = cacheEntries.get(jobId);
return... | 3.26 |
flink_BlobLibraryCacheManager_m1_rdh | /**
* Release the class loader to ensure any file descriptors are closed and the cached
* libraries are deleted immediately.
*/
private void m1() {
runReleaseHooks();
if (!wrapsSystemClassLoader) {
try {
((Closeable) (classLoader)).close();
} catch (IOException e) {
LO... | 3.26 |
flink_BlobLibraryCacheManager_getNumberOfManagedJobs_rdh | /**
* Returns the number of registered jobs that this library cache manager handles.
*
* @return number of jobs (irrespective of the actual number of tasks per job)
*/
int getNumberOfManagedJobs() {
synchronized(lockObject) {
return cacheEntries.size();
}
} | 3.26 |
flink_LogicalTypeDuplicator_instantiateStructuredBuilder_rdh | // --------------------------------------------------------------------------------------------
private Builder instantiateStructuredBuilder(StructuredType structuredType) {
final Optional<ObjectIdentifier> identifier = structuredType.getObjectIdentifier();
final Optional<Class<?>> implementationClass = structu... | 3.26 |
flink_StreamTableSinkFactory_createTableSink_rdh | /**
 * Only creates a stream table sink.
*/
@Override
default TableSink<T> createTableSink(Map<String, String> properties) {
StreamTableSink<T> sink = createStreamTableSink(properties);
if (sink == null) {
throw new ValidationException("Please override 'createTableSink(Context)' method.");
}
ret... | 3.26 |
flink_FileInputSplit_hashCode_rdh | // --------------------------------------------------------------------------------------------
@Override
public int hashCode() {
return getSplitNumber() ^ (file == null ? 0 : file.hashCode());
} | 3.26 |
flink_FileInputSplit_getPath_rdh | // --------------------------------------------------------------------------------------------
/**
* Returns the path of the file containing this split's data.
*
* @return the path of the file containing this split's data.
*/
public Path getPath() {
return file;
} | 3.26 |
flink_BufferAvailabilityListener_notifyPriorityEvent_rdh | /**
* Called when the first priority event is added to the head of the buffer queue.
*
* @param prioritySequenceNumber
* the sequence number that identifies the priority buffer.
*/
default void notifyPriorityEvent(int prioritySequenceNumber) {
} | 3.26 |