| name | code_snippet | score |
|---|---|---|
flink_InPlaceMutableHashTable_updateTableEntryWithReduce_rdh | /**
* Looks up the table entry that has the same key as the given record, and updates it by
* performing a reduce step.
*
* @param record
* The record to update.
* @throws Exception
*/
public void updateTableEntryWithReduce(T record) throws Exception {
T match = prober.getMatchFor(record, reuse);
if (match == n... | 3.26 |
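The snippet above describes a lookup-then-reduce update: find the entry with the same key and combine it with the incoming record. As a rough sketch of that pattern without Flink's hash table (the class and method names below are invented for illustration, and the reduce step is assumed to be summation):

```java
import java.util.HashMap;
import java.util.Map;

public class ReduceUpdateSketch {
    // Look up the entry with the same key and update it via a reduce step,
    // or insert the record if no match exists.
    static void updateWithReduce(Map<String, Integer> table, String key, int value) {
        table.merge(key, value, Integer::sum); // reduce step: here, summation
    }

    public static void main(String[] args) {
        Map<String, Integer> table = new HashMap<>();
        updateWithReduce(table, "a", 1);
        updateWithReduce(table, "a", 2);
        updateWithReduce(table, "b", 5);
        System.out.println(table.get("a")); // 3
        System.out.println(table.get("b")); // 5
    }
}
```

Flink's in-place variant differs in that it rewrites the matched record in managed memory segments rather than in a heap map.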
flink_InPlaceMutableHashTable_overwriteRecordAt_rdh | /**
* Overwrites a record at the specified position. The record is read from a DataInputView
* (this will be the staging area). WARNING: The record must not be larger than the original
* record.
*
* @param pointer
* Points to the position to overwrite.
* @param input
* The DataInputView to read the record f... | 3.26 |
flink_InPlaceMutableHashTable_insertAfterNoMatch_rdh | /**
* This method can be called after getMatchFor returned null. It inserts the given record to
* the hash table. Important: The given record should have the same key as the record that
* was given to getMatchFor! WARNING: Don't do any modifications to the table between
* getMatchFor and insertAfterNoMatch!
*
* @... | 3.26 |
flink_InPlaceMutableHashTable_setReadPosition_rdh | // ----------------------- Input -----------------------
public void setReadPosition(long position) {
inView.setReadPosition(position);
} | 3.26 |
flink_InPlaceMutableHashTable_getMatchFor_rdh | /**
* Searches the hash table for the record with the given key. (If there are multiple
* matches, only one is returned.)
*
* @param record
* The record whose key we are searching for
* @param targetForMatch
* If a match is found, it will be written here
* @return targetForMatch if a match is found, ot... | 3.26 |
flink_RowTimeMiniBatchDeduplicateFunction_miniBatchDeduplicateOnRowTime_rdh | /**
* Processes element to deduplicate on keys with row time semantic, sends current element if it
* is last or first row, retracts previous element if needed.
*
* @param state
* state of function
* @param bufferedRows
* latest row received by deduplicate function
* @param out
* underlying collector
* @... | 3.26 |
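The deduplicate function above keeps the first or last row per key under event-time semantics, retracting the previously emitted row when a newer one replaces it. A minimal keep-last sketch of that idea, with invented class names and plain maps standing in for Flink's keyed state:

```java
import java.util.HashMap;
import java.util.Map;

public class RowTimeDedupSketch {
    // A simplified record: key, event (row) time, and payload.
    static final class Row {
        final String key; final long rowtime; final String payload;
        Row(String key, long rowtime, String payload) {
            this.key = key; this.rowtime = rowtime; this.payload = payload;
        }
    }

    // Keep-last deduplication on row time: for each key, retain the record with
    // the largest timestamp seen so far. In the real operator, replacing the
    // previous row would also emit a retraction for it downstream.
    static void process(Map<String, Row> state, Row incoming) {
        Row prev = state.get(incoming.key);
        if (prev == null || incoming.rowtime >= prev.rowtime) {
            state.put(incoming.key, incoming);
        }
    }

    public static void main(String[] args) {
        Map<String, Row> state = new HashMap<>();
        process(state, new Row("k", 1L, "old"));
        process(state, new Row("k", 2L, "new"));
        process(state, new Row("k", 1L, "late")); // late row is ignored
        System.out.println(state.get("k").payload); // new
    }
}
```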
flink_FlinkTypeSystem_m1_rdh | /**
* Java numeric will always have invalid precision/scale, use its default decimal
* precision/scale instead.
*/
private RelDataType m1(RelDataTypeFactory typeFactory, RelDataType relDataType) {
return RelDataTypeFactoryImpl.isJavaType(relDataType) ? typeFactory.decimalOf(relDataType) : relDataType;
} | 3.26 |
flink_CloseableIterable_empty_rdh | /**
* Returns an empty iterator.
*/
static <T> CloseableIterable<T> empty() {
return new CloseableIterable.Empty<>();
} | 3.26 |
flink_FlinkCompletableFutureAssert_m0_rdh | /**
* An equivalent of {@link #succeedsWithin(Duration)}, that doesn't rely on timeouts.
*
* @return a new assertion object on the future's result
*/
public ObjectAssert<T> m0() {
final T object = assertEventuallySucceeds(info, actual);
return new ObjectAssert<>(object);
} | 3.26 |
flink_FlinkCompletableFutureAssert_eventuallyFailsWith_rdh | /**
* An equivalent of {@link #failsWithin(Duration)}, that doesn't rely on timeouts.
*
* @param exceptionClass
* type of the exception we expect the future to complete with
* @return a new assertion instance on the future's exception.
* @param <E>
* type of the exception we expect the future to complete wit... | 3.26 |
flink_FlinkCompletableFutureAssert_withThrowableOfType_rdh | /**
* Checks that the underlying throwable is of the given type and returns a {@link ThrowableAssertAlternative} to chain further assertions on the underlying throwable.
*
* @param type
* the expected {@link Throwable} type
* @param <T>
* the expected {@link Throwable} type
* @return a {@link ThrowableAssert... | 3.26 |
flink_DataSet_join_rdh | /**
* Initiates a Join transformation.
*
* <p>A Join transformation joins the elements of two {@link DataSet DataSets} on key equality
* and provides multiple ways to combine joining elements into one DataSet.
*
* <p>This method returns a {@link JoinOperatorSets} on which one of the {@code where} methods
* can b... | 3.26 |
flink_DataSet_leftOuterJoin_rdh | /**
* Initiates a Left Outer Join transformation.
*
* <p>An Outer Join transformation joins two elements of two {@link DataSet DataSets} on key
* equality and provides multiple ways to combine joining elements into one DataSet.
*
* <p>Elements of the <b>left</b> DataSet (i.e. {@code this}) that do not have a matc... | 3.26 |
flink_DataSet_m0_rdh | /**
* Syntactic sugar for {@link #aggregate(Aggregations, int)} using {@link Aggregations#MAX} as
* the aggregation function.
*
* <p><strong>Note:</strong> This operation is not to be confused with {@link #maxBy(int...)},
* which selects one element with maximum value at the specified field... | 3.26 |
flink_DataSet_partitionCustom_rdh | /**
* Partitions a DataSet on the key returned by the selector, using a custom partitioner. This
* method takes the key selector to get the key to partition on, and a partitioner that accepts
* the key type.
*
* <p>Note: This method works only on single field keys, i.e. the selector cannot return tuples
* of fiel... | 3.26 |
flink_DataSet_coGroup_rdh | // --------------------------------------------------------------------------------------------
// Co-Grouping
// --------------------------------------------------------------------------------------------
/**
* Initiates a CoGroup transformation.
*
* <p>A CoGroup transformation combines the elements of two {@link ... | 3.26 |
flink_DataSet_aggregate_rdh | // --------------------------------------------------------------------------------------------
// Non-grouped aggregations
// --------------------------------------------------------------------------------------------
/**
* Applies an Aggregate transformation on a non-grouped {@link Tuple} {@link DataSet}.
*
* <p>... | 3.26 |
flink_DataSet_count_rdh | /**
* Convenience method to get the count (number of elements) of a DataSet.
*
* @return A long integer that represents the number of elements in the data set.
*/
public long count() throws Exception {
final String id = new AbstractID().toString();
output(new Utils.CountHelper<T>(id)).name("count()");
JobExe... | 3.26 |
flink_DataSet_fillInType_rdh | // --------------------------------------------------------------------------------------------
// Type Information handling
// --------------------------------------------------------------------------------------------
/**
* Tries to fill in the type information. Type information can be filled in later when the
* p... | 3.26 |
flink_DataSet_joinWithTiny_rdh | /**
* Initiates a Join transformation.
*
* <p>A Join transformation joins the elements of two {@link DataSet DataSets} on key equality
* and provides multiple ways to combine joining elements into one DataSet.
*
* <p>This method also gives the hint to the optimizer that the second DataSet to join is much
* small... | 3.26 |
flink_DataSet_crossWithTiny_rdh | /**
* Initiates a Cross transformation.
*
* <p>A Cross transformation combines the elements of two {@link DataSet DataSets} into one
* DataSet. It builds all pair combinations of elements of both DataSets, i.e., it builds a
* Cartesian product. This method also gives the hint to the optimizer that the second DataS... | 3.26 |
flink_DataSet_cross_rdh | // --------------------------------------------------------------------------------------------
// Cross
// --------------------------------------------------------------------------------------------
/**
* Continues a Join transformation and defines the {@link Tuple} fields of the second join
* {@link DataSet} that ... | 3.26 |
flink_DataSet_writeAsFormattedText_rdh | /**
* Writes a DataSet as text file(s) to the specified location.
*
* <p>For each element of the DataSet the result of {@link TextFormatter#format(Object)} is
* written.
*
* @param filePath
* The path pointing to the location the text file is written to.
* @param writeMode
* Control the behavior for existi... | 3.26 |
flink_DataSet_maxBy_rdh | /**
* Selects an element with maximum value.
*
* <p>The maximum is computed over the specified fields in lexicographical order.
*
* <p><strong>Example 1</strong>: Given a data set with elements <code>[0, 1], [1, 0]</code>,
* the results will be:
*
* <ul>
* <li><code>maxBy(0)</code>: <code>[1, 0]</code>
* ... | 3.26 |
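The maxBy contract above is lexicographic: compare the first specified field, and break ties with the next one. A small Flink-free sketch of that comparison over int tuples (class and method names invented for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MaxBySketch {
    // Lexicographic maxBy over selected tuple fields: compare fields[0] first,
    // then fields[1] on ties, and so on. Requires at least one field index.
    static int[] maxBy(List<int[]> data, int... fields) {
        Comparator<int[]> cmp = Comparator.comparingInt(t -> t[fields[0]]);
        for (int i = 1; i < fields.length; i++) {
            final int f = fields[i];
            cmp = cmp.thenComparingInt(t -> t[f]);
        }
        return data.stream().max(cmp).orElseThrow();
    }

    public static void main(String[] args) {
        List<int[]> data = Arrays.asList(new int[]{0, 1}, new int[]{1, 0});
        // Matches Example 1 from the Javadoc: maxBy(0) selects [1, 0].
        System.out.println(Arrays.toString(maxBy(data, 0))); // [1, 0]
    }
}
```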
flink_DataSet_filter_rdh | /**
* Applies a Filter transformation on a {@link DataSet}.
*
* <p>The transformation calls a {@link org.apache.flink.api.common.functions.RichFilterFunction} for each element of the DataSet and
* retains only those elements for which the function returns true. Elements for which the
* function returns false are fi... | 3.26 |
flink_DataSet_getType_rdh | /**
* Returns the {@link TypeInformation} for the type of this DataSet.
*
* @return The TypeInformation for the type of this DataSet.
* @see TypeInformation
*/
public TypeInformation<T> getType() {
if (type instanceof MissingTypeInfo) {
MissingTypeInfo typeInfo = ((MissingTypeInfo) (type));
thr... | 3.26 |
flink_DataSet_runOperation_rdh | // --------------------------------------------------------------------------------------------
// Custom Operators
// -------------------------------------------------------------------------------------------
/**
* Runs a {@link CustomUnaryOperation} on the data set. Custom operations are typically complex
* operat... | 3.26 |
flink_DataSet_collect_rdh | /**
* Convenience method to get the elements of a DataSet as a List. As DataSet can contain a lot
* of data, this method should be used with caution.
*
* @return A List containing the elements of the DataSet
*/
public List<T> collect() throws Exception {
final String v9 = new AbstractID().toString();
final... | 3.26 |
flink_DataSet_checkSameExecutionContext_rdh | // --------------------------------------------------------------------------------------------
// Utilities
// --------------------------------------------------------------------------------------------
protected static void checkSameExecutionContext(DataSet<?> set1, DataSet<?> set2) {
if (set1.getExecutionEnviro... | 3.26 |
flink_DataSet_first_rdh | /**
* Returns a new set containing the first n elements in this {@link DataSet}.
*
* @param n
* The desired number of elements.
* @return A ReduceGroupOperator that represents the DataSet containing the elements.
*/
public GroupReduceOperator<T, T> first(int n) {
if (n < 1) {
throw new InvalidProgra... | 3.26 |
flink_DataSet_joinWithHuge_rdh | /**
* Initiates a Join transformation.
*
* <p>A Join transformation joins the elements of two {@link DataSet DataSets} on key equality
* and provides multiple ways to combine joining elements into one DataSet.
*
* <p>This method also gives the hint to the optimizer that the second DataSet to join is much
* large... | 3.26 |
flink_DataSet_mapPartition_rdh | /**
* Applies a Map-style operation to the entire partition of the data. The function is called
* once per parallel partition of the data, and the entire partition is available through the
* given Iterator. The number of elements that each instance of the MapPartition function sees
* is non-deterministic and depend... | 3.26 |
flink_DataSet_print_rdh | /**
* Writes a DataSet to the standard output stream (stdout).
*
* <p>For each element of the DataSet the result of {@link Object#toString()} is written.
*
* @param sinkIdentifier
* The string to prefix the output with.
* @return The DataSink that writes the DataSet.
* @deprecated Use {@link #printOnTaskManag... | 3.26 |
flink_DataSet_output_rdh | /**
* Emits a DataSet using an {@link OutputFormat}. This method adds a data sink to the program.
* Programs may have multiple data sinks. A DataSet may also have multiple consumers (data sinks
* or transformations) at the same time.
*
* @para... | 3.26 |
flink_DataSet_partitionByRange_rdh | /**
* Range-partitions a DataSet using the specified KeySelector.
*
* <p><b>Important:</b>This operation requires an extra pass over the DataSet to compute the
* range boundaries and shuffles the whole DataSet over the network. This can take significant
* amount of time.
*
* @param keyExtractor
* The KeyExtra... | 3.26 |
flink_DataSet_iterateDelta_rdh | /**
* Initiates a delta iteration. A delta iteration is similar to a regular iteration (as started
* by {@link #iterate(int)}), but maintains state across the individual iteration steps. The
* Solution set, which represents the current state at the beginning of each iteration can be
* obtained via {@link org.apache.... | 3.26 |
flink_DataSet_union_rdh | // --------------------------------------------------------------------------------------------
// Union
// --------------------------------------------------------------------------------------------
/**
* Creates a union of this DataSet with another DataSet. The other DataSet must be of the same
* data type.
*
*... | 3.26 |
flink_DataSet_printOnTaskManager_rdh | /**
* Writes a DataSet to the standard output streams (stdout) of the TaskManagers that execute the
* program (or more specifically, the data sink operators). On a typical cluster setup, the data
* will appear in the TaskManagers' <i>.out</i> files.
*
* <p>To print the data to the console or stdout stream of the c... | 3.26 |
flink_DataSet_sum_rdh | /**
* Syntactic sugar for aggregate (SUM, field).
*
* @param field
* The index of the Tuple field on which the aggregation function is applied.
* @return An AggregateOperator that represents the summed DataSet.
* @see org.apache.flink.api.java.operators.AggregateOperator
*/
public AggregateOperator<T> sum(int ... | 3.26 |
flink_DataSet_minBy_rdh | /**
* Selects an element with minimum value.
*
* <p>The minimum is computed over the specified fields in lexicographical order.
*
* <p><strong>Example 1</strong>: Given a data set with elements <code>[0, 1], [1, 0]</code>,
* the results will be:
*
* <ul>
* <li><code>minBy(0)</code>: <code>[0, 1]</code>
* ... | 3.26 |
flink_DataSet_writeAsCsv_rdh | /**
* Writes a {@link Tuple} DataSet as CSV file(s) to the specified location with the specified
* field and line delimiters.
*
* <p><b>Note: Only a Tuple DataSet can be written as a CSV file.</b> For each Tuple field the
* result of {@link Object#toString()} is written.
*
* @param filePath
* The path pointing t... | 3.26 |
flink_DataSet_sortPartition_rdh | /**
* Locally sorts the partitions of the DataSet on the extracted key in the specified order. The
* DataSet can be sorted on multiple values by returning a tuple from the KeySelector.
*
* <p>Note that no additional sort keys can be appended to KeySelector sort keys. To sort the
* partitions by multiple values u... | 3.26 |
flink_DataSet_fullOuterJoin_rdh | /**
* Initiates a Full Outer Join transformation.
*
* <p>An Outer Join transformation joins two elements of two {@link DataSet DataSets} on key
* equality and provides multiple ways to combine joining elements into one DataSet.
*
* <p>Elements of <b>both</b> DataSets that do not have a matching element on the opp... | 3.26 |
flink_DataSet_distinct_rdh | /**
* Returns a distinct set of a {@link DataSet}.
*
* <p>If the input is a {@link org.apache.flink.api.common.typeutils.CompositeType} (Tuple or
* Pojo type), distinct is performed on all fields and each field must be a key type
*
* @return A DistinctOperator that represents the distinct DataSet.
*/
public Dist... | 3.26 |
flink_DataSet_partitionByHash_rdh | /**
* Partitions a DataSet using the specified KeySelector.
*
* <p><b>Important:</b>This operation shuffles the whole DataSet over the network and can take
* significant amount of time.
*
* @param keyExtractor
* The KeyExtractor with which the DataSet is hash-partitioned.
* @return The partitioned DataSet.
*... | 3.26 |
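partitionByHash assigns each record to a parallel partition based on the hash of its extracted key. The core index computation can be sketched without Flink as follows (class and method names invented; the modulo trick keeps the index non-negative even for negative hash codes):

```java
import java.util.function.Function;

public class HashPartitionSketch {
    // Compute the target partition of a record from the hash of its extracted key,
    // mirroring the basic idea behind hash partitioning with a key selector.
    static <T, K> int partitionOf(T record, Function<T, K> keyExtractor, int numPartitions) {
        int h = keyExtractor.apply(record).hashCode();
        return (h % numPartitions + numPartitions) % numPartitions; // non-negative index
    }

    public static void main(String[] args) {
        // Key the string by its length; with 4 partitions, "hello" (length 5) lands in partition 1.
        int p = partitionOf("hello", s -> s.length(), 4);
        System.out.println(p); // 1
    }
}
```

Flink's actual partitioner additionally scrambles the hash to spread skewed key spaces, but the record-to-partition mapping follows this shape.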
flink_DataSet_flatMap_rdh | /**
* Applies a FlatMap transformation on a {@link DataSet}.
*
* <p>The transformation calls a {@link org.apache.flink.api.common.functions.RichFlatMapFunction} for each element of the DataSet.
* Each FlatMapFunction call can return any number of elements including none.
*
* @param flatMapper
* The FlatMapFunc... | 3.26 |
flink_DataSet_rebalance_rdh | /**
* Enforces a re-balancing of the DataSet, i.e., the DataSet is evenly distributed over all
* parallel instances of the following task. This can help to improve performance in case of
* heavy data skew and compute intensive operations.
*
* <p><b>Important:</b>This operation shuffles the whole DataSet over the n... | 3.26 |
flink_DataSet_write_rdh | /**
* Writes a DataSet using a {@link FileOutputFormat} to a specified location. This method adds a
* data sink to the program.
*
* @param outputFormat
* The FileOutputFormat to write the DataSet.
* @param filePath
* The path to the location where the DataSet is written.
* @param writeMode
* The mode of ... | 3.26 |
flink_DataSet_reduceGroup_rdh | /**
* Applies a GroupReduce transformation on a non-grouped {@link DataSet}.
*
* <p>The transformation calls a {@link org.apache.flink.api.common.functions.RichGroupReduceFunction} once with the full DataSet.
* The GroupReduceFunction can iterate over all elements of the DataSet and emit any number of
* output ele... | 3.26 |
flink_DataSet_rightOuterJoin_rdh | /**
* Initiates a Right Outer Join transformation.
*
* <p>An Outer Join transformation joins two elements of two {@link DataSet DataSets} on key
* equality and provides multiple ways to combine joining elements into one DataSet.
*
* <p>Elements of the <b>right</b> DataSet (i.e. {@code other}) that do not have a m... | 3.26 |
flink_DataSet_reduce_rdh | /**
* Applies a Reduce transformation on a non-grouped {@link DataSet}.
*
* <p>The transformation consecutively calls a {@link org.apache.flink.api.common.functions.RichReduceFunction} until only a single element remains
* which is the result of the transformation. A ReduceFunction combines two elements into one
*... | 3.26 |
flink_DataSet_map_rdh | // --------------------------------------------------------------------------------------------
// Filter & Transformations
// --------------------------------------------------------------------------------------------
/**
* Applies a Map transformation on this DataSet.
*
* <p>The transformation calls a {@link org.... | 3.26 |
flink_DataSet_writeAsText_rdh | /**
* Writes a DataSet as text file(s) to the specified location.
*
* <p>For each element of the DataSet the result of {@link Object#toString()} is written.
*
* @param filePath
* The path pointing to the location the text file is written to.
* @param writeMode
* Control the behavior for existing files. Opti... | 3.26 |
flink_DataSet_printToErr_rdh | /**
* Writes a DataSet to the standard error stream (stderr).
*
* <p>For each element of the DataSet the result of {@link Object#toString()} is written.
*
* @param sinkIdentifier
* The string to prefix the output with.
* @return The DataSink that writes the DataSet.
* @deprecated Use {@link #printOnTaskManage... | 3.26 |
flink_DataSet_min_rdh | /**
* Syntactic sugar for {@link #aggregate(Aggregations, int)} using {@link Aggregations#MIN} as
* the aggregation function.
*
* <p><strong>Note:</strong> This operation is not to be confused with {@link #minBy(int...)},
* which selects one element with the minimum value at the specified field positions.
*
* @p... | 3.26 |
flink_DataSet_combineGroup_rdh | /**
* Applies a GroupCombineFunction on a non-grouped {@link DataSet}. A CombineFunction is similar
* to a GroupReduceFunction but does not perform a full data exchange. Instead, the
* CombineFunction calls the combine method once per partition for combining a group of results.
* This operator is suitable for combi... | 3.26 |
flink_DataSet_project_rdh | // --------------------------------------------------------------------------------------------
// Projections
// --------------------------------------------------------------------------------------------
/**
* Applies a Project transformation on a {@link Tuple} {@link DataSet}.
*
* <p><b>Note: Only Tuple DataSets... | 3.26 |
flink_DataSet_crossWithHuge_rdh | /**
* Initiates a Cross transformation.
*
* <p>A Cross transformation combines the elements of two {@link DataSet DataSets} into one
* DataSet. It builds all pair combinations of elements of both DataSets, i.e., it builds a
* Cartesian product. This method also gives the hint to the optimizer that the second DataS... | 3.26 |
flink_KryoSerializerSnapshotData_writeSnapshotData_rdh | // --------------------------------------------------------------------------------------------
// Write
// --------------------------------------------------------------------------------------------
void writeSnapshotData(DataOutputView out) throws IOException {
writeTypeClass(out);
writeKryoRegistrations(out, kryo... | 3.26 |
flink_KryoSerializerSnapshotData_readTypeClass_rdh | // --------------------------------------------------------------------------------------------
// Read
// --------------------------------------------------------------------------------------------
private static <T> Class<T> readTypeClass(DataInputView in, ClassLoader userCodeClassLoader) throws IOException {
retur... | 3.26 |
flink_KryoSerializerSnapshotData_createFrom_rdh | // --------------------------------------------------------------------------------------------
// Factories
// --------------------------------------------------------------------------------------------
static <T> KryoSerializerSnapshotData<T> createFrom(Class<T> typeClass, LinkedHashMap<Class<?>, SerializableSeriali... | 3.26 |
flink_KryoSerializerSnapshotData_getTypeClass_rdh | // --------------------------------------------------------------------------------------------
// Getters
// --------------------------------------------------------------------------------------------
Class<T> getTypeClass() {
return typeClass;
} | 3.26 |
flink_Printer_close_rdh | /**
* Closes the resources of the {@link Printer}.
*/
@Override
default void close() {
} | 3.26 |
flink_Printer_createClearCommandPrinter_rdh | // --------------------------------------------------------------------------------------------
static ClearCommandPrinter createClearCommandPrinter() {
return ClearCommandPrinter.INSTANCE;
} | 3.26 |
flink_AvroSerializationSchema_forSpecific_rdh | /**
* Creates {@link AvroSerializationSchema} that serializes {@link SpecificRecord} using provided
* schema.
*
* @param tClass
* the type to be serialized
* @return an AvroSerializationSchema that serializes instances of the given type
*/
public static <T extends SpecificRecord> AvroSerializationSchema<T> forSpecific(Class<T> tClass, AvroE... | 3.26 |
flink_AvroSerializationSchema_forGeneric_rdh | /**
* Creates {@link AvroSerializationSchema} that serializes {@link GenericRecord} using provided
* schema.
*
* @param schema
* the schema that will be used for serialization
* @return an AvroSerializationSchema that serializes records with the given schema
*/
public static AvroSerializationSchema<GenericRecord> forGeneric(Schema schema, Avr... | 3.26 |
flink_TestcontainersSettings_logger_rdh | /**
* Sets the {@code logger} and returns a reference to this Builder enabling method
* chaining.
*
* @param logger
* The {@code logger} to set.
* @return A reference to this Builder.
*/
public Builder logger(Logger logger) {
this.logger = logger;
return this;
} | 3.26 |
flink_TestcontainersSettings_getNetwork_rdh | /**
*
* @return The network.
*/
public Network getNetwork() {
return network;
} | 3.26 |
flink_TestcontainersSettings_getEnvVars_rdh | /**
*
* @return The environment variables.
*/
public Map<String, String> getEnvVars() {
return envVars;
} | 3.26 |
flink_TestcontainersSettings_build_rdh | /**
* Returns a {@code TestcontainersSettings} built from the parameters previously set.
*
* @return A {@code TestcontainersSettings} built with the parameters of this {@code TestcontainersSettings.Builder}
*/
public TestcontainersSettings build() {
return new TestcontainersSettings(this);
} | 3.26 |
flink_TestcontainersSettings_network_rdh | /**
* Sets the {@code network} and returns a reference to this Builder enabling method
* chaining.
*
* @param network
* The {@code network} to set.
* @return A reference to this Builder.
*/
public Builder network(Network network) {
this.network = network;
return this;
} | 3.26 |
flink_TestcontainersSettings_getDependencies_rdh | /**
*
* @return The dependencies (other containers).
*/
public Collection<GenericContainer<?>> getDependencies() {
return dependencies;
} | 3.26 |
flink_TestcontainersSettings_builder_rdh | /**
* A new builder for {@code TestcontainersSettings}.
*
* @return The builder.
*/
public static Builder builder() {
return new Builder();
} | 3.26 |
flink_TestcontainersSettings_getLogger_rdh | /**
*
* @return The logger.
*/
public Logger getLogger() {
return logger;
} | 3.26 |
flink_TestcontainersSettings_environmentVariable_rdh | /**
* Sets an environment variable and returns a reference to this Builder enabling method
* chaining.
*
* @param name
* The name of the environment variable.
* @param value
* The value of the environment variable.
* @return A reference to this Builder.
*/
public Builder environmentVariable(String name, St... | 3.26 |
flink_TestcontainersSettings_baseImage_rdh | /**
* Sets the {@code baseImage} and returns a reference to this Builder enabling method
* chaining.
*
* @param baseImage
* The {@code baseImage} to set.
* @return A reference to this Builder.
*/
public Builder baseImage(String baseImage) {
this.baseImage = baseImage;
return this;
} | 3.26 |
flink_TestcontainersSettings_m0_rdh | /**
*
* @return The base image.
*/
public String m0() {
return baseImage;
} | 3.26 |
flink_RowtimeAttributeDescriptor_getAttributeName_rdh | /**
* Returns the name of the rowtime attribute.
*/
public String getAttributeName() {
return attributeName;
} | 3.26 |
flink_RowtimeAttributeDescriptor_getTimestampExtractor_rdh | /**
* Returns the {@link TimestampExtractor} for the attribute.
*/
public TimestampExtractor getTimestampExtractor() {
return timestampExtractor;
} | 3.26 |
flink_DelimitedInputFormat_fillBuffer_rdh | /**
* Fills the read buffer with bytes read from the file starting from an offset.
*/
private boolean fillBuffer(int offset) throws IOException {
int maxReadLength = this.readBuffer.length - offset;
// special case for reading the whole s... | 3.26 |
flink_DelimitedInputFormat_loadGlobalConfigParams_rdh | /**
*
* @deprecated Please use {@code loadConfigParameters(Configuration config)}
*/
@Deprecated
protected static void loadGlobalConfigParams() {
loadConfigParameters(GlobalConfiguration.loadConfiguration());
} | 3.26 |
flink_DelimitedInputFormat_configure_rdh | // --------------------------------------------------------------------------------------------
// Pre-flight: Configuration, Splits, Sampling
// --------------------------------------------------------------------------------------------
/**
* Configures this input format by reading the path to the file from the conf... | 3.26 |
flink_DelimitedInputFormat_setCharset_rdh | /**
* Set the name of the character set used for the row delimiter. This is also used by subclasses
* to interpret field delimiters, comment strings, and for configuring {@link FieldParser}s.
*
* <p>These fields are interpreted when set. Changing the charset thereafter may cause
* unexpected results.
*
* @param ... | 3.26 |
flink_DelimitedInputFormat_close_rdh | /**
* Closes the input by releasing all buffers and closing the file input stream.
*
* @throws IOException
* Thrown, if the closing of the file stream causes an I/O error.
*/
@Override
public void close() throws IOException {
this.wrapBuffer = null;
this.readBuffer = null;
super.close();
} | 3.26 |
flink_DelimitedInputFormat_getCurrentState_rdh | // --------------------------------------------------------------------------------------------
// Checkpointing
// --------------------------------------------------------------------------------------------
@PublicEvolving
@Override
public Long getCurrentState() throws IOException {
return this.offset;
} | 3.26 |
flink_DelimitedInputFormat_readLine_rdh | // --------------------------------------------------------------------------------------------
protected final boolean readLine() throws IOException {
if ((this.stream == null) || this.overLimit) {
return false;
}
int countInWrapBuffer = 0;
// position of matching positions in the delimiter byt... | 3.26 |
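readLine scans the read buffer for the record delimiter, tracking how far into the delimiter bytes the current match has progressed. A self-contained sketch of the delimiter search (class and method names invented; this version re-checks the full delimiter at each offset rather than keeping partial-match state across buffer refills, which the real format must do):

```java
public class DelimiterScanSketch {
    // Returns the index just past the first occurrence of a multi-byte delimiter
    // in the buffer, or -1 if the delimiter does not occur.
    static int indexAfterDelimiter(byte[] buf, byte[] delim) {
        outer:
        for (int i = 0; i + delim.length <= buf.length; i++) {
            for (int j = 0; j < delim.length; j++) {
                if (buf[i + j] != delim[j]) {
                    continue outer; // mismatch: try the next start offset
                }
            }
            return i + delim.length; // position right after the delimiter
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] buf = "abc\r\ndef".getBytes();
        System.out.println(indexAfterDelimiter(buf, "\r\n".getBytes())); // 5
    }
}
```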
flink_DelimitedInputFormat_initializeSplit_rdh | /**
* Initialization method that is called after opening or reopening an input split.
*
* @param split
* Split that was opened or reopened
* @param state
* Checkpointed state if the split was reopened
* @throws IOException
*/
protected void initializeSplit(FileInputSplit split, @Nullable Long state) throws ... | 3.26 |
flink_DelimitedInputFormat_m0_rdh | /**
* Opens the given input split. This method opens the input stream to the specified file,
* allocates read buffers and positions the stream at the correct position, making sure that any
* partial record at the beginning is skipped.
*
* @param split
* The input split to open.
* @see org.apache.flink.api.comm... | 3.26 |
flink_DelimitedInputFormat_getCharset_rdh | /**
* Get the character set used for the row delimiter. This is also used by subclasses to
* interpret field delimiters, comment strings, and for configuring {@link FieldParser}s.
*
* @return the charset
*/
@PublicEvolving
public Charset getCharset() {
if (this.charset == null) {
this.charset = Charset... | 3.26 |
flink_DelimitedInputFormat_reachedEnd_rdh | /**
* Checks whether the current split is at its end.
*
* @return True, if the split is at its end, false otherwise.
*/
@Override
public boolean reachedEnd() {
return this.end;
} | 3.26 |
flink_CepOperator_processEvent_rdh | /**
* Process the given event by giving it to the NFA and outputting the produced set of matched
* event sequences.
*
* @param nfaState
* Our NFAState object
* @param event
* The current event to be processed
* @param timestamp
* The timestamp of the event
*/
private void processEvent(NFAState nfaState,... | 3.26 |
flink_CepOperator_hasNonEmptySharedBuffer_rdh | // //////////////////// Testing Methods //////////////////////
@VisibleForTesting
boolean hasNonEmptySharedBuffer(KEY key) throws Exception {
setCurrentKey(key);
return !partialMatches.isEmpty();
} | 3.26 |
flink_CepOperator_advanceTime_rdh | /**
* Advances the time for the given NFA to the given timestamp. This means that no more events
* with timestamp <b>lower</b> than the given timestamp should be passed to the NFA. This can
* lead to pruning and timeouts.
*/
private void advanceTime(NFAState nfaState, long timestamp) throws Exception {
try (SharedB... | 3.26 |
flink_Transformation_declareManagedMemoryUseCaseAtSlotScope_rdh | /**
* Declares that this transformation contains certain slot scope managed memory use case.
*
* @param managedMemoryUseCase
* The use case that this transformation declares needing managed
* memory for.
*/
public void declareManagedMemoryUseCaseAtSlotScope(ManagedMemoryUseCase managedMemoryUseCase) {
Pre... | 3.26 |
flink_Transformation_setCoLocationGroupKey_rdh | /**
* <b>NOTE:</b> This is an internal undocumented feature for now. It is not clear whether this
* will be supported and stable in the long term.
*
* <p>Sets the key that identifies the co-location group. Operators with the same co-location
* key will have their corresponding subtasks placed into the same slot by... | 3.26 |
flink_Transformation_getName_rdh | /**
* Returns the name of this {@code Transformation}.
*/
public String getName() {
return name;
} | 3.26 |
flink_Transformation_getId_rdh | /**
* Returns the unique ID of this {@code Transformation}.
*/
public int getId() {
return id;
} | 3.26 |
flink_Transformation_getManagedMemorySlotScopeUseCases_rdh | /**
* Get slot scope use cases that this transformation needs managed memory for.
*/
public Set<ManagedMemoryUseCase> getManagedMemorySlotScopeUseCases() {
return Collections.unmodifiableSet(managedMemorySlotScopeUseCases);
} | 3.26 |
flink_Transformation_setUidHash_rdh | /**
* Sets a user-provided hash for this operator. This will be used AS IS to create the
* JobVertexID.
*
* <p>The user provided hash is an alternative to the generated hashes, that is considered when
* identifying an operator through the default hash mechanics fails (e.g. because of changes
* between Flink ver... | 3.26 |
flink_Transformation_setName_rdh | /**
* Changes the name of this {@code Transformation}.
*/
public void setName(String name) {
this.name = name;
} | 3.26 |