code stringlengths 25 201k | docstring stringlengths 19 96.2k | func_name stringlengths 0 235 | language stringclasses 1 value | repo stringlengths 8 51 | path stringlengths 11 314 | url stringlengths 62 377 | license stringclasses 7 values |
|---|---|---|---|---|---|---|---|
public int getAndResetUnannouncedCredit() {
return unannouncedCredit.getAndSet(0);
} | Gets the unannounced credit and resets it to <tt>0</tt> atomically.
@return The credit that has not yet been announced to the sender. | getAndResetUnannouncedCredit | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | Apache-2.0 |
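The atomic drain above can be sketched in isolation. `UnannouncedCreditSketch` below is a hypothetical stand-in for the channel's credit field, not Flink's actual class; it only illustrates why `getAndSet(0)` makes the read-and-reset step race-free against concurrent increments.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the get-and-reset pattern: credit is accumulated by
// one thread and drained atomically by another, so no increment can be lost
// between the read and the reset.
class UnannouncedCreditSketch {
    private final AtomicInteger unannouncedCredit = new AtomicInteger(0);

    void addCredit(int credit) {
        unannouncedCredit.addAndGet(credit);
    }

    int getAndResetUnannouncedCredit() {
        // getAndSet(0) returns the current value and zeroes it in one step.
        return unannouncedCredit.getAndSet(0);
    }
}
```

A separate read followed by a write would allow credit added in between to be silently discarded; the single atomic exchange avoids that window.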
@Nullable
public Buffer requestBuffer() {
return bufferManager.requestBuffer();
} | Requests a buffer from the input channel directly for receiving network data. It should always
return an available buffer in credit-based mode unless the channel has been released.
@return The available buffer. | requestBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | Apache-2.0 |
public void onSenderBacklog(int backlog) throws IOException {
notifyBufferAvailable(bufferManager.requestFloatingBuffers(backlog + initialCredit));
} | Receives the backlog from the producer's buffer response. If the number of available buffers
is less than backlog + initialCredit, it requests floating buffers from the buffer
manager and then announces the unannounced credit to the producer.
@param backlog The number of unsent buffers in the producer's subpartition. | onSenderBacklog | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | Apache-2.0 |
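The docstring's condition can be reduced to simple arithmetic: the channel aims to hold `backlog + initialCredit` buffers and requests the shortfall as floating buffers. The sketch below is illustrative only; the method and class names are not Flink's API.

```java
// Illustrative only: how many floating buffers to request so that the channel
// ends up holding backlog + initialCredit buffers in total. A non-positive
// shortfall means no request is needed.
class BacklogSketch {
    static int numFloatingBuffersToRequest(
            int backlog, int initialCredit, int numAvailableBuffers) {
        int target = backlog + initialCredit;
        return Math.max(0, target - numAvailableBuffers);
    }
}
```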
public void onBuffer(Buffer buffer, int sequenceNumber, int backlog, int subpartitionId)
throws IOException {
boolean recycleBuffer = true;
try {
if (expectedSequenceNumber != sequenceNumber) {
onError(new BufferReorderingException(expectedSequenceNumber, sequenceNumber));
return;
}
if (buffer.getDataType().isBlockingUpstream()) {
onBlockingUpstream();
checkArgument(backlog == 0, "Illegal number of backlog: %s, should be 0.", backlog);
}
final boolean wasEmpty;
boolean firstPriorityEvent = false;
synchronized (receivedBuffers) {
NetworkActionsLogger.traceInput(
"RemoteInputChannel#onBuffer",
buffer,
inputGate.getOwningTaskName(),
channelInfo,
channelStatePersister,
sequenceNumber);
// Similar to notifyBufferAvailable(), make sure that we never add a buffer
// after releaseAllResources() released all buffers from receivedBuffers
// (see above for details).
if (isReleased.get()) {
return;
}
wasEmpty = receivedBuffers.isEmpty();
SequenceBuffer sequenceBuffer =
new SequenceBuffer(buffer, sequenceNumber, subpartitionId);
DataType dataType = buffer.getDataType();
if (dataType.hasPriority()) {
firstPriorityEvent = addPriorityBuffer(sequenceBuffer);
recycleBuffer = false;
} else {
receivedBuffers.add(sequenceBuffer);
recycleBuffer = false;
if (dataType.requiresAnnouncement()) {
firstPriorityEvent = addPriorityBuffer(announce(sequenceBuffer));
}
}
totalQueueSizeInBytes += buffer.getSize();
final OptionalLong barrierId =
channelStatePersister.checkForBarrier(sequenceBuffer.buffer);
if (barrierId.isPresent() && barrierId.getAsLong() > lastBarrierId) {
// checkpoint was not yet started by task thread,
// so remember the numbers of buffers to spill for the time when
// it will be started
lastBarrierId = barrierId.getAsLong();
lastBarrierSequenceNumber = sequenceBuffer.sequenceNumber;
}
channelStatePersister.maybePersist(buffer);
++expectedSequenceNumber;
}
if (firstPriorityEvent) {
notifyPriorityEvent(sequenceNumber);
}
if (wasEmpty) {
notifyChannelNonEmpty();
}
if (backlog >= 0) {
onSenderBacklog(backlog);
}
} finally {
if (recycleBuffer) {
buffer.recycleBuffer();
}
}
} | Handles the input buffer. This method takes over ownership of the buffer and is fully
responsible for cleaning it up, both on the happy path and in case of an error. | onBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java | Apache-2.0 |
@Override
protected InputChannel toInputChannelInternal() throws IOException {
RemoteInputChannel remoteInputChannel =
new RemoteInputChannel(
inputGate,
getChannelIndex(),
partitionId,
consumedSubpartitionIndexSet,
connectionId,
connectionManager,
initialBackoff,
maxBackoff,
partitionRequestListenerTimeout,
networkBuffersPerChannel,
numBytesIn,
numBuffersIn,
channelStateWriter);
remoteInputChannel.setup();
return remoteInputChannel;
} | An input channel reads recovered state from previous unaligned checkpoint snapshots and then
converts into {@link RemoteInputChannel} finally. | toInputChannelInternal | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteRecoveredInputChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteRecoveredInputChannel.java | Apache-2.0 |
private void handlePriorityEventAvailable(IndexedInputGate inputGate) {
queueInputGate(inputGate, true);
} | A mapping from input gate index to (logical) channel index offset. Valid channel indexes go
from 0 (inclusive) to the total number of input channels (exclusive). | handlePriorityEventAvailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/UnionInputGate.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/UnionInputGate.java | Apache-2.0 |
public Optional<T> get(int subpartitionId, int bufferIndex) {
// first of all, try to get region in memory.
Optional<T> regionOpt =
getCachedRegionContainsTargetBufferIndex(subpartitionId, bufferIndex);
if (regionOpt.isPresent()) {
T region = regionOpt.get();
checkNotNull(
// this is needed for cache entry remove algorithm like LRU.
internalCache.getIfPresent(
new CachedRegionKey(subpartitionId, region.getFirstBufferIndex())));
return Optional.of(region);
} else {
// try to find the target region and load it into the cache if found.
spilledRegionManager.findRegion(subpartitionId, bufferIndex, true);
return getCachedRegionContainsTargetBufferIndex(subpartitionId, bufferIndex);
}
} | Get the region that contains the target bufferIndex and belongs to the target subpartition.
@param subpartitionId the subpartition that the target buffer belongs to.
@param bufferIndex the index of the target buffer.
@return If the target region can be found in memory or on disk, an optional containing the target
region. Otherwise, {@code Optional#empty()}. | get | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | Apache-2.0 |
public void put(int subpartition, List<T> fileRegions) {
TreeMap<Integer, T> treeMap = subpartitionFirstBufferIndexRegions.get(subpartition);
for (T region : fileRegions) {
internalCache.put(
new CachedRegionKey(subpartition, region.getFirstBufferIndex()), PLACEHOLDER);
treeMap.put(region.getFirstBufferIndex(), region);
}
} | Put regions into the cache.
@param subpartition the id of the subpartition the regions belong to.
@param fileRegions regions to be cached. | put | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | Apache-2.0 |
private Optional<T> getCachedRegionContainsTargetBufferIndex(
int subpartitionId, int bufferIndex) {
return Optional.ofNullable(
subpartitionFirstBufferIndexRegions
.get(subpartitionId)
.floorEntry(bufferIndex))
.map(Map.Entry::getValue)
.filter(internalRegion -> internalRegion.containBuffer(bufferIndex));
} | Get the in-memory cached region that contains the target buffer.
@param subpartitionId the subpartition that the target buffer belongs to.
@param bufferIndex the index of the target buffer.
@return If the target region is cached in memory, an optional containing the target region.
Otherwise, {@code Optional#empty()}. | getCachedRegionContainsTargetBufferIndex | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexCache.java | Apache-2.0 |
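The `floorEntry` lookup above relies on regions being keyed by their first buffer index: the candidate region for a buffer is the entry with the greatest key not exceeding the buffer index, and it matches only if the buffer actually falls inside the region's range. A self-contained sketch (with a simplified `Region` stand-in, not Flink's type):

```java
import java.util.Optional;
import java.util.TreeMap;

// Sketch of the floorEntry-based region lookup used by the index cache.
class RegionLookupSketch {
    static class Region {
        final int firstBufferIndex;
        final int numBuffers;

        Region(int firstBufferIndex, int numBuffers) {
            this.firstBufferIndex = firstBufferIndex;
            this.numBuffers = numBuffers;
        }

        boolean containBuffer(int bufferIndex) {
            return bufferIndex >= firstBufferIndex
                    && bufferIndex < firstBufferIndex + numBuffers;
        }
    }

    static Optional<Region> find(TreeMap<Integer, Region> regions, int bufferIndex) {
        // Greatest first-buffer-index <= bufferIndex, filtered by range check.
        return Optional.ofNullable(regions.floorEntry(bufferIndex))
                .map(java.util.Map.Entry::getValue)
                .filter(region -> region.containBuffer(bufferIndex));
    }
}
```

The range filter is what turns "nearest region at or below" into "region that actually contains the buffer": a buffer index falling in a gap between regions yields an empty result.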
private List<Tuple2<T, Long>> readRegionGroup(long offset, int numRegions) throws IOException {
List<Tuple2<T, Long>> regionAndOffsets = new ArrayList<>();
for (int i = 0; i < numRegions; i++) {
T region = fileDataIndexRegionHelper.readRegionFromFile(channel, offset);
regionAndOffsets.add(Tuple2.of(region, offset));
offset += region.getSize();
}
return regionAndOffsets;
} | Read a region group from the index file.
@param offset the offset of this region group.
@param numRegions the number of regions in this region group.
@return List of all regions belonging to this region group, each paired with its offset. | readRegionGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexSpilledRegionManagerImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileDataIndexSpilledRegionManagerImpl.java | Apache-2.0 |
public static ByteBuffer allocateAndConfigureBuffer(int bufferSize) {
ByteBuffer buffer = ByteBuffer.allocateDirect(bufferSize);
buffer.order(ByteOrder.nativeOrder());
return buffer;
} | Allocate a buffer of the specified size and configure it to native byte order.
@param bufferSize the size of the buffer to allocate.
@return a native-order buffer of the expected size. | allocateAndConfigureBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | Apache-2.0 |
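The explicit `order(...)` call matters because a freshly allocated `ByteBuffer` is big-endian by default on every platform; data written with native order must be read back with a buffer configured the same way. A minimal sketch (class name is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// A direct buffer starts out BIG_ENDIAN regardless of the platform, so the
// byte order must be set explicitly before native-order I/O.
class NativeOrderSketch {
    static ByteBuffer allocateNativeOrder(int bufferSize) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(bufferSize);
        buffer.order(ByteOrder.nativeOrder());
        return buffer;
    }
}
```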
public static void writeFixedSizeRegionToFile(
FileChannel channel, ByteBuffer regionBuffer, FileDataIndexRegionHelper.Region region)
throws IOException {
regionBuffer.clear();
regionBuffer.putInt(region.getFirstBufferIndex());
regionBuffer.putInt(region.getNumBuffers());
regionBuffer.putLong(region.getRegionStartOffset());
regionBuffer.putLong(region.getRegionEndOffset());
regionBuffer.flip();
BufferReaderWriterUtil.writeBuffers(channel, regionBuffer.capacity(), regionBuffer);
} | Write {@link FixedSizeRegion} to {@link FileChannel}.
<p>Note that this type of region's length is fixed.
@param channel the file's channel to write.
@param regionBuffer the buffer to write {@link FixedSizeRegion}'s header.
@param region the region to be written to channel. | writeFixedSizeRegionToFile | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | Apache-2.0 |
public static FixedSizeRegion readFixedSizeRegionFromFile(
FileChannel channel, ByteBuffer regionBuffer, long fileOffset) throws IOException {
regionBuffer.clear();
BufferReaderWriterUtil.readByteBufferFully(channel, regionBuffer, fileOffset);
regionBuffer.flip();
int firstBufferIndex = regionBuffer.getInt();
int numBuffers = regionBuffer.getInt();
long firstBufferOffset = regionBuffer.getLong();
long lastBufferEndOffset = regionBuffer.getLong();
return new FixedSizeRegion(
firstBufferIndex, firstBufferOffset, lastBufferEndOffset, numBuffers);
} | Read {@link FixedSizeRegion} from {@link FileChannel}.
<p>Note that this type of region's length is fixed.
@param channel the channel to read.
@param regionBuffer the buffer to read {@link FixedSizeRegion}'s header.
@param fileOffset the file offset to start reading from.
@return the {@link FixedSizeRegion} read from this channel. | readFixedSizeRegionFromFile | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/index/FileRegionWriteReadUtils.java | Apache-2.0 |
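The write and read methods above imply a fixed 24-byte header layout: two ints (first buffer index, number of buffers) followed by two longs (region start and end offsets). The round-trip sketch below captures only that layout, decoupled from any `FileChannel`; names are illustrative.

```java
import java.nio.ByteBuffer;

// Hedged sketch of the fixed-size region header: int, int, long, long, read
// back in exactly the write order.
class RegionHeaderSketch {
    static final int HEADER_SIZE = Integer.BYTES * 2 + Long.BYTES * 2; // 24 bytes

    static void write(ByteBuffer buf, int firstBufferIndex, int numBuffers,
                      long startOffset, long endOffset) {
        buf.clear();
        buf.putInt(firstBufferIndex);
        buf.putInt(numBuffers);
        buf.putLong(startOffset);
        buf.putLong(endOffset);
        buf.flip(); // make the header readable (or writable to a channel)
    }

    static long[] read(ByteBuffer buf) {
        // Java evaluates array initializers left to right, matching the
        // write order field by field.
        return new long[] {buf.getInt(), buf.getInt(), buf.getLong(), buf.getLong()};
    }
}
```

Because the header length is constant, a region at a known file offset can be read without any preceding length field, which is what makes the "fixed size" variant convenient for random access.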
@Override
public String toString() {
return "TieredStorageTopicId{" + "ID=" + StringUtils.byteToHexString(bytes) + '}';
} | Identifier of a topic.
<p>A topic is equivalent to an intermediate data set in Flink. | toString | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageTopicId.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageTopicId.java | Apache-2.0 |
public static float getNumBuffersTriggerFlushRatio() {
return DEFAULT_NUM_BUFFERS_TRIGGER_FLUSH_RATIO;
} | When the number of buffers that have been requested exceeds this threshold, trigger the
flushing operation in each {@link TierProducerAgent}.
@return flush ratio. | getNumBuffersTriggerFlushRatio | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | Apache-2.0 |
public static int getAccumulatorExclusiveBuffers() {
return DEFAULT_NUM_BUFFERS_USE_SORT_ACCUMULATOR_THRESHOLD;
} | Get exclusive buffer number of accumulator.
<p>The buffer number is used to compare with the subpartition number to determine the type of
{@link BufferAccumulator}.
<p>If the exclusive buffer number is larger than (subpartitionNum + 1), the accumulator will
use {@link HashBufferAccumulator}. If the exclusive buffer number is equal to or smaller than
(subpartitionNum + 1), the accumulator will use {@link SortBufferAccumulator}.
@return the buffer number. | getAccumulatorExclusiveBuffers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | Apache-2.0 |
public static long getPoolSizeCheckInterval() {
return DEFAULT_POOL_SIZE_CHECK_INTERVAL;
} | Get the pool size check interval. | getPoolSizeCheckInterval | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | Apache-2.0 |
public static int getMinBuffersPerGate() {
return DEFAULT_MIN_BUFFERS_PER_GATE;
} | Get the number of minimum buffers per input gate. It is only used when
taskmanager.network.hybrid-shuffle.memory-decoupling.enabled is set to true.
@return the buffer number. | getMinBuffersPerGate | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/common/TieredStorageUtils.java | Apache-2.0 |
void addBuffers(List<FlushedBuffer> buffers) {
if (buffers.isEmpty()) {
return;
}
Map<Integer, List<FixedSizeRegion>> convertedRegions = convertToRegions(buffers);
synchronized (lock) {
convertedRegions.forEach(indexCache::put);
}
} | Add buffers to the index.
@param buffers the buffers to be added. Note that the provided buffers are required to be physically
consecutive and in the same order as in the file. | addBuffers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileIndex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileIndex.java | Apache-2.0 |
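The "physically consecutive" requirement suggests what a conversion to regions looks like: runs of consecutive buffer indexes collapse into one `(firstIndex, numBuffers)` region each. The actual `convertToRegions` is not shown in the source, so the sketch below is speculative and only illustrates that grouping.

```java
import java.util.ArrayList;
import java.util.List;

// Speculative sketch: merge consecutive buffer indexes into runs, each run
// becoming one {firstIndex, numBuffers} pair.
class RegionGroupingSketch {
    static List<int[]> groupConsecutive(int[] bufferIndexes) {
        List<int[]> regions = new ArrayList<>();
        int i = 0;
        while (i < bufferIndexes.length) {
            int first = bufferIndexes[i];
            int n = 1;
            // Extend the run while the next index is exactly one greater.
            while (i + n < bufferIndexes.length && bufferIndexes[i + n] == first + n) {
                n++;
            }
            regions.add(new int[] {first, n});
            i += n;
        }
        return regions;
    }
}
```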
@Override
public boolean containBuffer(int bufferIndex) {
return bufferIndex >= firstBufferIndex && bufferIndex < firstBufferIndex + numBuffers;
} | The number of buffers that the region contains. | containBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileIndex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileIndex.java | Apache-2.0 |
private void lazyInitializeFileChannel() throws IOException {
if (fileChannel == null) {
fileChannel = FileChannel.open(dataFilePath, StandardOpenOption.READ);
}
} | Initialize the file channel in a lazy manner, which can reduce usage of the file descriptor
resource. | lazyInitializeFileChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileReader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileReader.java | Apache-2.0 |
private Tuple2<Integer, Integer> sliceBuffer(
ByteBuffer byteBuffer,
MemorySegment memorySegment,
@Nullable CompositeBuffer partialBuffer,
BufferRecycler bufferRecycler,
List<Buffer> readBuffers) {
checkState(reusedHeaderBuffer.position() == 0);
checkState(partialBuffer == null || partialBuffer.missingLength() > 0);
NetworkBuffer buffer = new NetworkBuffer(memorySegment, bufferRecycler);
buffer.setSize(byteBuffer.remaining());
try {
int numSlicedBytes = 0;
if (partialBuffer != null) {
// If there is a previous small partial buffer, the current read operation should
// read additional data and combine it with the existing partial to construct a new
// complete buffer
buffer.retainBuffer();
int position = byteBuffer.position() + partialBuffer.missingLength();
int numPartialBytes = partialBuffer.missingLength();
partialBuffer.addPartialBuffer(
buffer.readOnlySlice(byteBuffer.position(), numPartialBytes));
numSlicedBytes += numPartialBytes;
byteBuffer.position(position);
readBuffers.add(partialBuffer);
}
partialBuffer = null;
while (byteBuffer.hasRemaining()) {
// Parse the small buffer's header
BufferHeader header = parseBufferHeader(byteBuffer);
if (header == null) {
// If the remaining data length in the buffer is not enough to construct a new
// complete buffer header, drop it directly.
break;
} else {
numSlicedBytes += HEADER_LENGTH;
}
if (header.getLength() <= byteBuffer.remaining()) {
// The remaining data length in the buffer is enough to generate a new small
// sliced network buffer. The small sliced buffer is not a partial buffer, we
// should read the slice of the buffer directly
buffer.retainBuffer();
ReadOnlySlicedNetworkBuffer slicedBuffer =
buffer.readOnlySlice(byteBuffer.position(), header.getLength());
slicedBuffer.setDataType(header.getDataType());
slicedBuffer.setCompressed(header.isCompressed());
byteBuffer.position(byteBuffer.position() + header.getLength());
numSlicedBytes += header.getLength();
readBuffers.add(slicedBuffer);
} else {
// The remaining data length in the buffer is smaller than the actual length of
// the buffer, so we should generate a new partial buffer, allowing for
// generating a new complete buffer during the next read operation
buffer.retainBuffer();
int numPartialBytes = byteBuffer.remaining();
numSlicedBytes += numPartialBytes;
partialBuffer = new CompositeBuffer(header);
partialBuffer.addPartialBuffer(
buffer.readOnlySlice(byteBuffer.position(), numPartialBytes));
readBuffers.add(partialBuffer);
break;
}
}
return Tuple2.of(numSlicedBytes, getPartialBufferReadBytes(partialBuffer));
} catch (Throwable throwable) {
LOG.error("Failed to slice the read buffer {}.", byteBuffer, throwable);
throw throwable;
} finally {
buffer.recycleBuffer();
}
} | Slice the read memory segment into multiple small network buffers.
<p>Note that although the data appears to be split into multiple buffers, the sliced
buffers still share the same underlying memory segment.
@param byteBuffer the byte buffer to be sliced, it points to the underlying memorySegment
@param memorySegment the underlying memory segment to be sliced
@param partialBuffer the partial buffer, if the partial buffer is not null, it contains the
partial data buffer from the previous read
@param readBuffers the read buffers list is to accept the sliced buffers
@return the first field is the number of total sliced bytes, the second field is the bytes of
the partial buffer | sliceBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileReader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileReader.java | Apache-2.0 |
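The control flow of `sliceBuffer` can be modeled without Flink's buffer types: a read chunk holds length-prefixed records, complete records are sliced out, and a record cut off at the chunk boundary is carried over as a partial to be completed by the next chunk. The sketch below is a simplified stand-in; real Flink headers also carry data type and compression flags, and real slices share one memory segment rather than copying bytes.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified model of the slicing loop: [int length][payload] records, with
// partial carry-over across chunks. Trailing bytes too short for a header
// are dropped, mirroring the "drop it directly" branch above.
class FrameSlicerSketch {
    private byte[] partial;        // bytes of an incomplete record, or null
    private int partialMissing;    // bytes still needed to complete it

    List<byte[]> slice(ByteBuffer chunk) {
        List<byte[]> complete = new ArrayList<>();
        if (partial != null) {
            // Finish the record left over from the previous chunk first.
            int take = Math.min(partialMissing, chunk.remaining());
            chunk.get(partial, partial.length - partialMissing, take);
            partialMissing -= take;
            if (partialMissing == 0) {
                complete.add(partial);
                partial = null;
            }
        }
        while (partial == null && chunk.remaining() >= Integer.BYTES) {
            int length = chunk.getInt();
            byte[] record = new byte[length];
            int take = Math.min(length, chunk.remaining());
            chunk.get(record, 0, take);
            if (take < length) {
                partial = record;               // carry tail to the next chunk
                partialMissing = length - take;
            } else {
                complete.add(record);
            }
        }
        return complete;
    }
}
```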
private void flush(
List<SubpartitionBufferContext> toWrite, CompletableFuture<Void> flushSuccessNotifier) {
try {
List<ProducerMergedPartitionFileIndex.FlushedBuffer> buffers = new ArrayList<>();
calculateSizeAndFlushBuffers(toWrite, buffers);
partitionFileIndex.addBuffers(buffers);
flushSuccessNotifier.complete(null);
} catch (IOException exception) {
ExceptionUtils.rethrow(exception);
}
} | Called in single-threaded ioExecutor. Order is guaranteed. | flush | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | Apache-2.0 |
private void calculateSizeAndFlushBuffers(
List<SubpartitionBufferContext> toWrite,
List<ProducerMergedPartitionFileIndex.FlushedBuffer> buffers)
throws IOException {
List<Tuple2<Buffer, Integer>> buffersToFlush = new ArrayList<>();
long expectedBytes = 0;
for (SubpartitionBufferContext subpartitionBufferContext : toWrite) {
int subpartitionId = subpartitionBufferContext.getSubpartitionId();
for (SegmentBufferContext segmentBufferContext :
subpartitionBufferContext.getSegmentBufferContexts()) {
List<Tuple2<Buffer, Integer>> bufferAndIndexes =
segmentBufferContext.getBufferAndIndexes();
buffersToFlush.addAll(bufferAndIndexes);
for (Tuple2<Buffer, Integer> bufferWithIndex :
segmentBufferContext.getBufferAndIndexes()) {
Buffer buffer = bufferWithIndex.f0;
buffers.add(
new ProducerMergedPartitionFileIndex.FlushedBuffer(
subpartitionId,
bufferWithIndex.f1,
totalBytesWritten + expectedBytes,
buffer.readableBytes() + BufferReaderWriterUtil.HEADER_LENGTH));
expectedBytes += buffer.readableBytes() + BufferReaderWriterUtil.HEADER_LENGTH;
}
}
}
flushBuffers(buffersToFlush, expectedBytes);
buffersToFlush.forEach(bufferWithIndex -> bufferWithIndex.f0.recycleBuffer());
} | Compute each buffer's file offset and create the buffers to be flushed.
@param toWrite all buffers to write to create {@link
ProducerMergedPartitionFileIndex.FlushedBuffer}s
@param buffers receive the created {@link ProducerMergedPartitionFileIndex.FlushedBuffer} | calculateSizeAndFlushBuffers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | Apache-2.0 |
private void flushBuffers(List<Tuple2<Buffer, Integer>> bufferAndIndexes, long expectedBytes)
throws IOException {
if (bufferAndIndexes.isEmpty()) {
return;
}
ByteBuffer[] bufferWithHeaders = generateBufferWithHeaders(bufferAndIndexes);
BufferReaderWriterUtil.writeBuffers(dataFileChannel, expectedBytes, bufferWithHeaders);
totalBytesWritten += expectedBytes;
} | Write all buffers to the disk. | flushBuffers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileWriter.java | Apache-2.0 |
private void flush(
TieredStoragePartitionId partitionId,
int subpartitionId,
int segmentId,
List<Tuple2<Buffer, Integer>> buffersToFlush) {
try {
writeBuffers(
partitionId,
subpartitionId,
segmentId,
buffersToFlush,
getTotalBytes(buffersToFlush));
buffersToFlush.forEach(bufferToFlush -> bufferToFlush.f0.recycleBuffer());
} catch (IOException exception) {
ExceptionUtils.rethrow(exception);
}
} | This method is only called by the flushing thread. | flush | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/SegmentPartitionFileWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/SegmentPartitionFileWriter.java | Apache-2.0 |
public ResultSubpartitionView createResultSubpartitionView(
TieredStoragePartitionId partitionId,
TieredStorageSubpartitionId subpartitionId,
BufferAvailabilityListener availabilityListener) {
List<NettyServiceProducer> serviceProducers = registeredServiceProducers.get(partitionId);
if (serviceProducers == null) {
return new TieredStorageResultSubpartitionView(
availabilityListener, new ArrayList<>(), new ArrayList<>(), new ArrayList<>());
}
List<NettyPayloadManager> nettyPayloadManagers = new ArrayList<>();
List<NettyConnectionId> nettyConnectionIds = new ArrayList<>();
List<NettyConnectionWriterImpl> writers = new ArrayList<>();
for (int i = 0; i < serviceProducers.size(); i++) {
NettyPayloadManager nettyPayloadManager = new NettyPayloadManager();
NettyConnectionWriterImpl writer = new NettyConnectionWriterImpl(nettyPayloadManager);
nettyConnectionIds.add(writer.getNettyConnectionId());
nettyPayloadManagers.add(nettyPayloadManager);
writers.add(writer);
}
ResultSubpartitionView result =
new TieredStorageResultSubpartitionView(
availabilityListener,
nettyPayloadManagers,
nettyConnectionIds,
registeredServiceProducers.get(partitionId));
for (int i = 0; i < writers.size(); i++) {
writers.get(i).registerAvailabilityListener(result::notifyDataAvailable);
serviceProducers.get(i).connectionEstablished(subpartitionId, writers.get(i));
}
return result;
} | Create a {@link ResultSubpartitionView} for the netty server.
@param partitionId partition id indicates the unique id of {@link TieredResultPartition}.
@param subpartitionId subpartition id indicates the unique id of subpartition.
@param availabilityListener the listener used to listen for the availability of data.
@return the {@link TieredStorageResultSubpartitionView}. | createResultSubpartitionView | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/netty/TieredStorageNettyServiceImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/netty/TieredStorageNettyServiceImpl.java | Apache-2.0 |
public boolean trySetChannel(int channelIndex, Supplier<InputChannel> channelSupplier) {
if (isChannelSet()) {
return false;
}
checkArgument(channelIndex >= 0);
this.channelIndex = channelIndex;
this.channelSupplier = checkNotNull(channelSupplier);
tryCreateNettyConnectionReader();
return true;
} | Try to set the input channel.
@param channelIndex the index of the channel.
@param channelSupplier the supplier that provides the channel.
@return true if the channel is successfully set, or false if the registration already has
an input channel. | trySetChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/netty/TieredStorageNettyServiceImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/netty/TieredStorageNettyServiceImpl.java | Apache-2.0 |
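`trySetChannel` follows a set-once registration pattern: the first caller wins and later attempts are rejected instead of overwriting the channel. A generic sketch of that pattern (names are illustrative, and `synchronized` stands in for whatever synchronization the real class uses):

```java
// Set-once holder: trySet succeeds only for the first non-null value.
class SetOnceSketch<T> {
    private T value;

    synchronized boolean trySet(T newValue) {
        if (value != null) {
            return false; // already registered, reject instead of overwrite
        }
        if (newValue == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        value = newValue;
        return true;
    }

    synchronized T get() {
        return value;
    }
}
```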
public static EmptyTieredShuffleMasterSnapshot getInstance() {
return INSTANCE;
} | A singleton implementation of {@link TieredShuffleMasterSnapshot} that represents an empty
snapshot of tiered shuffle master. | getInstance | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/EmptyTieredShuffleMasterSnapshot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/EmptyTieredShuffleMasterSnapshot.java | Apache-2.0 |
public void registerJob(JobShuffleContext context) {
tieredStorageMasterClient.registerJob(context.getJobId(), getTierShuffleHandler(context));
} | Registers the target job together with the corresponding {@link JobShuffleContext} to this
shuffle master. | registerJob | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/TieredInternalShuffleMaster.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/TieredInternalShuffleMaster.java | Apache-2.0 |
private static TierFactory loadTierFactory(String tierFactoryClassName) {
TierFactory tierFactory = null;
try {
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
tierFactory =
InstantiationUtil.instantiate(
tierFactoryClassName, TierFactory.class, classLoader);
} catch (FlinkException e) {
ExceptionUtils.rethrow(e);
}
return tierFactory;
} | Loads and instantiates a {@link TierFactory} based on the provided class name. | loadTierFactory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/TierFactoryInitializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/shuffle/TierFactoryInitializer.java | Apache-2.0 |
private static List<List<TierShuffleDescriptor>> transformTierShuffleDescriptors(
List<List<TierShuffleDescriptor>> shuffleDescriptors) {
int numTiers = 0;
int numPartitions = shuffleDescriptors.size();
for (List<TierShuffleDescriptor> tierShuffleDescriptors : shuffleDescriptors) {
if (numTiers == 0) {
numTiers = tierShuffleDescriptors.size();
}
checkState(numTiers == tierShuffleDescriptors.size());
}
List<List<TierShuffleDescriptor>> transformedList = new ArrayList<>();
for (int i = 0; i < numTiers; i++) {
List<TierShuffleDescriptor> innerList = new ArrayList<>();
for (int j = 0; j < numPartitions; j++) {
innerList.add(shuffleDescriptors.get(j).get(i));
}
transformedList.add(innerList);
}
return transformedList;
} | Before transforming the shuffle descriptors, the number of tier shuffle descriptors is
numPartitions * numTiers (That means shuffleDescriptors.size() is numPartitions, while the
shuffleDescriptors.get(0).size() is numTiers). After transforming, the number of tier shuffle
descriptors is numTiers * numPartitions (That means transformedList.size() is numTiers, while
transformedList.get(0).size() is numPartitions). | transformTierShuffleDescriptors | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageConsumerClient.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageConsumerClient.java | Apache-2.0 |
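The reshaping described above is a plain matrix transpose over nested lists: numPartitions outer lists of numTiers elements become numTiers outer lists of numPartitions elements. A minimal standalone sketch of that transformation (the class name `TransposeSketch` and the generic element type are illustrative, not part of Flink):

```java
import java.util.ArrayList;
import java.util.List;

public class TransposeSketch {

    // Generic version of transformTierShuffleDescriptors: turns a
    // numPartitions x numTiers nesting into numTiers x numPartitions.
    public static <T> List<List<T>> transpose(List<List<T>> input) {
        int numInner = input.isEmpty() ? 0 : input.get(0).size();
        List<List<T>> transformed = new ArrayList<>();
        for (int i = 0; i < numInner; i++) {
            List<T> inner = new ArrayList<>();
            for (List<T> outer : input) {
                // Mirrors the checkState in the original: every outer
                // list is expected to have the same length.
                inner.add(outer.get(i));
            }
            transformed.add(inner);
        }
        return transformed;
    }
}
```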
@Nullable
private MemorySegment requestBufferBlockingFromPool() {
MemorySegment memorySegment = null;
hardBackpressureTimerGauge.markStart();
while (numRequestedBuffers.get() < bufferPool.getNumBuffers()) {
memorySegment = bufferPool.requestMemorySegment();
if (memorySegment == null) {
try {
// Wait until a buffer is available or timeout before entering the next loop
// iteration.
bufferPool.getAvailableFuture().get(100, TimeUnit.MILLISECONDS);
} catch (TimeoutException ignored) {
} catch (Exception e) {
ExceptionUtils.rethrow(e);
}
} else {
numRequestedBuffers.incrementAndGet();
break;
}
}
hardBackpressureTimerGauge.markEnd();
return memorySegment;
} | @return a memory segment from the buffer pool or null if the memory manager has requested all
segments of the buffer pool. | requestBufferBlockingFromPool | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageMemoryManagerImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageMemoryManagerImpl.java | Apache-2.0 |
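The loop above implements a common bounded-wait pattern: try a non-blocking request, and on failure park on the pool's availability future for at most 100 ms before re-checking the loop condition, so that concurrent changes to the pool size are observed. A generic sketch of that pattern (the name `pollUntilAvailable` and the attempt bound are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class BoundedWait {

    /**
     * Repeatedly tries a non-blocking acquisition; between attempts it waits
     * up to 100 ms on an availability future, swallowing the timeout so the
     * loop condition is re-evaluated (as in requestBufferBlockingFromPool).
     */
    public static <T> T pollUntilAvailable(
            Supplier<T> tryAcquire,
            Supplier<CompletableFuture<?>> availability,
            int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            T resource = tryAcquire.get();
            if (resource != null) {
                return resource;
            }
            try {
                availability.get().get(100, TimeUnit.MILLISECONDS);
            } catch (TimeoutException ignored) {
                // Timed out: loop around and re-check, like the original.
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        return null; // gave up, mirroring the nullable return of the original
    }
}
```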
public void registerResource(
TieredStorageDataIdentifier owner, TieredStorageResource tieredStorageResource) {
registeredResources
.computeIfAbsent(owner, (ignore) -> new ArrayList<>())
.add(tieredStorageResource);
} | Register a new resource for the given owner.
@param owner identifier of the data that the resource corresponds to.
@param tieredStorageResource the tiered storage resources to be registered. | registerResource | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageResourceRegistry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageResourceRegistry.java | Apache-2.0 |
public void clearResourceFor(TieredStorageDataIdentifier owner) {
List<TieredStorageResource> cleanersForOwner = registeredResources.remove(owner);
if (cleanersForOwner != null) {
cleanersForOwner.forEach(TieredStorageResource::release);
}
} | Remove all resources for the given owner.
@param owner identifier of the data that the resources correspond to. | clearResourceFor | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageResourceRegistry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/storage/TieredStorageResourceRegistry.java | Apache-2.0 |
default void snapshotState(
CompletableFuture<TieredShuffleMasterSnapshot> snapshotFuture,
ShuffleMasterSnapshotContext context,
JobID jobId) {
snapshotFuture.complete(EmptyTieredShuffleMasterSnapshot.getInstance());
} | Triggers a snapshot of the tier master agent's state which related the specified job. | snapshotState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | Apache-2.0 |
default CompletableFuture<Map<ResultPartitionID, ShuffleMetrics>> getPartitionWithMetrics(
JobID jobId, Duration timeout, Set<ResultPartitionID> expectedPartitions) {
if (!partitionInRemote()) {
return CompletableFuture.completedFuture(Collections.emptyMap());
} else {
throw new UnsupportedOperationException(
"remote partition should be reported by tier itself.");
}
} | Retrieves specified partitions and their metrics (identified by {@code expectedPartitions}),
the metrics include sizes of sub-partitions in a result partition.
@param jobId ID of the target job
@param timeout The timeout used for retrieving the specified partitions.
@param expectedPartitions The set of identifiers for the result partitions whose metrics are
to be fetched.
@return A future that will contain a map of the partitions with their metrics that could be
retrieved from the expected partitions within the specified timeout period. | getPartitionWithMetrics | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | Apache-2.0 |
default boolean partitionInRemote() {
return false;
} | Whether this tier manages the partition in a remote cluster instead of the Flink TaskManager. | partitionInRemote | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/TierMasterAgent.java | Apache-2.0 |
void append(Buffer buffer, int subpartitionId, boolean flush) {
subpartitionCacheManagers[subpartitionId].append(buffer);
increaseNumCachedBytesAndCheckFlush(buffer.readableBytes(), flush);
} | Append buffer to {@link DiskCacheManager}.
@param buffer to be managed by this class.
@param subpartitionId the subpartition of this record.
@param flush whether it is allowed to flush the cache after this buffer is appended. | append | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | Apache-2.0 |
void appendEndOfSegmentEvent(ByteBuffer record, int subpartitionId) {
subpartitionCacheManagers[subpartitionId].appendEndOfSegmentEvent(record);
increaseNumCachedBytesAndCheckFlush(record.remaining(), true);
} | Append the end-of-segment event to {@link DiskCacheManager}, which indicates the segment has
finished.
@param record the end-of-segment event
@param subpartitionId target subpartition of this record. | appendEndOfSegmentEvent | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | Apache-2.0 |
int getBufferIndex(int subpartitionId) {
return subpartitionCacheManagers[subpartitionId].getBufferIndex();
} | Return the current buffer index.
@param subpartitionId the target subpartition id
@return the finished buffer index | getBufferIndex | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/DiskCacheManager.java | Apache-2.0 |
private void addBuffer(Buffer buffer) {
synchronized (allBuffers) {
allBuffers.add(new Tuple2<>(buffer, bufferIndex));
}
bufferIndex++;
} | This method is only called by the task thread. | addBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/SubpartitionDiskCacheManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/disk/SubpartitionDiskCacheManager.java | Apache-2.0 |
private FileSystem createFileSystem() {
FileSystem fileSystem = null;
try {
fileSystem = new Path(baseRemoteStoragePath).getFileSystem();
} catch (IOException e) {
ExceptionUtils.rethrow(
e, "Failed to initialize file system on the path: " + baseRemoteStoragePath);
}
return fileSystem;
} | The key is partition id and subpartition id, the value is max id of written segment files in
the subpartition. | createFileSystem | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | Apache-2.0 |
public void watchSegment(
TieredStoragePartitionId partitionId,
TieredStorageSubpartitionId subpartitionId,
int segmentId) {
Tuple2<TieredStoragePartitionId, TieredStorageSubpartitionId> key =
Tuple2.of(partitionId, subpartitionId);
scannedMaxSegmentIds.compute(
key,
(segmentKey, maxSegmentId) -> {
if (maxSegmentId == null || maxSegmentId < segmentId) {
requiredSegmentIds.put(segmentKey, segmentId);
}
return maxSegmentId;
});
} | Watch the segment for a specific subpartition in the {@link RemoteStorageScanner}.
<p>If a segment with a larger or equal id already exists, the current segment won't be
watched.
<p>If a segment with a smaller segment id is still being watched, the current segment will
replace it because the smaller segment should have been consumed. This method ensures that
only one segment file can be watched for each subpartition.
@param partitionId is the id of partition.
@param subpartitionId is the id of subpartition.
@param segmentId is the id of segment. | watchSegment | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | Apache-2.0 |
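The `compute` call above is worth noting: it is used purely for its per-key atomicity. The remapping function updates a second map as a side effect and returns the old value unchanged, so the scanned-max entry itself is never modified. A simplified sketch of the same idiom (string keys stand in for the partition/subpartition tuple; this is an assumption for illustration, not the Flink types):

```java
import java.util.concurrent.ConcurrentHashMap;

public class WatchSketch {

    final ConcurrentHashMap<String, Integer> scannedMaxSegmentIds = new ConcurrentHashMap<>();
    final ConcurrentHashMap<String, Integer> requiredSegmentIds = new ConcurrentHashMap<>();

    /** Records the requirement only if no larger-or-equal segment was scanned yet. */
    public void watchSegment(String key, int segmentId) {
        scannedMaxSegmentIds.compute(
                key,
                (k, maxSegmentId) -> {
                    if (maxSegmentId == null || maxSegmentId < segmentId) {
                        requiredSegmentIds.put(k, segmentId);
                    }
                    // Returning the old value leaves the mapping untouched;
                    // compute is only used here to lock the key atomically.
                    return maxSegmentId;
                });
    }
}
```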
int getInterval(int lastInterval) {
return Math.min(lastInterval * 2, maxScanInterval);
} | The strategy is used to decide the scan interval of {@link RemoteStorageScanner}. The
interval doubles on each update and is capped at the max value. | getInterval | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteStorageScanner.java | Apache-2.0 |
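The interval policy is a standard capped exponential backoff. A tiny sketch with the cap passed in explicitly (in the original, `maxScanInterval` is a field of the scanner):

```java
public class ScanIntervalBackoff {

    /** Doubles the last interval and clamps it to the configured maximum. */
    public static int nextInterval(int lastInterval, int maxScanInterval) {
        return Math.min(lastInterval * 2, maxScanInterval);
    }
}
```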
public TieredStoragePartitionId getPartitionId() {
return partitionId;
} | The shuffle descriptor implementation for remote tier. | getPartitionId | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteTierShuffleDescriptor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/RemoteTierShuffleDescriptor.java | Apache-2.0 |
private static void checkSegmentIdNotEmpty(int segmentId) {
checkArgument(segmentId >= 0, "Segment id must be non-negative.");
} | This {@link SubpartitionRemoteCacheManager} is responsible for managing the buffers in a single
subpartition. | checkSegmentIdNotEmpty | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/SubpartitionRemoteCacheManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/tier/remote/SubpartitionRemoteCacheManager.java | Apache-2.0 |
public DataInputView getReadEndAfterSuperstepEnded() {
try {
return queue.take().switchBuffers();
} catch (InterruptedException | IOException e) {
throw new RuntimeException(e);
}
} | Called by iteration head after it has sent all input for the current superstep through the
data channel (blocks iteration head). | getReadEndAfterSuperstepEnded | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/BlockingBackChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/BlockingBackChannel.java | Apache-2.0 |
public DataOutputView getWriteEnd() {
return buffer;
} | Called by iteration tail to save the output of the current superstep. | getWriteEnd | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/BlockingBackChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/BlockingBackChannel.java | Apache-2.0 |
public void handIn(String key, V obj) {
if (!retrieveSharedQueue(key).offer(obj)) {
throw new RuntimeException(
"Could not register the given element, broker slot is already occupied.");
}
} | Hand in the object to share. | handIn | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/Broker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/Broker.java | Apache-2.0 |
private BlockingQueue<V> retrieveSharedQueue(String key) {
BlockingQueue<V> queue = mediations.get(key);
if (queue == null) {
queue = new ArrayBlockingQueue<V>(1);
BlockingQueue<V> commonQueue = mediations.putIfAbsent(key, queue);
return commonQueue != null ? commonQueue : queue;
} else {
return queue;
}
} | Thread-safe call to get a shared {@link BlockingQueue}. | retrieveSharedQueue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/Broker.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/Broker.java | Apache-2.0 |
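Together, `handIn` and `retrieveSharedQueue` form a one-slot rendezvous: a capacity-1 queue is created lazily, and the `putIfAbsent` race guarantees that two threads asking for the same key always end up sharing one queue. A condensed, runnable sketch of the pattern (the `getAndRemove` consumer helper is an assumption modeled on typical broker usage, not copied from Flink):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BrokerSketch<V> {

    private final ConcurrentMap<String, BlockingQueue<V>> mediations =
            new ConcurrentHashMap<>();

    /** Hands in the object to share; fails if the one-element slot is taken. */
    public void handIn(String key, V obj) {
        if (!retrieveSharedQueue(key).offer(obj)) {
            throw new IllegalStateException("broker slot is already occupied");
        }
    }

    /** Blocks until a value is handed in, then clears the slot (assumed helper). */
    public V getAndRemove(String key) {
        try {
            V value = retrieveSharedQueue(key).take();
            mediations.remove(key);
            return value;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    /** The losing thread of the putIfAbsent race adopts the winner's queue. */
    private BlockingQueue<V> retrieveSharedQueue(String key) {
        BlockingQueue<V> queue = mediations.get(key);
        if (queue == null) {
            queue = new ArrayBlockingQueue<>(1);
            BlockingQueue<V> common = mediations.putIfAbsent(key, queue);
            return common != null ? common : queue;
        }
        return queue;
    }
}
```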
public void setup() {
latch = new CountDownLatch(1);
} | Resettable barrier to synchronize the {@link IterationHeadTask} and the {@link IterationTailTask}
in case of iterations that contain a separate solution set tail. | setup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/SolutionSetUpdateBarrier.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/concurrent/SolutionSetUpdateBarrier.java | Apache-2.0 |
@Override
public boolean isConverged(int iteration, LongValue value) {
long updatedElements = value.getValue();
if (log.isInfoEnabled()) {
log.info(
"["
+ updatedElements
+ "] elements updated in the solutionset in iteration ["
+ iteration
+ "]");
}
return updatedElements == 0;
} | A workset iteration is by definition converged if no records have been updated in the
solution set. | isConverged | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/iterative/convergence/WorksetEmptyConvergenceCriterion.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/iterative/convergence/WorksetEmptyConvergenceCriterion.java | Apache-2.0 |
public IntermediateDataSet getSource() {
return source;
} | Returns the data set at the source of the edge. May be null, if the edge refers to the source
via an ID and has not been connected.
@return The data set at the source of the edge | getSource | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public JobVertex getTarget() {
return target;
} | Returns the vertex connected to this edge.
@return The vertex connected to this edge. | getTarget | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public DistributionPattern getDistributionPattern() {
return this.distributionPattern;
} | Returns the distribution pattern used for this edge.
@return The distribution pattern used for this edge. | getDistributionPattern | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public IntermediateDataSetID getSourceId() {
return source.getId();
} | Gets the ID of the consumed data set.
@return The ID of the consumed data set. | getSourceId | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public String getShipStrategyName() {
return shipStrategyName;
} | Gets the name of the ship strategy for the represented input, like "forward", "partition
hash", "rebalance", "broadcast", ...
@return The name of the ship strategy for the represented input, or null, if none was set. | getShipStrategyName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public void setShipStrategyName(String shipStrategyName) {
this.shipStrategyName = shipStrategyName;
} | Sets the name of the ship strategy for the represented input.
@param shipStrategyName The name of the ship strategy. | setShipStrategyName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public boolean isBroadcast() {
return isBroadcast;
} | Gets whether the edge is a broadcast edge. | isBroadcast | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public boolean isForward() {
return isForward;
} | Gets whether the edge is a forward edge. | isForward | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public SubtaskStateMapper getDownstreamSubtaskStateMapper() {
return downstreamSubtaskStateMapper;
} | Gets the channel state rescaler used for rescaling persisted data on downstream side of this
JobEdge.
@return The channel state rescaler to use, or null, if none was set. | getDownstreamSubtaskStateMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public void setDownstreamSubtaskStateMapper(SubtaskStateMapper downstreamSubtaskStateMapper) {
this.downstreamSubtaskStateMapper = checkNotNull(downstreamSubtaskStateMapper);
} | Sets the channel state rescaler used for rescaling persisted data on downstream side of this
JobEdge.
@param downstreamSubtaskStateMapper The channel state rescaler selector to use. | setDownstreamSubtaskStateMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public SubtaskStateMapper getUpstreamSubtaskStateMapper() {
return upstreamSubtaskStateMapper;
} | Gets the channel state rescaler used for rescaling persisted data on upstream side of this
JobEdge.
@return The channel state rescaler to use, or null, if none was set. | getUpstreamSubtaskStateMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public void setUpstreamSubtaskStateMapper(SubtaskStateMapper upstreamSubtaskStateMapper) {
this.upstreamSubtaskStateMapper = checkNotNull(upstreamSubtaskStateMapper);
} | Sets the channel state rescaler used for rescaling persisted data on upstream side of this
JobEdge.
@param upstreamSubtaskStateMapper The channel state rescaler selector to use. | setUpstreamSubtaskStateMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public String getPreProcessingOperationName() {
return preProcessingOperationName;
} | Gets the name of the pre-processing operation for this input.
@return The name of the pre-processing operation, or null, if none was set. | getPreProcessingOperationName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public void setPreProcessingOperationName(String preProcessingOperationName) {
this.preProcessingOperationName = preProcessingOperationName;
} | Sets the name of the pre-processing operation for this input.
@param preProcessingOperationName The name of the pre-processing operation. | setPreProcessingOperationName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public String getOperatorLevelCachingDescription() {
return operatorLevelCachingDescription;
} | Gets the operator-level caching description for this input.
@return The description of operator-level caching, or null, if none was set. | getOperatorLevelCachingDescription | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public void setOperatorLevelCachingDescription(String operatorLevelCachingDescription) {
this.operatorLevelCachingDescription = operatorLevelCachingDescription;
} | Sets the operator-level caching description for this input.
@param operatorLevelCachingDescription The description of operator-level caching. | setOperatorLevelCachingDescription | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
public boolean areInterInputsKeysCorrelated() {
return interInputsKeysCorrelated;
} | Gets whether records with the same key on this edge are correlated with other inputs. | areInterInputsKeysCorrelated | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobEdge.java | Apache-2.0 |
@Override
public JobID getJobID() {
return this.jobID;
} | Returns the ID of the job.
@return the ID of the job | getJobID | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public void setJobID(JobID jobID) {
this.jobID = jobID;
} | Sets the ID of the job. | setJobID | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
@Override
public Configuration getJobConfiguration() {
return this.jobConfiguration;
} | Returns the configuration object for this job. Job-wide parameters should be set into that
configuration object.
@return The configuration object for this job. | getJobConfiguration | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
@Override
public void setSavepointRestoreSettings(SavepointRestoreSettings settings) {
this.savepointRestoreSettings = checkNotNull(settings, "Savepoint restore settings");
} | Sets the savepoint restore settings.
@param settings The savepoint restore settings. | setSavepointRestoreSettings | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
@Override
public SavepointRestoreSettings getSavepointRestoreSettings() {
return savepointRestoreSettings;
} | Returns the configured savepoint restore setting.
@return The configured savepoint restore settings. | getSavepointRestoreSettings | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public void addVertex(JobVertex vertex) {
final JobVertexID id = vertex.getID();
JobVertex previous = taskVertices.put(id, vertex);
// if we had a prior association, restore and throw an exception
if (previous != null) {
taskVertices.put(id, previous);
throw new IllegalArgumentException(
"The JobGraph already contains a vertex with that id.");
}
} | Adds a new task vertex to the job graph if it is not already included.
@param vertex the new task vertex to be added | addVertex | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
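The put-then-restore dance in `addVertex` exists because a plain `put` would clobber the existing vertex before the duplicate is detected. With `putIfAbsent` the same contract can be expressed in one step; a sketch with strings standing in for `JobVertexID`/`JobVertex` (those stand-ins are for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class VertexRegistrySketch {

    private final Map<String, String> taskVertices = new HashMap<>();

    /** Rejects duplicates without ever overwriting the existing entry. */
    public void addVertex(String id, String vertex) {
        if (taskVertices.putIfAbsent(id, vertex) != null) {
            throw new IllegalArgumentException(
                    "The graph already contains a vertex with that id.");
        }
    }

    public String findVertexById(String id) {
        return taskVertices.get(id);
    }
}
```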
public Iterable<JobVertex> getVertices() {
return this.taskVertices.values();
} | Returns an Iterable to iterate all vertices registered with the job graph.
@return an Iterable to iterate all vertices registered with the job graph | getVertices | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public JobVertex[] getVerticesAsArray() {
return this.taskVertices.values().toArray(new JobVertex[this.taskVertices.size()]);
} | Returns an array of all job vertices that are registered with the job graph. The order in
which the vertices appear in the list is not defined.
@return an array of all job vertices that are registered with the job graph | getVerticesAsArray | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public Set<CoLocationGroup> getCoLocationGroups() {
final Set<CoLocationGroup> coLocationGroups =
IterableUtils.toStream(getVertices())
.map(JobVertex::getCoLocationGroup)
.filter(Objects::nonNull)
.collect(Collectors.toSet());
return Collections.unmodifiableSet(coLocationGroups);
} | Returns all {@link CoLocationGroup} instances associated with this {@code JobGraph}.
@return The associated {@code CoLocationGroup} instances. | getCoLocationGroups | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public void setSnapshotSettings(JobCheckpointingSettings settings) {
this.snapshotSettings = settings;
} | Sets the settings for asynchronous snapshots. A value of {@code null} means that snapshotting
is not enabled.
@param settings The snapshot settings | setSnapshotSettings | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
@Override
public JobCheckpointingSettings getCheckpointingSettings() {
return snapshotSettings;
} | Gets the settings for asynchronous snapshots. This method returns null, when checkpointing is
not enabled.
@return The snapshot settings | getCheckpointingSettings | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public JobVertex findVertexByID(JobVertexID id) {
return this.taskVertices.get(id);
} | Searches for a vertex with a matching ID and returns it.
@param id the ID of the vertex to search for
@return the vertex with the matching ID or <code>null</code> if no vertex with such ID could
be found | findVertexByID | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public void addUserArtifact(String name, DistributedCache.DistributedCacheEntry file) {
if (file == null) {
throw new IllegalArgumentException();
}
userArtifacts.putIfAbsent(name, file);
} | Adds the path of a custom file required to run the job on a task manager.
@param name a name under which this artifact will be accessible through {@link
DistributedCache}
@param file path of a custom file required to run the job on a task manager | addUserArtifact | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
@Override
public void addUserJarBlobKey(PermanentBlobKey key) {
if (key == null) {
throw new IllegalArgumentException();
}
if (!userJarBlobKeys.contains(key)) {
userJarBlobKeys.add(key);
}
} | Adds the BLOB referenced by the key to the JobGraph's dependencies.
@param key path of the JAR file required to run the job on a task manager | addUserJarBlobKey | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public boolean hasUsercodeJarFiles() {
return this.userJars.size() > 0;
} | Checks whether the JobGraph has user code JAR files attached.
@return True, if the JobGraph has user code JAR files attached, false otherwise. | hasUsercodeJarFiles | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobGraph.java | Apache-2.0 |
public static void writeToExecutionPlan(
ExecutionPlan executionPlan, JobResourceRequirements jobResourceRequirements)
throws IOException {
InstantiationUtil.writeObjectToConfig(
jobResourceRequirements,
executionPlan.getJobConfiguration(),
JOB_RESOURCE_REQUIREMENTS_KEY);
} | Write {@link JobResourceRequirements resource requirements} into the configuration of a given
{@link ExecutionPlan}.
@param executionPlan executionPlan to write requirements to
@param jobResourceRequirements resource requirements to write
@throws IOException in case we're not able to serialize requirements into the configuration | writeToExecutionPlan | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobResourceRequirements.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobResourceRequirements.java | Apache-2.0 |
public static Optional<JobResourceRequirements> readFromExecutionPlan(
ExecutionPlan executionPlan) throws IOException {
try {
return Optional.ofNullable(
InstantiationUtil.readObjectFromConfig(
executionPlan.getJobConfiguration(),
JOB_RESOURCE_REQUIREMENTS_KEY,
JobResourceRequirements.class.getClassLoader()));
} catch (ClassNotFoundException e) {
throw new IOException(
"Unable to deserialize JobResourceRequirements due to missing classes. This might happen when the ExecutionPlan was written from a different Flink version.",
e);
}
} | Read {@link JobResourceRequirements resource requirements} from the configuration of a given
{@link ExecutionPlan}.
@param executionPlan execution plan to read requirements from
@throws IOException in case we're not able to deserialize requirements from the configuration | readFromExecutionPlan | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobResourceRequirements.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobResourceRequirements.java | Apache-2.0 |
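The two helpers above form a serialize/deserialize round trip through the job configuration. The following is a self-contained sketch of the same pattern, with a plain `HashMap<String, byte[]>` and JDK object streams standing in for Flink's `Configuration` and `InstantiationUtil` (the key string and class names here are illustrative, not Flink's):

```java
import java.io.*;
import java.util.*;

// Sketch of the write/read pattern used by JobResourceRequirements:
// serialize an object into a byte[] stored under a key in a map-like
// "configuration", and read it back with plain Java serialization.
public class RequirementsConfigDemo {

    static final String KEY = "$internal.job-resource-requirements"; // illustrative key

    static void writeObjectToConfig(Serializable value, Map<String, byte[]> config, String key)
            throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        config.put(key, bos.toByteArray());
    }

    static Object readObjectFromConfig(Map<String, byte[]> config, String key)
            throws IOException, ClassNotFoundException {
        byte[] bytes = config.get(key);
        if (bytes == null) {
            return null; // mirrors the Optional.ofNullable(...) on the caller side
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, byte[]> jobConfiguration = new HashMap<>();
        writeObjectToConfig("maxParallelism=128", jobConfiguration, KEY);
        Optional<Object> roundTripped =
                Optional.ofNullable(readObjectFromConfig(jobConfiguration, KEY));
        System.out.println(roundTripped.get()); // prints: maxParallelism=128
    }
}
```

Wrapping the nullable read result in `Optional.ofNullable` matches how `readFromExecutionPlan` distinguishes "no requirements stored" from an actual value.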
public JobVertexID getID() {
return this.id;
} | Returns the ID of this job vertex.
@return The ID of this job vertex | getID | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public void setName(String name) {
this.name = name == null ? DEFAULT_NAME : name;
} | Sets the name of the vertex.
@param name The new name. | setName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public int getNumberOfProducedIntermediateDataSets() {
return this.results.size();
} | Returns the number of produced intermediate data sets.
@return The number of produced intermediate data sets. | getNumberOfProducedIntermediateDataSets | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public String getInvokableClassName() {
return this.invokableClassName;
} | Returns the name of the invokable class which represents the task of this vertex.
@return The name of the invokable class, <code>null</code> if not set. | getInvokableClassName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public Class<? extends TaskInvokable> getInvokableClass(ClassLoader cl) {
if (cl == null) {
throw new NullPointerException("The classloader must not be null.");
}
if (invokableClassName == null) {
return null;
}
try {
return Class.forName(invokableClassName, true, cl).asSubclass(TaskInvokable.class);
} catch (ClassNotFoundException e) {
throw new RuntimeException("The user-code class could not be resolved.", e);
} catch (ClassCastException e) {
throw new RuntimeException(
"The user-code class is no subclass of " + TaskInvokable.class.getName(), e);
}
} | Returns the invokable class which represents the task of this vertex.
@param cl The classloader used to resolve user-defined classes
@return The invokable class, <code>null</code> if it is not set | getInvokableClass | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
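The getter above resolves the class by name and narrows it with `asSubclass`, mapping each failure mode (class missing vs. wrong supertype) to a distinct error. A minimal sketch of the same resolve-and-cast pattern follows; the `Task` interface and class names are illustrative stand-ins, not Flink's `TaskInvokable`:

```java
// Sketch of the resolve-and-cast pattern used by getInvokableClass:
// load a class by name through a given ClassLoader, then narrow it
// to an expected supertype so a mismatch fails fast with a clear error.
public class InvokableResolveDemo {

    /** Stand-in for Flink's TaskInvokable interface. */
    public interface Task {}

    /** A class that implements Task and can be resolved by name. */
    public static class GreetingTask implements Task {}

    /** A class that does NOT implement Task, to show the failure path. */
    public static class NotATask {}

    public static Class<? extends Task> resolve(String className, ClassLoader cl) {
        if (cl == null) {
            throw new NullPointerException("The classloader must not be null.");
        }
        try {
            // 'true' initializes the class; asSubclass throws on a type mismatch.
            return Class.forName(className, true, cl).asSubclass(Task.class);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("The class could not be resolved.", e);
        } catch (ClassCastException e) {
            throw new RuntimeException(
                    "The class is no subclass of " + Task.class.getName(), e);
        }
    }

    public static void main(String[] args) {
        ClassLoader cl = InvokableResolveDemo.class.getClassLoader();
        Class<? extends Task> ok = resolve("InvokableResolveDemo$GreetingTask", cl);
        System.out.println("resolved: " + ok.getSimpleName());
        try {
            resolve("InvokableResolveDemo$NotATask", cl);
        } catch (RuntimeException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Note that `Class.forName` takes the binary name (`Outer$Nested` for nested classes), which is why the demo passes `InvokableResolveDemo$GreetingTask` rather than a dotted name.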
public int getParallelism() {
return parallelism;
} | Gets the parallelism of the task.
@return The parallelism of the task. | getParallelism | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public void setParallelism(int parallelism) {
if (parallelism < 1 && parallelism != ExecutionConfig.PARALLELISM_DEFAULT) {
throw new IllegalArgumentException(
"The parallelism must be at least one, or "
+ ExecutionConfig.PARALLELISM_DEFAULT
+ " (unset).");
}
this.parallelism = parallelism;
} | Sets the parallelism for the task.
@param parallelism The parallelism for the task. | setParallelism | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
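The precondition in the setter admits either an explicit value of at least one or the "unset" sentinel. A self-contained sketch of that check (the sentinel value `-1` is hard-coded here in place of `ExecutionConfig.PARALLELISM_DEFAULT`, since this demo does not depend on Flink):

```java
// Sketch of the setParallelism precondition: a value is valid if it is
// at least 1, or equal to the "unset" sentinel.
public class ParallelismCheckDemo {

    static final int PARALLELISM_DEFAULT = -1; // stands in for ExecutionConfig.PARALLELISM_DEFAULT

    private int parallelism = PARALLELISM_DEFAULT;

    public void setParallelism(int parallelism) {
        if (parallelism < 1 && parallelism != PARALLELISM_DEFAULT) {
            throw new IllegalArgumentException(
                    "The parallelism must be at least one, or "
                            + PARALLELISM_DEFAULT
                            + " (unset).");
        }
        this.parallelism = parallelism;
    }

    public int getParallelism() {
        return parallelism;
    }

    public static void main(String[] args) {
        ParallelismCheckDemo vertex = new ParallelismCheckDemo();
        vertex.setParallelism(4);                   // valid explicit value
        vertex.setParallelism(PARALLELISM_DEFAULT); // valid: back to "unset"
        try {
            vertex.setParallelism(0);               // invalid: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```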
public int getMaxParallelism() {
return maxParallelism;
} | Gets the maximum parallelism for the task.
@return The maximum parallelism for the task. | getMaxParallelism | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public void setMaxParallelism(int maxParallelism) {
this.maxParallelism = maxParallelism;
} | Sets the maximum parallelism for the task.
@param maxParallelism The maximum parallelism to be set. Must be between 1 and
Short.MAX_VALUE + 1. | setMaxParallelism | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public ResourceSpec getMinResources() {
return minResources;
} | Gets the minimum resource for the task.
@return The minimum resource for the task. | getMinResources | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public ResourceSpec getPreferredResources() {
return preferredResources;
} | Gets the preferred resource for the task.
@return The preferred resource for the task. | getPreferredResources | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public void setSlotSharingGroup(SlotSharingGroup grp) {
checkNotNull(grp);
if (this.slotSharingGroup != null) {
this.slotSharingGroup.removeVertexFromGroup(this.getID());
}
grp.addVertexToGroup(this.getID());
this.slotSharingGroup = grp;
} | Associates this vertex with a slot sharing group for scheduling. Different vertices in the
same slot sharing group can run one subtask each in the same slot.
@param grp The slot sharing group to associate the vertex with. | setSlotSharingGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
public SlotSharingGroup getSlotSharingGroup() {
if (slotSharingGroup == null) {
// Create a new slot sharing group for this vertex if it was in no other
// slot sharing group. This should only happen in testing cases at the
// moment, because the production code path always sets a value before
// the group is used.
setSlotSharingGroup(new SlotSharingGroup());
}
return slotSharingGroup;
} | Gets the slot sharing group that this vertex is associated with. Different vertices in the
same slot sharing group can run one subtask each in the same slot.
@return The slot sharing group to associate the vertex with | getSlotSharingGroup | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java | Apache-2.0 |
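The setter/getter pair above keeps group membership consistent: setting a new group first removes the vertex from its previous one, and the getter lazily creates a default group when none was assigned. A simplified sketch of that pattern, with illustrative stand-ins for `SlotSharingGroup` and `JobVertexID`:

```java
import java.util.*;

// Sketch of the membership-maintenance pattern in set/getSlotSharingGroup:
// the setter deregisters the vertex from its old group before joining the
// new one, and the getter lazily creates a default group when none is set.
public class SlotSharingDemo {

    /** Stand-in for SlotSharingGroup: just a set of member vertex ids. */
    public static class Group {
        public final Set<String> vertexIds = new HashSet<>();
        void addVertex(String id) { vertexIds.add(id); }
        void removeVertex(String id) { vertexIds.remove(id); }
    }

    private final String id = UUID.randomUUID().toString(); // stand-in for JobVertexID
    private Group group;

    public void setGroup(Group grp) {
        Objects.requireNonNull(grp);
        if (this.group != null) {
            this.group.removeVertex(this.id); // leave the old group first
        }
        grp.addVertex(this.id);
        this.group = grp;
    }

    public Group getGroup() {
        if (group == null) {
            setGroup(new Group()); // lazy default, as in the Flink getter
        }
        return group;
    }

    public static void main(String[] args) {
        SlotSharingDemo vertex = new SlotSharingDemo();
        Group first = vertex.getGroup();  // lazily created, contains the vertex
        Group second = new Group();
        vertex.setGroup(second);          // moves the vertex over
        System.out.println(first.vertexIds.isEmpty()); // prints: true
        System.out.println(second.vertexIds.size());   // prints: 1
    }
}
```

Routing the lazy default through `setGroup` (rather than assigning the field directly) guarantees the new group also registers the vertex, so both code paths leave membership consistent.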