code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
@Override
public Collection<MessagePathParameter<?>> getPathParameters() {
return Arrays.asList(jobPathParameter, checkpointIdPathParameter, jobVertexIdPathParameter);
} | Message parameters for subtask-related checkpoint messages.
<p>The message requires a JobID, a checkpoint ID, and a JobVertexID to be specified. | getPathParameters | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/checkpoints/TaskCheckpointMessageParameters.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/checkpoints/TaskCheckpointMessageParameters.java | Apache-2.0 |
@Override
public Collection<MessagePathParameter<?>> getPathParameters() {
return Arrays.asList(
jobPathParameter,
jobVertexIdPathParameter,
subtaskIndexPathParameter,
subtaskAttemptPathParameter);
} | Message parameters identifying a specific subtask execution attempt. | getPathParameters | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/SubtaskAttemptMessageParameters.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/SubtaskAttemptMessageParameters.java | Apache-2.0 |
@Override
public Collection<MessagePathParameter<?>> getPathParameters() {
return Arrays.asList(jobPathParameter, jobVertexIdPathParameter, subtaskIndexPathParameter);
} | Message parameters for subtask REST handlers. | getPathParameters | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/SubtaskMessageParameters.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/SubtaskMessageParameters.java | Apache-2.0 |
@JsonIgnore
public SerializedValue<CoordinationRequest> getSerializedCoordinationRequest() {
return serializedCoordinationRequest;
} | Request that carries a serialized {@link CoordinationRequest} to a specified coordinator. | getSerializedCoordinationRequest | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/coordination/ClientCoordinationRequestBody.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/coordination/ClientCoordinationRequestBody.java | Apache-2.0 |
@JsonIgnore
public SerializedValue<CoordinationResponse> getSerializedCoordinationResponse() {
return serializedCoordinationResponse;
} | Response that carries a serialized {@link CoordinationResponse} to the client. | getSerializedCoordinationResponse | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/coordination/ClientCoordinationResponseBody.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/coordination/ClientCoordinationResponseBody.java | Apache-2.0 |
@Override
public Collection<MessageQueryParameter<?>> getQueryParameters() {
return Collections.unmodifiableCollection(Arrays.asList(metrics, aggs, selector));
} | Base {@link MessageParameters} class for aggregating metrics. | getQueryParameters | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/metrics/AbstractAggregatedMetricsParameters.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/metrics/AbstractAggregatedMetricsParameters.java | Apache-2.0 |
@Override
public Collection<MessagePathParameter<?>> getPathParameters() {
return Collections.emptyList();
} | Parameters for aggregating task manager metrics. | getPathParameters | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/metrics/AggregateTaskManagerMetricsParameters.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/metrics/AggregateTaskManagerMetricsParameters.java | Apache-2.0 |
@JsonIgnore
public String getSavepointPath() {
return savepointPath;
} | Request body for a savepoint disposal call. | getSavepointPath | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/savepoints/SavepointDisposalRequest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/savepoints/SavepointDisposalRequest.java | Apache-2.0 |
private static void assertNotEndOfInput(
final JsonParser p, @Nullable final JsonToken jsonToken) {
checkState(jsonToken != null, "Unexpected end of input at %s", p.getCurrentLocation());
} | Asserts that the provided JsonToken is not null, i.e., not at the end of the input. | assertNotEndOfInput | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultDeserializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultDeserializer.java | Apache-2.0 |
private static void assertNextToken(final JsonParser p, final JsonToken requiredJsonToken)
throws IOException {
final JsonToken jsonToken = p.nextToken();
if (jsonToken != requiredJsonToken) {
throw new JsonMappingException(
p, String.format("Expected token %s (was %s)", requiredJsonToken, jsonToken));
}
} | Advances the token and asserts that it matches the required {@link JsonToken}. | assertNextToken | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultDeserializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultDeserializer.java | Apache-2.0 |
@Override
public void serialize(
final JobResult result, final JsonGenerator gen, final SerializerProvider provider)
throws IOException {
gen.writeStartObject();
gen.writeFieldName(FIELD_NAME_JOB_ID);
jobIdSerializer.serialize(result.getJobId(), gen, provider);
gen.writeFieldName(FIELD_NAME_APPLICATION_STATUS);
gen.writeString(result.getApplicationStatus().name());
gen.writeFieldName(FIELD_NAME_ACCUMULATOR_RESULTS);
gen.writeStartObject();
final Map<String, SerializedValue<OptionalFailure<Object>>> accumulatorResults =
result.getAccumulatorResults();
for (final Map.Entry<String, SerializedValue<OptionalFailure<Object>>> nameValue :
accumulatorResults.entrySet()) {
final String name = nameValue.getKey();
final SerializedValue<OptionalFailure<Object>> value = nameValue.getValue();
gen.writeFieldName(name);
serializedValueSerializer.serialize(value, gen, provider);
}
gen.writeEndObject();
gen.writeNumberField(FIELD_NAME_NET_RUNTIME, result.getNetRuntime());
if (result.getSerializedThrowable().isPresent()) {
gen.writeFieldName(FIELD_NAME_FAILURE_CAUSE);
final SerializedThrowable serializedThrowable = result.getSerializedThrowable().get();
serializedThrowableSerializer.serialize(serializedThrowable, gen, provider);
}
gen.writeEndObject();
} | JSON serializer for {@link JobResult}.
@see JobResultDeserializer | serialize | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultSerializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobResultSerializer.java | Apache-2.0 |
@Override
public void serialize(JobVertexID value, JsonGenerator gen, SerializerProvider provider)
throws IOException {
gen.writeFieldName(value.toString());
} | Jackson serializer for {@link JobVertexID} used as a key serializer. | serialize | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobVertexIDKeySerializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/JobVertexIDKeySerializer.java | Apache-2.0 |
@Override
public String deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
final JsonNode jsonNode = ctxt.readValue(p, JsonNode.class);
return jsonNode.toString();
} | JSON deserializer that passes the raw JSON through unchanged, as a string. | deserialize | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/RawJsonDeserializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/RawJsonDeserializer.java | Apache-2.0 |
@Override
public void serialize(
final SerializedValue<?> value,
final JsonGenerator gen,
final SerializerProvider provider)
throws IOException {
gen.writeBinary(value.getByteArray());
} | JSON serializer for {@link SerializedValue}.
<p>{@link SerializedValue}'s byte array will be base64 encoded. | serialize | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/SerializedValueSerializer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/SerializedValueSerializer.java | Apache-2.0 |
public HttpResponseStatus getHttpResponseStatus() {
return HttpResponseStatus.valueOf(responseCode);
} | An exception thrown when the failure of a REST operation is detected on the client side. | getHttpResponseStatus | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestClientException.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestClientException.java | Apache-2.0 |
public static ObjectMapper getStrictObjectMapper() {
return strictObjectMapper;
} | Returns a preconfigured strict {@link ObjectMapper}.
@return preconfigured object mapper | getStrictObjectMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestMapperUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestMapperUtils.java | Apache-2.0 |
public static ObjectMapper getFlexibleObjectMapper() {
return flexibleObjectMapper;
} | Returns a preconfigured flexible {@link ObjectMapper}.
@return preconfigured object mapper | getFlexibleObjectMapper | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestMapperUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/util/RestMapperUtils.java | Apache-2.0 |
static <E extends RestAPIVersion<E>> E getLatestVersion(Collection<E> versions) {
return Collections.max(versions);
} | Accepts a collection of versions and returns the latest one.
@return the latest version implementing the RestAPIVersion interface | getLatestVersion | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/rest/versioning/RestAPIVersion.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/rest/versioning/RestAPIVersion.java | Apache-2.0 |
@Nonnull
static LinkedHashMap<JobVertexID, List<SchedulingExecutionVertex>> getExecutionVertices(
SchedulingTopology topology) {
final LinkedHashMap<JobVertexID, List<SchedulingExecutionVertex>> vertices =
new LinkedHashMap<>();
for (SchedulingExecutionVertex executionVertex : topology.getVertices()) {
final List<SchedulingExecutionVertex> executionVertexGroup =
vertices.computeIfAbsent(
executionVertex.getId().getJobVertexId(), k -> new ArrayList<>());
executionVertexGroup.add(executionVertex);
}
return vertices;
} | The vertices are topologically sorted, since {@link DefaultExecutionTopology#getVertices} returns
them in topological order. | getExecutionVertices | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/AbstractSlotSharingStrategy.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/AbstractSlotSharingStrategy.java | Apache-2.0 |
public List<IntermediateDataSetID> getCorruptedClusterDatasetIds() {
return corruptedClusterDatasetIds;
} | Indicates that some tasks failed to consume a cluster dataset. | getCorruptedClusterDatasetIds | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/ClusterDatasetCorruptedException.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/ClusterDatasetCorruptedException.java | Apache-2.0 |
@Override
public DefaultExecutionDeployer createInstance(
Logger log,
ExecutionSlotAllocator executionSlotAllocator,
ExecutionOperations executionOperations,
ExecutionVertexVersioner executionVertexVersioner,
Duration partitionRegistrationTimeout,
BiConsumer<ExecutionVertexID, AllocationID> allocationReservationFunc,
ComponentMainThreadExecutor mainThreadExecutor) {
return new DefaultExecutionDeployer(
log,
executionSlotAllocator,
executionOperations,
executionVertexVersioner,
partitionRegistrationTimeout,
allocationReservationFunc,
mainThreadExecutor);
} | Factory to instantiate the {@link DefaultExecutionDeployer}. | createInstance | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionDeployer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionDeployer.java | Apache-2.0 |
private void tryRestoreExecutionGraphFromSavepoint(
ExecutionGraph executionGraphToRestore,
SavepointRestoreSettings savepointRestoreSettings)
throws Exception {
if (savepointRestoreSettings.restoreSavepoint()) {
final CheckpointCoordinator checkpointCoordinator =
executionGraphToRestore.getCheckpointCoordinator();
if (checkpointCoordinator != null) {
checkpointCoordinator.restoreSavepoint(
savepointRestoreSettings,
executionGraphToRestore.getAllVertices(),
userCodeClassLoader);
}
}
} | Tries to restore the given {@link ExecutionGraph} from the provided {@link
SavepointRestoreSettings}, iff checkpointing is enabled.
@param executionGraphToRestore {@link ExecutionGraph} which is supposed to be restored
@param savepointRestoreSettings {@link SavepointRestoreSettings} containing information about
the savepoint to restore from
@throws Exception if the {@link ExecutionGraph} could not be restored | tryRestoreExecutionGraphFromSavepoint | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionGraphFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionGraphFactory.java | Apache-2.0 |
@Override
public Collection<TaskManagerLocation> getPreferredLocations(
ExecutionVertexID executionVertexId, Set<ExecutionVertexID> producersToIgnore) {
CompletableFuture<Collection<TaskManagerLocation>> preferredLocationsFuture =
asyncPreferredLocationsRetriever.getPreferredLocations(
executionVertexId, producersToIgnore);
Preconditions.checkState(preferredLocationsFuture.isDone());
// it is safe to do the blocking call here
// as the underlying InputsLocationsRetriever returns only immediately available locations
return preferredLocationsFuture.join();
} | Synchronous version of {@link DefaultPreferredLocationsRetriever}.
<p>This class turns {@link DefaultPreferredLocationsRetriever} into {@link
SyncPreferredLocationsRetriever}. The method {@link #getPreferredLocations(ExecutionVertexID,
Set)} does not return {@link CompletableFuture} of preferred locations, it returns only locations
which are available immediately. This behaviour is achieved by wrapping the original {@link
InputsLocationsRetriever} with {@link AvailableInputsLocationsRetriever} and hence making it
synchronous without blocking. As {@link StateLocationRetriever} is already synchronous, the
overall location retrieval becomes synchronous without blocking. | getPreferredLocations | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultSyncPreferredLocationsRetriever.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultSyncPreferredLocationsRetriever.java | Apache-2.0 |
@Override
public SlotProfile getSlotProfile(
ExecutionSlotSharingGroup executionSlotSharingGroup,
ResourceProfile physicalSlotResourceProfile) {
Collection<AllocationID> priorAllocations = new HashSet<>();
Collection<TaskManagerLocation> preferredLocations = new ArrayList<>();
for (ExecutionVertexID execution : executionSlotSharingGroup.getExecutionVertexIds()) {
priorAllocationIdRetriever.apply(execution).ifPresent(priorAllocations::add);
preferredLocations.addAll(
preferredLocationsRetriever.getPreferredLocations(
execution, producersToIgnore));
}
return SlotProfile.priorAllocation(
physicalSlotResourceProfile,
physicalSlotResourceProfile,
preferredLocations,
priorAllocations,
reservedAllocationIds);
} | Computes a {@link SlotProfile} of an execution slot sharing group.
<p>The preferred locations of the {@link SlotProfile} is a union of the preferred
locations of all executions sharing the slot. The input locations within the bulk are
ignored to avoid cyclic dependencies within the region, e.g. in case of all-to-all
pipelined connections, so that the allocations do not block each other.
<p>The preferred {@link AllocationID}s of the {@link SlotProfile} are all previous {@link
AllocationID}s of all executions sharing the slot.
<p>The {@link SlotProfile} also refers to all reserved {@link AllocationID}s of the job.
@param executionSlotSharingGroup executions sharing the slot.
@param physicalSlotResourceProfile {@link ResourceProfile} of the slot.
@return {@link SlotProfile} to allocate for the {@code executionSlotSharingGroup}. | getSlotProfile | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/MergingSharedSlotProfileRetrieverFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/MergingSharedSlotProfileRetrieverFactory.java | Apache-2.0 |
public static VertexParallelismStore computeVertexParallelismStore(
Iterable<JobVertex> vertices,
Function<JobVertex, Integer> defaultMaxParallelismFunc,
Function<Integer, Integer> normalizeParallelismFunc) {
DefaultVertexParallelismStore store = new DefaultVertexParallelismStore();
for (JobVertex vertex : vertices) {
int parallelism = normalizeParallelismFunc.apply(vertex.getParallelism());
int maxParallelism = vertex.getMaxParallelism();
final boolean autoConfigured;
// if no max parallelism was configured by the user, we calculate and set a default
if (maxParallelism == JobVertex.MAX_PARALLELISM_DEFAULT) {
maxParallelism = defaultMaxParallelismFunc.apply(vertex);
autoConfigured = true;
} else {
autoConfigured = false;
}
VertexParallelismInformation parallelismInfo =
new DefaultVertexParallelismInfo(
parallelism,
maxParallelism,
// Allow rescaling if the max parallelism was not set explicitly by the
// user
(newMax) ->
autoConfigured
? Optional.empty()
: Optional.of(
"Cannot override a configured max parallelism."));
store.setParallelismInfo(vertex.getID(), parallelismInfo);
}
return store;
} | Compute the {@link VertexParallelismStore} for all given vertices, which will set defaults
and ensure that the returned store contains valid parallelisms, with a custom function for
default max parallelism calculation and a custom function for normalizing vertex parallelism.
@param vertices the vertices to compute parallelism for
@param defaultMaxParallelismFunc a function for computing a default max parallelism if none
is specified on a given vertex
@param normalizeParallelismFunc a function for normalizing vertex parallelism
@return the computed parallelism store | computeVertexParallelismStore | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java | Apache-2.0 |
public static VertexParallelismStore computeVertexParallelismStore(
Iterable<JobVertex> vertices) {
return computeVertexParallelismStore(vertices, SchedulerBase::getDefaultMaxParallelism);
} | Compute the {@link VertexParallelismStore} for all given vertices, which will set defaults
and ensure that the returned store contains valid parallelisms.
@param vertices the vertices to compute parallelism for
@return the computed parallelism store | computeVertexParallelismStore | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java | Apache-2.0 |
default boolean updateTaskExecutionState(TaskExecutionState taskExecutionState) {
return updateTaskExecutionState(new TaskExecutionStateTransition(taskExecutionState));
} | Interface for scheduling Flink jobs.
<p>Instances are created via {@link SchedulerNGFactory}, and receive a {@link JobGraph} when
instantiated.
<p>Implementations can expect that methods will not be invoked concurrently. In fact, all
invocations will originate from a thread in the {@link ComponentMainThreadExecutor}. | updateTaskExecutionState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | Apache-2.0 |
default JobResourceRequirements requestJobResourceRequirements() {
throw new UnsupportedOperationException(
String.format(
"The %s does not support changing the parallelism without a job restart. This feature is currently only expected to work with the %s.",
getClass().getSimpleName(), AdaptiveScheduler.class.getSimpleName()));
} | Read current {@link JobResourceRequirements job resource requirements}.
@return Current resource requirements. | requestJobResourceRequirements | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | Apache-2.0 |
default void updateJobResourceRequirements(JobResourceRequirements jobResourceRequirements) {
throw new UnsupportedOperationException(
String.format(
"The %s does not support changing the parallelism without a job restart. This feature is currently only expected to work with the %s.",
getClass().getSimpleName(), AdaptiveScheduler.class.getSimpleName()));
} | Update {@link JobResourceRequirements job resource requirements}.
@param jobResourceRequirements new resource requirements | updateJobResourceRequirements | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java | Apache-2.0 |
boolean isEmpty() {
return requestedLogicalSlots.isEmpty();
} | Returns whether the shared slot has no assigned logical slot requests. | isEmpty | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SharedSlot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SharedSlot.java | Apache-2.0 |
public void markFulfilled(ExecutionSlotSharingGroup group, AllocationID allocationId) {
pendingRequests.remove(group);
fulfilledRequests.put(group, allocationId);
} | Moves a pending request to fulfilled.
@param group {@link ExecutionSlotSharingGroup} of the pending request
@param allocationId {@link AllocationID} of the fulfilled request | markFulfilled | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SharingPhysicalSlotRequestBulk.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SharingPhysicalSlotRequestBulk.java | Apache-2.0 |
@Override
public Map<ExecutionAttemptID, ExecutionSlotAssignment> allocateSlotsFor(
List<ExecutionAttemptID> executionAttemptIds) {
final Map<ExecutionVertexID, ExecutionAttemptID> vertexIdToExecutionId = new HashMap<>();
executionAttemptIds.forEach(
executionId ->
vertexIdToExecutionId.put(executionId.getExecutionVertexId(), executionId));
checkState(
vertexIdToExecutionId.size() == executionAttemptIds.size(),
"SlotSharingExecutionSlotAllocator does not support one execution vertex to have multiple concurrent executions");
final List<ExecutionVertexID> vertexIds =
executionAttemptIds.stream()
.map(ExecutionAttemptID::getExecutionVertexId)
.collect(Collectors.toList());
return allocateSlotsForVertices(vertexIds).stream()
.collect(
Collectors.toMap(
vertexAssignment ->
vertexIdToExecutionId.get(
vertexAssignment.getExecutionVertexId()),
vertexAssignment ->
new ExecutionSlotAssignment(
vertexIdToExecutionId.get(
vertexAssignment.getExecutionVertexId()),
vertexAssignment.getLogicalSlotFuture())));
} | Allocates {@link LogicalSlot}s from physical shared slots.
<p>The allocator maintains a shared slot for each {@link ExecutionSlotSharingGroup}. It allocates
a physical slot for the shared slot and then allocates logical slots from it for scheduled tasks.
The physical slot is lazily allocated for a shared slot, upon any hosted subtask asking for the
shared slot. Each subsequent sharing subtask allocates a logical slot from the existing shared
slot. The shared/physical slot can be released only if all the requested logical slots are
released or canceled. | allocateSlotsFor | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SlotSharingExecutionSlotAllocator.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SlotSharingExecutionSlotAllocator.java | Apache-2.0 |
private static boolean containsIntraRegionAllToAllEdge(
DefaultLogicalPipelinedRegion logicalPipelinedRegion) {
for (LogicalVertex vertex : logicalPipelinedRegion.getVertices()) {
for (LogicalEdge inputEdge : vertex.getInputs()) {
if (inputEdge.getDistributionPattern() == DistributionPattern.ALL_TO_ALL
&& logicalPipelinedRegion.contains(inputEdge.getProducerVertexId())) {
return true;
}
}
}
return false;
} | Checks whether the {@link DefaultLogicalPipelinedRegion} contains any intra-region
all-to-all edges. | containsIntraRegionAllToAllEdge | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adapter/DefaultExecutionTopology.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adapter/DefaultExecutionTopology.java | Apache-2.0 |
@VisibleForTesting
static VertexParallelismStore computeReactiveModeVertexParallelismStore(
Iterable<JobVertex> vertices,
Function<JobVertex, Integer> defaultMaxParallelismFunc,
boolean adjustParallelism) {
DefaultVertexParallelismStore store = new DefaultVertexParallelismStore();
for (JobVertex vertex : vertices) {
// if no max parallelism was configured by the user, we calculate and set a default
final int maxParallelism =
vertex.getMaxParallelism() == JobVertex.MAX_PARALLELISM_DEFAULT
? defaultMaxParallelismFunc.apply(vertex)
: vertex.getMaxParallelism();
// If the parallelism has already been adjusted, respect what has been configured in the
// vertex. Otherwise, scale it to the max parallelism to attempt to be "as parallel as
// possible"
final int parallelism;
if (adjustParallelism) {
parallelism = maxParallelism;
} else {
parallelism = vertex.getParallelism();
}
VertexParallelismInformation parallelismInfo =
new DefaultVertexParallelismInfo(
parallelism,
maxParallelism,
// Allow rescaling if the new desired max parallelism
// is not less than what was declared here during scheduling.
// This prevents the situation where more resources are requested
// based on the computed default, when actually fewer are necessary.
(newMax) ->
newMax >= maxParallelism
? Optional.empty()
: Optional.of(
"Cannot lower max parallelism in Reactive mode."));
store.setParallelismInfo(vertex.getID(), parallelismInfo);
}
return store;
} | Creates the parallelism store for a set of vertices, optionally with a flag to leave the
vertex parallelism unchanged. If the flag is set, the parallelisms must be valid for
execution.
<p>We need to set parallelism to the max possible value when requesting resources, but when
executing the graph we should respect what we are actually given.
@param vertices The vertices to store parallelism information for
@param adjustParallelism Whether to adjust the parallelism
@param defaultMaxParallelismFunc a function for computing a default max parallelism if none
is specified on a given vertex
@return The parallelism store. | computeReactiveModeVertexParallelismStore | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | Apache-2.0 |
private static VertexParallelismStore computeVertexParallelismStore(
JobGraph jobGraph, SchedulerExecutionMode executionMode) {
if (executionMode == SchedulerExecutionMode.REACTIVE) {
return computeReactiveModeVertexParallelismStore(
jobGraph.getVertices(), SchedulerBase::getDefaultMaxParallelism, true);
}
return SchedulerBase.computeVertexParallelismStore(jobGraph);
} | Creates the parallelism store that should be used for determining scheduling requirements,
which may choose different parallelisms than set in the {@link JobGraph} depending on the
execution mode.
@param jobGraph The job graph for execution.
@param executionMode The mode of scheduler execution.
@return The parallelism store. | computeVertexParallelismStore | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | Apache-2.0 |
private void checkIdleSlotTimeout() {
if (getState().getJobStatus().isGloballyTerminalState()) {
// Job has reached the terminal state, so we can return all slots to the ResourceManager
// to speed things up because we no longer need them. This optimization lets us skip
// waiting for the slot pool service to close.
for (SlotInfo slotInfo : declarativeSlotPool.getAllSlotsInformation()) {
declarativeSlotPool.releaseSlot(
slotInfo.getAllocationId(),
new FlinkException(
"Returning slots to their owners, because the job has reached a globally terminal state."));
}
return;
} else if (getState().getJobStatus().isTerminalState()) {
// do nothing
// prevent idleness check running again while scheduler was already shut down
// don't release slots because JobMaster may want to hold on to slots in case
// it re-acquires leadership
return;
}
declarativeSlotPool.releaseIdleSlots(System.currentTimeMillis());
getMainThreadExecutor()
.schedule(
this::checkIdleSlotTimeout,
settings.getSlotIdleTimeout().toMillis(),
TimeUnit.MILLISECONDS);
} | Check for slots that are idle for more than {@link JobManagerOptions#SLOT_IDLE_TIMEOUT} and
release them back to the ResourceManager. | checkIdleSlotTimeout | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/AdaptiveScheduler.java | Apache-2.0 |
<V> BackgroundTask<V> runAfter(
SupplierWithException<? extends V, ? extends Exception> task, Executor executor) {
return new BackgroundTask<>(terminationFuture, task, executor);
} | Runs the given task after this background task has completed (normally or exceptionally).
@param task task to run after this background task has completed
@param executor executor to run the task
@param <V> type of the result
@return new {@link BackgroundTask} representing the new task to execute | runAfter | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | Apache-2.0 |
static <V> BackgroundTask<V> finishedBackgroundTask() {
return new BackgroundTask<>();
} | Creates a finished background task which can be used as the start of a background task chain.
@param <V> type of the background task
@return A finished background task | finishedBackgroundTask | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | Apache-2.0 |
static <V> BackgroundTask<V> initialBackgroundTask(
SupplierWithException<? extends V, ? extends Exception> task, Executor executor) {
return new BackgroundTask<>(FutureUtils.completedVoidFuture(), task, executor);
} | Creates an initial background task. This means that this background task has no predecessor.
@param task task to run
@param executor executor to run the task
@param <V> type of the result
@return initial {@link BackgroundTask} representing the task to execute | initialBackgroundTask | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/BackgroundTask.java | Apache-2.0 |
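The factory methods and {@code runAfter} above describe a chain in which each task starts only after its predecessor has terminated, normally or exceptionally. A standalone sketch of that idea with {@code CompletableFuture} (class and method names here are hypothetical, not Flink's actual {@code BackgroundTask} implementation):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.Executor;

// Minimal sketch of a BackgroundTask-style chain: each task starts only
// after the predecessor's future completes, normally or exceptionally.
public class BackgroundChain<V> {
    private final CompletableFuture<V> resultFuture;

    private BackgroundChain(
            CompletableFuture<?> predecessorTermination, Callable<V> task, Executor executor) {
        // handle(...) completes on both normal and exceptional termination,
        // so the next task runs either way.
        this.resultFuture =
                predecessorTermination
                        .handle((ignored, error) -> (Void) null)
                        .thenApplyAsync(
                                ignored -> {
                                    try {
                                        return task.call();
                                    } catch (Exception e) {
                                        throw new CompletionException(e);
                                    }
                                },
                                executor);
    }

    /** Start of a chain: the predecessor is an already-completed future. */
    public static <V> BackgroundChain<V> initial(Callable<V> task, Executor executor) {
        return new BackgroundChain<>(CompletableFuture.completedFuture(null), task, executor);
    }

    /** Runs the given task after this one has completed (normally or exceptionally). */
    public <W> BackgroundChain<W> runAfter(Callable<W> task, Executor executor) {
        return new BackgroundChain<>(resultFuture, task, executor);
    }

    public CompletableFuture<V> getResultFuture() {
        return resultFuture;
    }
}
```

With a direct executor, a successor still runs even when its predecessor threw, which matches the "completed (normally or exceptionally)" contract in the docstring above.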
@Override
public JobStatus getJobStatus() {
return JobStatus.INITIALIZING;
} | Initial state of the {@link AdaptiveScheduler}. | getJobStatus | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Created.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Created.java | Apache-2.0 |
@Override
void onChange() {
if (hasSufficientResources()) {
context().progressToStabilizing(now());
}
} | {@link Phase} which follows the {@link Cooldown} phase if no {@link
StateTransitionManager#onChange()} was observed yet. The {@code
DefaultStateTransitionManager} waits for a first {@link StateTransitionManager#onChange()}
event. {@link StateTransitionManager#onTrigger()} events will be ignored. | onChange | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/DefaultStateTransitionManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/DefaultStateTransitionManager.java | Apache-2.0 |
@Override
void onTrigger() {
if (hasSufficientResources()) {
LOG.info(
"Sufficient resources are met, progressing to subsequent state, job {}.",
getJobId());
context().triggerTransitionToSubsequentState();
} else {
LOG.debug(
"Sufficient resources are not met, progressing to idling, job {}.",
getJobId());
context().progressToIdling();
}
} | {@link Phase} that handles the post-stabilization phase. A {@link
StateTransitionManager#onTrigger()} event initiates rescaling if sufficient resources are
available; otherwise transitioning to {@link Idling} will be performed. | onTrigger | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/DefaultStateTransitionManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/DefaultStateTransitionManager.java | Apache-2.0 |
static FailureResult canRestart(Throwable failureCause, Duration backoffTime) {
return new FailureResult(failureCause, backoffTime);
    } | Creates a FailureResult which allows restarting the job.
@param failureCause failureCause for restarting the job
@param backoffTime backoffTime to wait before restarting the job
@return FailureResult which allows to restart the job | canRestart | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResult.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResult.java | Apache-2.0 |
static FailureResult canNotRestart(Throwable failureCause) {
return new FailureResult(failureCause, null);
    } | Creates a FailureResult which does not allow restarting the job.
@param failureCause failureCause describes the reason why the job cannot be restarted
@return FailureResult which does not allow to restart the job | canNotRestart | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResult.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResult.java | Apache-2.0 |
public static <T extends StateTransitions.ToRestarting & StateTransitions.ToFailing>
void restartOrFail(
FailureResult failureResult, T context, StateWithExecutionGraph sweg) {
if (failureResult.canRestart()) {
sweg.getLogger().info("Restarting job.", failureResult.getFailureCause());
context.goToRestarting(
sweg.getExecutionGraph(),
sweg.getExecutionGraphHandler(),
sweg.getOperatorCoordinatorHandler(),
failureResult.getBackoffTime(),
null,
sweg.getFailures());
} else {
sweg.getLogger().info("Failing job.", failureResult.getFailureCause());
context.goToFailing(
sweg.getExecutionGraph(),
sweg.getExecutionGraphHandler(),
sweg.getOperatorCoordinatorHandler(),
failureResult.getFailureCause(),
sweg.getFailures());
}
} | {@link FailureResultUtil} contains helper methods for {@link FailureResult}. | restartOrFail | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResultUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/FailureResultUtil.java | Apache-2.0 |
default <T> Optional<T> as(Class<? extends T> clazz) {
if (clazz.isAssignableFrom(this.getClass())) {
return Optional.of(clazz.cast(this));
} else {
return Optional.empty();
}
} | Tries to cast this state into a type of the given clazz.
@param clazz clazz describes the target type
@param <T> target type
@return {@link Optional#of} target type if the underlying type can be cast into clazz;
otherwise {@link Optional#empty()} | as | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | Apache-2.0 |
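The {@code as} method above is a plain safe-cast pattern: cast only when the runtime type is compatible, otherwise return {@code Optional.empty()}. A minimal standalone sketch (a static helper standing in for the default interface method):

```java
import java.util.Optional;

// Sketch of the safe-cast pattern used by State#as.
public class SafeCast {
    public static <T> Optional<T> as(Object self, Class<? extends T> clazz) {
        // isAssignableFrom checks the runtime type, so no ClassCastException
        // can escape from clazz.cast below.
        if (clazz.isAssignableFrom(self.getClass())) {
            return Optional.of(clazz.cast(self));
        }
        return Optional.empty();
    }
}
```

The {@code tryRun}/{@code tryCall} variants that follow are just this pattern plus an action on the present value and a log message on the empty case.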
default <T, E extends Exception> void tryRun(
Class<? extends T> clazz, ThrowingConsumer<T, E> action, String debugMessage) throws E {
tryRun(
clazz,
x -> {
getLogger()
.debug(
"Running '{}' in state {}.",
debugMessage,
this.getClass().getSimpleName());
ThrowingConsumer.unchecked(action).accept(x);
},
logger ->
logger.debug(
"Cannot run '{}' because the actual state is {} and not {}.",
debugMessage,
this.getClass().getSimpleName(),
clazz.getSimpleName()));
} | Tries to run the action if this state is of type clazz.
@param clazz clazz describes the target type
@param action action to run if this state is of the target type
@param debugMessage debugMessage which is printed if this state is not the target type
@param <T> target type
@param <E> error type
@throws E an exception if the action fails | tryRun | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | Apache-2.0 |
default <T, E extends Exception> void tryRun(
Class<? extends T> clazz,
ThrowingConsumer<T, E> action,
Consumer<Logger> invalidStateCallback)
throws E {
final Optional<? extends T> asOptional = as(clazz);
if (asOptional.isPresent()) {
action.accept(asOptional.get());
} else {
invalidStateCallback.accept(getLogger());
}
} | Tries to run the action if this state is of type clazz.
@param clazz clazz describes the target type
@param action action to run if this state is of the target type
@param invalidStateCallback callback that is called if the state doesn't match the expected one.
@param <T> target type
@param <E> error type
@throws E an exception if the action fails | tryRun | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | Apache-2.0 |
default <T, V, E extends Exception> Optional<V> tryCall(
Class<? extends T> clazz, FunctionWithException<T, V, E> action, String debugMessage)
throws E {
final Optional<? extends T> asOptional = as(clazz);
if (asOptional.isPresent()) {
return Optional.of(action.apply(asOptional.get()));
} else {
getLogger()
.debug(
"Cannot run '{}' because the actual state is {} and not {}.",
debugMessage,
this.getClass().getSimpleName(),
clazz.getSimpleName());
return Optional.empty();
}
} | Tries to run the action if this state is of type clazz.
@param clazz clazz describes the target type
@param action action to run if this state is of the target type
@param debugMessage debugMessage which is printed if this state is not the target type
@param <T> target type
@param <V> value type
@param <E> error type
@return {@link Optional#of} the action result if it is successfully executed; otherwise
{@link Optional#empty()}
@throws E an exception if the action fails | tryCall | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/State.java | Apache-2.0 |
@Override
public void handleGlobalFailure(
Throwable cause, CompletableFuture<Map<String, String>> failureLabels) {
failureCollection.add(ExceptionHistoryEntry.createGlobal(cause, failureLabels));
onFailure(cause, failureLabels);
} | Transition to different state when the execution graph reaches a globally terminal state.
@param globallyTerminalState globally terminal state which the execution graph reached | handleGlobalFailure | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StateWithExecutionGraph.java | Apache-2.0 |
@Override
public void onLeave(Class<? extends State> newState) {
this.operationFuture.completeExceptionally(
new FlinkException(
"Stop with savepoint operation could not be completed.",
operationFailureCause));
super.onLeave(newState);
} | The result future of this operation, containing the path to the savepoint. This is the future
that other components (e.g., the REST API) wait for.
<p>Must only be completed successfully if the savepoint was created and the job has FINISHED. | onLeave | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StopWithSavepoint.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/StopWithSavepoint.java | Apache-2.0 |
private Iterator<TaskManagerLocation> getSortedTaskExecutors(
Map<TaskManagerLocation, ? extends Set<? extends SlotInfo>> slotsPerTaskExecutor) {
final Comparator<TaskManagerLocation> taskExecutorComparator =
(leftTml, rightTml) ->
Integer.compare(
slotsPerTaskExecutor.get(rightTml).size(),
slotsPerTaskExecutor.get(leftTml).size());
return slotsPerTaskExecutor.keySet().stream().sorted(taskExecutorComparator).iterator();
    } | In order to minimize the use of task executors on the resource manager side in
application mode, and to release task executors in a timely manner, it is preferable to
prioritize selecting slots on task executors with the most available slots.
@param slotsPerTaskExecutor The slots per task manager.
@return The ordered task manager that orders by the number of free slots descending. | getSortedTaskExecutors | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/DefaultSlotAssigner.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/allocator/DefaultSlotAssigner.java | Apache-2.0 |
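The comparator above orders task executors by free-slot count, descending. A self-contained sketch of the same ordering, with plain strings standing in for {@code TaskManagerLocation} (names are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of DefaultSlotAssigner-style ordering: executors with the most
// free slots come first, so sparsely used executors can be drained and
// released earlier.
public class ExecutorOrdering {
    public static List<String> sortByFreeSlotsDesc(
            Map<String, ? extends Set<?>> slotsPerExecutor) {
        return slotsPerExecutor.keySet().stream()
                .sorted(
                        (left, right) ->
                                Integer.compare(
                                        slotsPerExecutor.get(right).size(),
                                        slotsPerExecutor.get(left).size()))
                .collect(Collectors.toList());
    }
}
```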
protected void onFineGrainedSubpartitionBytesNotNeeded() {
if (subpartitionBytesByPartitionIndex.size() == numOfPartitions) {
if (this.aggregatedSubpartitionBytes == null) {
this.aggregatedSubpartitionBytes = getAggregatedSubpartitionBytesInternal();
}
this.subpartitionBytesByPartitionIndex.clear();
}
} | This method should be called when fine-grained information is no longer needed. It will
aggregate and clear the fine-grained subpartition bytes to reduce space usage.
<p>Once all partitions are finished and all consumer jobVertices are initialized, we can
convert the subpartition bytes to aggregated value to reduce the space usage, because the
distribution of source splits does not affect the distribution of data consumed by downstream
tasks of ALL_TO_ALL edges(Hashing or Rebalancing, we do not consider rare cases such as
custom partitions here). | onFineGrainedSubpartitionBytesNotNeeded | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AllToAllBlockingResultInfo.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/AllToAllBlockingResultInfo.java | Apache-2.0 |
public void initializeStrategies(StreamGraphContext context) {
checkNotNull(optimizationStrategies).forEach(strategy -> strategy.initialize(context));
} | The {@code StreamGraphOptimizer} class is responsible for optimizing a StreamGraph based on
runtime information.
<p>Upon initialization, it obtains a {@code StreamGraphContext} from the {@code
AdaptiveGraphManager} and loads the specified optimization strategies. At runtime, it applies
these strategies sequentially to the StreamGraph using the provided context and information about
finished operators. | initializeStrategies | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/StreamGraphOptimizer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/StreamGraphOptimizer.java | Apache-2.0 |
private static Map<Integer, List<SubpartitionSlice>>
createBalancedSubpartitionSlicesForInputsWithInterKeysCorrelation(
int subpartitionIndex,
Map<Integer, AggregatedBlockingInputInfo> aggregatedInputInfoByTypeNumber) {
Map<Integer, List<SubpartitionSlice>> subpartitionSlices = new HashMap<>();
IndexRange subpartitionRange = new IndexRange(subpartitionIndex, subpartitionIndex);
for (Map.Entry<Integer, AggregatedBlockingInputInfo> entry :
aggregatedInputInfoByTypeNumber.entrySet()) {
Integer typeNumber = entry.getKey();
AggregatedBlockingInputInfo aggregatedBlockingInputInfo = entry.getValue();
if (aggregatedBlockingInputInfo.isSplittable()
&& aggregatedBlockingInputInfo.isSkewedSubpartition(subpartitionIndex)) {
List<IndexRange> partitionRanges =
computePartitionRangesEvenlyData(
subpartitionIndex,
aggregatedBlockingInputInfo.getTargetSize(),
aggregatedBlockingInputInfo.getSubpartitionBytesByPartition());
subpartitionSlices.put(
typeNumber,
createSubpartitionSlicesByMultiPartitionRanges(
partitionRanges,
subpartitionRange,
aggregatedBlockingInputInfo.getSubpartitionBytesByPartition()));
} else {
IndexRange partitionRange =
new IndexRange(0, aggregatedBlockingInputInfo.getMaxPartitionNum() - 1);
subpartitionSlices.put(
typeNumber,
Collections.singletonList(
SubpartitionSlice.createSubpartitionSlice(
partitionRange,
subpartitionRange,
aggregatedBlockingInputInfo.getAggregatedSubpartitionBytes(
subpartitionIndex))));
}
}
return subpartitionSlices;
} | Creates balanced subpartition slices for inputs with inter-key correlations.
<p>This method generates a mapping of subpartition indices to lists of subpartition slices,
ensuring balanced distribution of input data. When a subpartition is splittable and has data
skew, we will split it into n continuous and balanced parts (by splitting its partition range).
If the input is not splittable, this step will be skipped, and subpartitions with the same
index will be aggregated into a single SubpartitionSlice.
@param subpartitionIndex the index of the subpartition being processed.
@param aggregatedInputInfoByTypeNumber a map of aggregated blocking input info, keyed by
input type number.
@return a map where the key is the input type number and the value is a list of subpartition
slices for the specified subpartition. | createBalancedSubpartitionSlicesForInputsWithInterKeysCorrelation | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/AllToAllVertexInputInfoComputer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/AllToAllVertexInputInfoComputer.java | Apache-2.0 |
public static <T> List<List<T>> cartesianProduct(List<List<T>> lists) {
List<List<T>> resultLists = new ArrayList<>();
if (lists.isEmpty()) {
resultLists.add(new ArrayList<>());
return resultLists;
} else {
List<T> firstList = lists.get(0);
List<List<T>> remainingLists = cartesianProduct(lists.subList(1, lists.size()));
for (T condition : firstList) {
for (List<T> remainingList : remainingLists) {
ArrayList<T> resultList = new ArrayList<>();
resultList.add(condition);
resultList.addAll(remainingList);
resultLists.add(resultList);
}
}
}
return resultLists;
} | Computes the Cartesian product of a list of lists.
<p>The Cartesian product is a set of all possible combinations formed by picking one element
from each list. For example, given input lists [[1, 2], [3, 4]], the result will be [[1, 3],
[1, 4], [2, 3], [2, 4]].
<p>Note: If the input list is empty or contains an empty list, the result will be an empty
list.
@param <T> the type of elements in the lists
@param lists a list of lists for which the Cartesian product is to be computed
@return a list of lists representing the Cartesian product, where each inner list is a
combination | cartesianProduct | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | Apache-2.0 |
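The recursion above can be exercised directly; a self-contained copy for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class Cartesian {
    // Recursive Cartesian product, matching the method shown above:
    // prepend each element of the first list to every combination of the rest.
    public static <T> List<List<T>> cartesianProduct(List<List<T>> lists) {
        List<List<T>> result = new ArrayList<>();
        if (lists.isEmpty()) {
            result.add(new ArrayList<>());
            return result;
        }
        List<T> first = lists.get(0);
        List<List<T>> rest = cartesianProduct(lists.subList(1, lists.size()));
        for (T head : first) {
            for (List<T> tail : rest) {
                List<T> combination = new ArrayList<>();
                combination.add(head);
                combination.addAll(tail);
                result.add(combination);
            }
        }
        return result;
    }
}
```

For the documented input [[1, 2], [3, 4]] this yields [[1, 3], [1, 4], [2, 3], [2, 4]], and any empty inner list collapses the result to an empty list, as the docstring notes.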
public static long median(long[] nums) {
int len = nums.length;
long[] sortedNums = LongStream.of(nums).sorted().toArray();
if (len % 2 == 0) {
return Math.max((sortedNums[len / 2] + sortedNums[len / 2 - 1]) / 2, 1L);
} else {
return Math.max(sortedNums[len / 2], 1L);
}
} | Calculates the median of a given array of long integers. If the calculated median is less
than 1, it returns 1 instead.
@param nums an array of long integers for which to calculate the median.
@return the median value, which will be at least 1. | median | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | Apache-2.0 |
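A standalone copy of the median-with-floor logic, useful for checking the documented behavior (even-length arrays average the two middle values; the result is clamped to at least 1):

```java
import java.util.stream.LongStream;

public class MedianUtil {
    // Median of a long[] with a floor of 1, as in the method above.
    public static long median(long[] nums) {
        long[] sorted = LongStream.of(nums).sorted().toArray();
        int len = sorted.length;
        if (len % 2 == 0) {
            // Even length: average the two middle elements, floor at 1.
            return Math.max((sorted[len / 2] + sorted[len / 2 - 1]) / 2, 1L);
        }
        return Math.max(sorted[len / 2], 1L);
    }
}
```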
public static long computeSkewThreshold(
long medianSize, double skewedFactor, long defaultSkewedThreshold) {
return (long) Math.max(medianSize * skewedFactor, defaultSkewedThreshold);
    } | Computes the skew threshold based on the given median size and skewed factor.
<p>The skew threshold is calculated as the product of the median size and the skewed factor.
To ensure that the computed threshold does not fall below a specified default value, the
method uses {@link Math#max} to return the largest of the calculated threshold and the
default threshold.
@param medianSize the size of the median
@param skewedFactor a factor indicating the degree of skewness
@param defaultSkewedThreshold the default threshold to be used if the calculated threshold is
less than this value
@return the computed skew threshold, which is guaranteed to be at least the default skewed
threshold. | computeSkewThreshold | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | Apache-2.0 |
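The threshold computation is a one-liner; a standalone copy showing how the default threshold acts as a floor:

```java
public class SkewThreshold {
    // Skew threshold = max(median * factor, default), as in the method above.
    // A subpartition larger than this threshold is considered skewed.
    public static long computeSkewThreshold(
            long medianSize, double skewedFactor, long defaultSkewedThreshold) {
        return (long) Math.max(medianSize * skewedFactor, defaultSkewedThreshold);
    }
}
```

With a median of 100 bytes, a factor of 4.0, and a default of 500, the default wins (500); once the median grows to 200, the scaled value wins (800).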
private static Map<IndexRange, IndexRange> computeConsumedSubpartitionGroups(
IndexRange subpartitionSliceRange,
List<SubpartitionSlice> subpartitionSlices,
int numPartitions,
boolean isPointwise) {
Map<IndexRange, List<IndexRange>> rangeMap =
new TreeMap<>(Comparator.comparingInt(IndexRange::getStartIndex));
for (int i = subpartitionSliceRange.getStartIndex();
i <= subpartitionSliceRange.getEndIndex();
++i) {
SubpartitionSlice subpartitionSlice = subpartitionSlices.get(i);
IndexRange keyRange, valueRange;
if (isPointwise) {
keyRange = subpartitionSlice.getPartitionRange(numPartitions);
valueRange = subpartitionSlice.getSubpartitionRange();
} else {
keyRange = subpartitionSlice.getSubpartitionRange();
valueRange = subpartitionSlice.getPartitionRange(numPartitions);
}
rangeMap.computeIfAbsent(keyRange, k -> new ArrayList<>()).add(valueRange);
}
rangeMap =
rangeMap.entrySet().stream()
.collect(
Collectors.toMap(
Map.Entry::getKey,
entry -> mergeIndexRanges(entry.getValue())));
        // reverse the map to merge keys associated with the same value
Map<IndexRange, List<IndexRange>> reversedRangeMap = new HashMap<>();
for (Map.Entry<IndexRange, List<IndexRange>> entry : rangeMap.entrySet()) {
IndexRange valueRange = entry.getKey();
for (IndexRange keyRange : entry.getValue()) {
reversedRangeMap.computeIfAbsent(keyRange, k -> new ArrayList<>()).add(valueRange);
}
}
Map<IndexRange, IndexRange> mergedReversedRangeMap =
reversedRangeMap.entrySet().stream()
.collect(
Collectors.toMap(
Map.Entry::getKey,
entry -> {
List<IndexRange> mergedRange =
mergeIndexRanges(entry.getValue());
checkState(mergedRange.size() == 1);
return mergedRange.get(0);
}));
if (isPointwise) {
return reverseIndexRangeMap(mergedReversedRangeMap);
}
return mergedReversedRangeMap;
    } | Merge the subpartition slices of the specified range into an index range map, in which the key
is the partition index range and the value is the subpartition range.
<p>Note: In existing algorithms, the consumed subpartition groups for POINTWISE always ensure
that there is no overlap in the partition ranges, while for ALL_TO_ALL, the consumed
subpartition groups always ensure that there is no overlap in the subpartition ranges. For
example, if a task needs to subscribe to {[0,0]->[0,1] ,[1,1]->[0]} (partition range to
subpartition range), for POINT WISE it will be: {[0,0]->[0,1], [1,1]->[0,0]}, for ALL_TO-ALL
it will be: {[0,1]->[0,0], [0,0]->[1,1]}.The result of this method will also follow this
convention.
@param subpartitionSliceRange the range of subpartition slices to be merged
@param subpartitionSlices subpartition slices
@param numPartitions the real number of partitions of input info, use to correct the
partition range
@param isPointwise whether the input info is pointwise
@return a map indicating the ranges that task needs to consume, the key is partition range
and the value is subpartition range. | computeConsumedSubpartitionGroups | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptivebatch/util/VertexParallelismAndInputInfosDeciderUtils.java | Apache-2.0 |
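The method above leans on a {@code mergeIndexRanges} helper that is not shown here. A hedged sketch of what such a helper plausibly does, merging overlapping or adjacent [start, end] ranges (the real Flink helper works on {@code IndexRange} objects; {@code int[]} pairs are used here for brevity):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of an index-range merge: sort ranges by start index,
// then fold together any range that overlaps or directly touches the
// previous one (end + 1 == next start).
public class RangeMerge {
    public static List<int[]> mergeIndexRanges(List<int[]> ranges) {
        List<int[]> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparingInt(r -> r[0]));
        List<int[]> merged = new ArrayList<>();
        for (int[] range : sorted) {
            if (!merged.isEmpty() && range[0] <= merged.get(merged.size() - 1)[1] + 1) {
                // Overlapping or adjacent: extend the previous range.
                int[] last = merged.get(merged.size() - 1);
                last[1] = Math.max(last[1], range[1]);
            } else {
                merged.add(new int[] {range[0], range[1]});
            }
        }
        return merged;
    }
}
```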
public static ExceptionHistoryEntry create(
AccessExecution failedExecution,
String taskName,
CompletableFuture<Map<String, String>> failureLabels) {
Preconditions.checkNotNull(failedExecution, "No Execution is specified.");
Preconditions.checkNotNull(taskName, "No task name is specified.");
Preconditions.checkArgument(
failedExecution.getFailureInfo().isPresent(),
"The selected Execution " + failedExecution.getAttemptId() + " didn't fail.");
final ErrorInfo failure = failedExecution.getFailureInfo().get();
return new ExceptionHistoryEntry(
failure.getException(),
failure.getTimestamp(),
failureLabels,
taskName,
failedExecution.getAssignedResourceLocation());
} | Creates an {@code ExceptionHistoryEntry} based on the provided {@code Execution}.
@param failedExecution the failed {@code Execution}.
@param taskName the name of the task.
@param failureLabels the labels associated with the failure.
@return The {@code ExceptionHistoryEntry}.
@throws NullPointerException if {@code null} is passed as one of the parameters.
@throws IllegalArgumentException if the passed {@code Execution} does not provide a {@link
Execution#getFailureInfo() failureInfo}. | create | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | Apache-2.0 |
public static ExceptionHistoryEntry createGlobal(
Throwable cause, CompletableFuture<Map<String, String>> failureLabels) {
return new ExceptionHistoryEntry(
cause,
System.currentTimeMillis(),
failureLabels,
null,
(ArchivedTaskManagerLocation) null);
} | Creates an {@code ExceptionHistoryEntry} that is not based on an {@code Execution}. | createGlobal | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | Apache-2.0 |
public Map<String, String> getFailureLabels() {
return Optional.ofNullable(failureLabels).orElse(Collections.emptyMap());
    } | Returns the labels associated with the failure, which are set as soon as failureLabelsFuture is
completed. When failureLabelsFuture is not completed, it returns an empty map.
@return Map of failure labels | getFailureLabels | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | Apache-2.0 |
public CompletableFuture<Map<String, String>> getFailureLabelsFuture() {
return failureLabelsFuture;
} | Returns the labels future associated with the failure.
@return CompletableFuture of Map failure labels | getFailureLabelsFuture | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/ExceptionHistoryEntry.java | Apache-2.0 |
public static FailureHandlingResultSnapshot create(
FailureHandlingResult failureHandlingResult,
Function<ExecutionVertexID, Collection<Execution>> currentExecutionsLookup) {
final Execution rootCauseExecution =
failureHandlingResult.getFailedExecution().orElse(null);
if (rootCauseExecution != null && !rootCauseExecution.getFailureInfo().isPresent()) {
throw new IllegalArgumentException(
String.format(
"The failed execution %s didn't provide a failure info.",
rootCauseExecution.getAttemptId()));
}
final Set<Execution> concurrentlyFailedExecutions =
failureHandlingResult.getVerticesToRestart().stream()
.flatMap(id -> currentExecutionsLookup.apply(id).stream())
.filter(execution -> execution != rootCauseExecution)
.filter(execution -> execution.getFailureInfo().isPresent())
.collect(Collectors.toSet());
return new FailureHandlingResultSnapshot(
rootCauseExecution,
ErrorInfo.handleMissingThrowable(failureHandlingResult.getError()),
failureHandlingResult.getTimestamp(),
failureHandlingResult.getFailureLabels(),
concurrentlyFailedExecutions,
failureHandlingResult.isRootCause());
} | Creates a {@code FailureHandlingResultSnapshot} based on the passed {@link
FailureHandlingResult} and {@link ExecutionVertex ExecutionVertices}.
@param failureHandlingResult The {@code FailureHandlingResult} that is used for extracting
the failure information.
@param currentExecutionsLookup The look-up function for retrieving all the current {@link
Execution} instances for a given {@link ExecutionVertexID}.
@return The {@code FailureHandlingResultSnapshot}. | create | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | Apache-2.0 |
public Optional<Execution> getRootCauseExecution() {
return Optional.ofNullable(rootCauseExecution);
} | Returns the {@link Execution} that handled the root cause for this failure. An empty {@code
Optional} will be returned if it's a global failure.
@return The {@link Execution} that handled the root cause for this failure. | getRootCauseExecution | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | Apache-2.0 |
public Throwable getRootCause() {
return rootCause;
} | The actual failure that is handled.
@return The {@code Throwable}. | getRootCause | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | Apache-2.0 |
public Set<Execution> getConcurrentlyFailedExecution() {
return Collections.unmodifiableSet(concurrentlyFailedExecutions);
} | All {@link Execution Executions} that failed and are planned to be restarted as part of this
failure handling.
@return The concurrently failed {@code Executions}. | getConcurrentlyFailedExecution | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/FailureHandlingResultSnapshot.java | Apache-2.0 |
public static RootExceptionHistoryEntry fromFailureHandlingResultSnapshot(
FailureHandlingResultSnapshot snapshot) {
String failingTaskName = null;
TaskManagerLocation taskManagerLocation = null;
if (snapshot.getRootCauseExecution().isPresent()) {
final Execution rootCauseExecution = snapshot.getRootCauseExecution().get();
failingTaskName = rootCauseExecution.getVertexWithAttempt();
taskManagerLocation = rootCauseExecution.getAssignedResourceLocation();
}
return createRootExceptionHistoryEntry(
snapshot.getRootCause(),
snapshot.getTimestamp(),
snapshot.getFailureLabels(),
failingTaskName,
taskManagerLocation,
snapshot.getConcurrentlyFailedExecution());
} | Creates a {@code RootExceptionHistoryEntry} based on the passed {@link
FailureHandlingResultSnapshot}.
@param snapshot The reason for the failure.
@return The {@code RootExceptionHistoryEntry} instance.
@throws NullPointerException if {@code cause} or {@code failingTaskName} are {@code null}.
@throws IllegalArgumentException if the {@code timestamp} of the passed {@code
FailureHandlingResult} is not greater than {@code 0}. | fromFailureHandlingResultSnapshot | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/RootExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/RootExceptionHistoryEntry.java | Apache-2.0 |
public Iterable<ExceptionHistoryEntry> getConcurrentExceptions() {
return concurrentExceptions;
} | Instantiates a {@code RootExceptionHistoryEntry}.
@param cause The reason for the failure.
@param timestamp The time the failure was caught.
@param failureLabels labels associated with the failure.
@param failingTaskName The name of the task that failed.
@param taskManagerLocation The host the task was running on.
@throws NullPointerException if {@code cause} is {@code null}.
@throws IllegalArgumentException if the passed {@code timestamp} is not greater than {@code
0}. | getConcurrentExceptions | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/RootExceptionHistoryEntry.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/exceptionhistory/RootExceptionHistoryEntry.java | Apache-2.0 |
@Override
public void start(
final ExecutionGraph executionGraph,
final SlowTaskDetectorListener listener,
final ComponentMainThreadExecutor mainThreadExecutor) {
scheduleTask(executionGraph, listener, mainThreadExecutor);
} | The slow task detector which detects slow tasks based on their execution time. | start | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/slowtaskdetector/ExecutionTimeBasedSlowTaskDetector.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/slowtaskdetector/ExecutionTimeBasedSlowTaskDetector.java | Apache-2.0 |
private void terminateExceptionallyWithGlobalFailover(
Iterable<ExecutionState> unfinishedExecutionStates, String savepointPath) {
StopWithSavepointStoppingException inconsistentFinalStateException =
new StopWithSavepointStoppingException(savepointPath, jobId);
log.warn(
"Inconsistent execution state after stopping with savepoint. At least one"
+ " execution is still in one of the following states: {}.",
StringUtils.join(unfinishedExecutionStates, ", "),
inconsistentFinalStateException);
scheduler.handleGlobalFailure(inconsistentFinalStateException);
result.completeExceptionally(inconsistentFinalStateException);
} | Handles the termination of the {@code StopWithSavepointTerminationHandler} exceptionally
after triggering a global job fail-over.
@param unfinishedExecutionStates the unfinished states that caused the failure.
@param savepointPath the path to the successfully created savepoint. | terminateExceptionallyWithGlobalFailover | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/stopwithsavepoint/StopWithSavepointTerminationHandlerImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/stopwithsavepoint/StopWithSavepointTerminationHandlerImpl.java | Apache-2.0 |
private void terminateExceptionally(Throwable throwable) {
checkpointScheduling.startCheckpointScheduler();
result.completeExceptionally(throwable);
} | Handles the termination of the {@code StopWithSavepointTerminationHandler} exceptionally
without triggering a global job fail-over. It restarts the checkpoint scheduling.
@param throwable the error that caused the exceptional termination. | terminateExceptionally | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/stopwithsavepoint/StopWithSavepointTerminationHandlerImpl.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/stopwithsavepoint/StopWithSavepointTerminationHandlerImpl.java | Apache-2.0 |
@Override
public SchedulingStrategy createInstance(
final SchedulerOperations schedulerOperations,
final SchedulingTopology schedulingTopology) {
return new PipelinedRegionSchedulingStrategy(schedulerOperations, schedulingTopology);
} | The factory for creating {@link PipelinedRegionSchedulingStrategy}. | createInstance | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/PipelinedRegionSchedulingStrategy.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/PipelinedRegionSchedulingStrategy.java | Apache-2.0 |
default void scheduleAllVerticesIfPossible() {
throw new UnsupportedOperationException();
} | Schedules all vertices and excludes any vertices that are already finished or whose inputs
are not yet ready. | scheduleAllVerticesIfPossible | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/SchedulingStrategy.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/SchedulingStrategy.java | Apache-2.0 |
@Override
public SchedulingStrategy createInstance(
final SchedulerOperations schedulerOperations,
final SchedulingTopology schedulingTopology) {
return new VertexwiseSchedulingStrategy(
schedulerOperations, schedulingTopology, inputConsumableDeciderFactory);
} | The factory for creating {@link VertexwiseSchedulingStrategy}. | createInstance | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java | Apache-2.0 |
public void addAppConfigurationEntry(String name, AppConfigurationEntry... entry) {
final AppConfigurationEntry[] existing = dynamicEntries.get(name);
final AppConfigurationEntry[] updated;
if (existing == null) {
updated = Arrays.copyOf(entry, entry.length);
} else {
updated = merge(existing, entry);
}
dynamicEntries.put(name, updated);
} | Add entries for the given application name. | addAppConfigurationEntry | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/DynamicConfiguration.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/DynamicConfiguration.java | Apache-2.0 |
public static SecurityModuleFactory findModuleFactory(String securityModuleFactoryClass)
throws NoMatchSecurityFactoryException {
return findFactoryInternal(
securityModuleFactoryClass,
SecurityModuleFactory.class,
SecurityModuleFactory.class.getClassLoader());
} | Find a suitable {@link SecurityModuleFactory} based on canonical name. | findModuleFactory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityFactoryServiceLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityFactoryServiceLoader.java | Apache-2.0 |
public <T> T runSecured(final Callable<T> securedCallable) throws Exception {
return ugi.doAs((PrivilegedExceptionAction<T>) securedCallable::call);
} | Hadoop security context which runs a Callable with the previously initialized UGI and appropriate
security credentials. | runSecured | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/HadoopSecurityContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/HadoopSecurityContext.java | Apache-2.0 |
@Override
public <T> T runSecured(Callable<T> securedCallable) throws Exception {
return securedCallable.call();
} | A security context that simply runs a Callable without performing a login action. | runSecured | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/NoOpSecurityContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/NoOpSecurityContext.java | Apache-2.0 |
default boolean isCompatibleWith(final SecurityConfiguration securityConfig) {
return false;
} | Check if this factory is compatible with the security configuration.
<p>A specific implementation must override this to provide a compatibility check; by default it
will always return {@code false}.
@param securityConfig security configurations.
@return {@code true} if factory is compatible with the configuration. | isCompatibleWith | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/SecurityContextFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/contexts/SecurityContextFactory.java | Apache-2.0 |
@Override
public SecurityModule createModule(SecurityConfiguration securityConfig) {
// First check if we have Hadoop in the ClassPath. If not, we simply don't do anything.
if (!HadoopDependency.isHadoopCommonOnClasspath(HadoopModule.class.getClassLoader())) {
LOG.info(
"Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.");
return null;
}
try {
Configuration hadoopConfiguration =
HadoopUtils.getHadoopConfiguration(securityConfig.getFlinkConfig());
return new HadoopModule(securityConfig, hadoopConfiguration);
} catch (LinkageError e) {
LOG.warn(
"Cannot create Hadoop Security Module due to an error that happened while instantiating the module. No security module will be loaded.",
e);
return null;
}
} | A {@link SecurityModuleFactory} for {@link HadoopModule}. This checks if Hadoop dependencies are
available before creating a {@link HadoopModule}. | createModule | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModuleFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModuleFactory.java | Apache-2.0 |
private static File generateDefaultConfigFile(String workingDir) {
checkArgument(workingDir != null, "working directory should not be null.");
final File jaasConfFile;
try {
Path path = Paths.get(workingDir);
if (Files.notExists(path)) {
// We intentionally favored Path.toRealPath over Files.readSymbolicLinks as the
// latter one might return a
// relative path if the symbolic link refers to it. Path.toRealPath resolves the
// relative path instead.
Path parent = path.getParent().toRealPath();
Path resolvedPath = Paths.get(parent.toString(), path.getFileName().toString());
path = Files.createDirectories(resolvedPath);
}
Path jaasConfPath = Files.createTempFile(path, "jaas-", ".conf");
try (InputStream resourceStream =
JaasModule.class
.getClassLoader()
.getResourceAsStream(JAAS_CONF_RESOURCE_NAME)) {
Files.copy(resourceStream, jaasConfPath, StandardCopyOption.REPLACE_EXISTING);
}
jaasConfFile = new File(workingDir, jaasConfPath.getFileName().toString());
jaasConfFile.deleteOnExit();
} catch (IOException e) {
throw new RuntimeException("unable to generate a JAAS configuration file", e);
}
return jaasConfFile;
} | Generate the default JAAS config file. | generateDefaultConfigFile | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/JaasModule.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/JaasModule.java | Apache-2.0 |
@Override
public void stop() {
LOG.info("Stopping credential renewal");
stopTokensUpdate();
LOG.info("Stopped credential renewal");
} | Stops re-occurring token obtain task. | stop | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManager.java | Apache-2.0 |
public void onNewTokensObtained(byte[] containerBytes) throws Exception {
if (containerBytes == null || containerBytes.length == 0) {
throw new IllegalArgumentException("Illegal container tried to be processed");
}
DelegationTokenContainer container =
InstantiationUtil.deserializeObject(
containerBytes, DelegationTokenContainer.class.getClassLoader());
onNewTokensObtained(container);
} | Callback function when new delegation tokens obtained.
@param containerBytes Serialized form of a DelegationTokenContainer. All the available tokens
will be forwarded to the appropriate {@link DelegationTokenReceiver} based on service
name. | onNewTokensObtained | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepository.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepository.java | Apache-2.0 |
public void onNewTokensObtained(DelegationTokenContainer container) throws Exception {
LOG.info("New delegation tokens arrived, sending them to receivers");
for (Map.Entry<String, byte[]> entry : container.getTokens().entrySet()) {
String serviceName = entry.getKey();
byte[] tokens = entry.getValue();
if (!delegationTokenReceivers.containsKey(serviceName)) {
throw new IllegalStateException(
"Tokens arrived for service but no receiver found for it: " + serviceName);
}
try {
delegationTokenReceivers.get(serviceName).onNewTokensObtained(tokens);
} catch (Exception e) {
LOG.warn("Failed to send tokens to delegation token receiver {}", serviceName, e);
}
}
LOG.info("Delegation tokens sent to receivers");
} | Callback function when new delegation tokens obtained.
@param container Serialized form of delegation tokens stored in DelegationTokenContainer. All
the available tokens will be forwarded to the appropriate {@link DelegationTokenReceiver}
based on service name. | onNewTokensObtained | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepository.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepository.java | Apache-2.0 |
@Override
public String serviceName() {
return "hadoopfs";
} | Delegation token receiver for Hadoop filesystems. | serviceName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenReceiver.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenReceiver.java | Apache-2.0 |
@Override
public ObtainedDelegationTokens obtainDelegationTokens() throws Exception {
UserGroupInformation freshUGI = kerberosLoginProvider.doLoginAndReturnUGI();
return freshUGI.doAs(
(PrivilegedExceptionAction<ObtainedDelegationTokens>)
() -> {
Token<?> token;
Preconditions.checkNotNull(hbaseConf);
try {
LOG.info("Obtaining Kerberos security token for HBase");
// ----
// Intended call: Token<AuthenticationTokenIdentifier> token
// =
// TokenUtil.obtainToken(conf);
token =
(Token<?>)
Class.forName(
"org.apache.hadoop.hbase.security.token.TokenUtil")
.getMethod(
"obtainToken",
org.apache.hadoop.conf.Configuration
.class)
.invoke(null, hbaseConf);
} catch (NoSuchMethodException e) {
// for HBase 2
// ----
// Intended call: ConnectionFactory connectionFactory =
// ConnectionFactory.createConnection(conf);
Closeable connectionFactory =
(Closeable)
Class.forName(
"org.apache.hadoop.hbase.client.ConnectionFactory")
.getMethod(
"createConnection",
org.apache.hadoop.conf.Configuration
.class)
.invoke(null, hbaseConf);
// ----
Class<?> connectionClass =
Class.forName("org.apache.hadoop.hbase.client.Connection");
// ----
// Intended call: Token<AuthenticationTokenIdentifier> token
// =
// TokenUtil.obtainToken(connectionFactory);
token =
(Token<?>)
Class.forName(
"org.apache.hadoop.hbase.security.token.TokenUtil")
.getMethod("obtainToken", connectionClass)
.invoke(null, connectionFactory);
if (null != connectionFactory) {
connectionFactory.close();
}
}
Credentials credentials = new Credentials();
credentials.addToken(token.getService(), token);
// HBase does not support to renew the delegation token currently
// https://cwiki.apache.org/confluence/display/HADOOP2/Hbase+HBaseTokenAuthentication
return new ObtainedDelegationTokens(
HadoopDelegationTokenConverter.serialize(credentials),
Optional.empty());
});
} | The general rule how a provider/receiver must behave is the following: The provider and
the receiver must be added to the classpath together with all the additionally required
dependencies.
<p>This null check is required because the HBase provider is always on classpath but
HBase jars are optional. In such a case the configuration cannot be loaded. This construct
is intended to be removed once the HBase provider/receiver pair can be externalized (namely
if a provider/receiver throws an exception then workload must be stopped). | obtainDelegationTokens | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HBaseDelegationTokenProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HBaseDelegationTokenProvider.java | Apache-2.0 |
@Override
public String serviceName() {
return "hbase";
} | Delegation token receiver implementation for HBase. Basically it would be good to move this to
flink-connector-hbase-base but HBase connection can be made without the connector. All in all I
tend to move this but that would be a breaking change. | serviceName | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HBaseDelegationTokenReceiver.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HBaseDelegationTokenReceiver.java | Apache-2.0 |
public void doLogin(boolean supportProxyUser) throws IOException {
if (principal != null) {
LOG.info(
"Attempting to login to KDC using principal: {} keytab: {}", principal, keytab);
UserGroupInformation.loginUserFromKeytab(principal, keytab);
LOG.info("Successfully logged into KDC");
} else if (!HadoopUserUtils.isProxyUser(UserGroupInformation.getCurrentUser())) {
LOG.info("Attempting to load user's ticket cache");
UserGroupInformation.loginUserFromSubject(null);
LOG.info("Loaded user's ticket cache successfully");
} else if (supportProxyUser) {
LOG.info("Proxy user doesn't need login since it must have credentials already");
} else {
throwProxyUserNotSupported();
}
} | Does kerberos login and sets current user. Must be called when isLoginPossible returns true. | doLogin | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java | Apache-2.0 |
public static int getNetworkBuffersPerInputChannel(
final int configuredNetworkBuffersPerChannel) {
return configuredNetworkBuffersPerChannel;
} | Calculates and returns the number of required exclusive network buffers per input channel. | getNetworkBuffersPerInputChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/NettyShuffleUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/NettyShuffleUtils.java | Apache-2.0 |
public static Pair<Integer, Integer> getMinMaxFloatingBuffersPerInputGate(
final int numFloatingBuffersPerGate) {
// We should guarantee at least one floating buffer for local channel state recovery.
return Pair.of(1, numFloatingBuffersPerGate);
} | Calculates and returns the floating network buffer pool size used by the input gate. The
left/right value of the returned pair represent the min/max buffers require by the pool. | getMinMaxFloatingBuffersPerInputGate | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/NettyShuffleUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/NettyShuffleUtils.java | Apache-2.0 |
default boolean isUnknown() {
return false;
} | Returns whether the partition is known and registered with the {@link ShuffleMaster}
implementation.
<p>When a partition consumer is being scheduled, it can happen that the producer of the
partition (consumer input channel) has not been scheduled and its location and other relevant
data is yet to be defined. To proceed with the consumer deployment, currently unknown input
channels have to be marked with placeholders. The placeholder is a special implementation of
the shuffle descriptor: {@link UnknownShuffleDescriptor}.
<p>Note: this method is not supposed to be overridden in concrete shuffle implementation. The
only class where it returns {@code true} is {@link UnknownShuffleDescriptor}.
@return whether the partition producer has been ever deployed and the corresponding shuffle
descriptor is obtained from the {@link ShuffleMaster} implementation. | isUnknown | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleDescriptor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleDescriptor.java | Apache-2.0 |
default Optional<ShuffleMetrics> getMetricsIfPartitionOccupyingLocalResource(
ResultPartitionID partitionId) {
return Optional.empty();
} | Get metrics of the partition if it still occupies some resources locally and have not been
released yet.
@param partitionId the partition id
@return An Optional of {@link ShuffleMetrics}, if found, of the given partition | getMetricsIfPartitionOccupyingLocalResource | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleEnvironment.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleEnvironment.java | Apache-2.0 |
default MemorySize computeShuffleMemorySizeForTask(
TaskInputsOutputsDescriptor taskInputsOutputsDescriptor) {
return MemorySize.ZERO;
} | Compute shuffle memory size for a task with the given {@link TaskInputsOutputsDescriptor}.
@param taskInputsOutputsDescriptor describes task inputs and outputs information for shuffle
memory calculation.
@return shuffle memory size for a task with the given {@link TaskInputsOutputsDescriptor}. | computeShuffleMemorySizeForTask | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | Apache-2.0 |
default CompletableFuture<Collection<PartitionWithMetrics>> getPartitionWithMetrics(
JobID jobId, Duration timeout, Set<ResultPartitionID> expectedPartitions) {
return CompletableFuture.completedFuture(Collections.emptyList());
Retrieves the specified partitions and their metrics (identified by {@code expectedPartitions});
the metrics include the sizes of sub-partitions in a result partition.
@param jobId ID of the target job
@param timeout The timeout used for retrieve the specified partitions.
@param expectedPartitions The set of identifiers for the result partitions whose metrics are
to be fetched.
@return A future will contain a collection of the partitions with their metrics that could be
retrieved from the expected partitions within the specified timeout period. | getPartitionWithMetrics | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | Apache-2.0 |
default void snapshotState(
CompletableFuture<ShuffleMasterSnapshot> snapshotFuture,
ShuffleMasterSnapshotContext context,
JobID jobId) {} | Triggers a snapshot of the shuffle master's state which related the specified job. | snapshotState | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMaster.java | Apache-2.0 |
public static void restoreOrSnapshotShuffleMaster(
ShuffleMaster<?> shuffleMaster, Configuration configuration, Executor ioExecutor)
throws IOException {
boolean isJobRecoveryEnabled =
configuration.get(BatchExecutionOptions.JOB_RECOVERY_ENABLED)
&& shuffleMaster.supportsBatchSnapshot();
if (isJobRecoveryEnabled) {
String clusterId = configuration.get(HighAvailabilityOptions.HA_CLUSTER_ID);
Path path =
new Path(
HighAvailabilityServicesUtils.getClusterHighAvailableStoragePath(
configuration),
"shuffleMaster-snapshot");
if (ShuffleMasterSnapshotUtil.isShuffleMasterSnapshotExist(path, clusterId)) {
ShuffleMasterSnapshot snapshot =
ShuffleMasterSnapshotUtil.readSnapshot(path, clusterId);
LOG.info("Restore shuffle master state from cluster level snapshot.");
shuffleMaster.restoreState(snapshot);
} else {
// always call restoreState to notify the shuffle master to initialize itself
shuffleMaster.restoreState(null);
CompletableFuture<ShuffleMasterSnapshot> snapshotFuture = new CompletableFuture<>();
ioExecutor.execute(
() -> {
LOG.info("Take a cluster level shuffle master snapshot.");
shuffleMaster.snapshotState(snapshotFuture);
snapshotFuture.thenAccept(
shuffleMasterSnapshot -> {
try {
ShuffleMasterSnapshotUtil.writeSnapshot(
shuffleMasterSnapshot, path, clusterId);
} catch (IOException e) {
LOG.warn(
"Write cluster level shuffle master snapshot failed.",
e);
}
});
});
}
}
} | Restores the state of the ShuffleMaster from a cluster-level snapshot if available. If the
snapshot does not exist, it will create a new snapshot.
<p>This method first checks if job recovery is enabled and supported by the ShuffleMaster. It
then attempts to locate and read an existing snapshot from the cluster storage. If a snapshot
exists, the ShuffleMaster state is restored from it. If no snapshot is found, a new snapshot
is taken and saved to the cluster storage asynchronously.
@param shuffleMaster the shuffle master which state needs to be restored or saved
@param configuration the configuration containing settings relevant to job recovery
@param ioExecutor an executor that handles the IO operations for snapshot creation
@throws IOException if an error occurs during reading or writing the snapshot | restoreOrSnapshotShuffleMaster | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | Apache-2.0 |
@VisibleForTesting
static boolean isShuffleMasterSnapshotExist(Path workingDir, String clusterId)
throws IOException {
FileSystem fileSystem = workingDir.getFileSystem();
return fileSystem.exists(new Path(workingDir, clusterId));
} | Checks if a ShuffleMaster snapshot exists for the specified cluster.
@param workingDir The directory where the snapshot file is expected.
@param clusterId The unique identifier for the cluster.
@return True if the snapshot file exists, false otherwise.
@throws IOException If an I/O error occurs while checking the file existence. | isShuffleMasterSnapshotExist | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | Apache-2.0 |
@VisibleForTesting
static ShuffleMasterSnapshot readSnapshot(Path workingDir, String clusterId)
throws IOException {
FileSystem fileSystem = workingDir.getFileSystem();
Path file = new Path(workingDir, clusterId);
try (DataInputStream inputStream = new DataInputStream(fileSystem.open(file))) {
int byteLength = inputStream.readInt();
byte[] bytes = new byte[byteLength];
inputStream.readFully(bytes);
return InstantiationUtil.deserializeObject(bytes, ClassLoader.getSystemClassLoader());
} catch (ClassNotFoundException exception) {
throw new IOException("Deserialize ShuffleMasterSnapshot failed.", exception);
}
} | Reads an immutable snapshot of the ShuffleMaster from the specified directory. This method
should be called only during the startup phase of the Flink cluster.
@param workingDir The directory where the snapshot file is located.
@param clusterId The unique identifier for the cluster.
@return The snapshot data read from the file.
@throws IOException If an I/O error occurs while reading the snapshot. | readSnapshot | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleMasterSnapshotUtil.java | Apache-2.0 |
public static ShuffleServiceFactory<?, ?, ?> loadShuffleServiceFactory(
Configuration configuration) throws FlinkException {
String shuffleServiceClassName = configuration.get(SHUFFLE_SERVICE_FACTORY_CLASS);
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
return InstantiationUtil.instantiate(
shuffleServiceClassName, ShuffleServiceFactory.class, classLoader);
} | Utility to load the pluggable {@link ShuffleServiceFactory} implementations. | loadShuffleServiceFactory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleServiceLoader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleServiceLoader.java | Apache-2.0 |