code stringlengths 25 201k | docstring stringlengths 19 96.2k | func_name stringlengths 0 235 | language stringclasses 1 value | repo stringlengths 8 51 | path stringlengths 11 314 | url stringlengths 62 377 | license stringclasses 7 values |
|---|---|---|---|---|---|---|---|
public static Context forTaskManagerFailure(
JobInfo jobInfo,
MetricGroup metricGroup,
Executor ioExecutor,
ClassLoader classLoader) {
return new DefaultFailureEnricherContext(
jobInfo, metricGroup, FailureType.TASK_MANAGER, ioExecutor, classLoader);
} | Factory method returning a TaskManager failure Context for the given params. | forTaskManagerFailure | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/failure/DefaultFailureEnricherContext.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/failure/DefaultFailureEnricherContext.java | Apache-2.0 |
@VisibleForTesting
static Set<String> getIncludedFailureEnrichers(final Configuration configuration) {
final String includedEnrichersString =
configuration.get(JobManagerOptions.FAILURE_ENRICHERS_LIST, "");
return enricherListPattern
.splitAsStream(includedEnrichersString)
.filter(r -> !r.isEmpty())
.collect(Collectors.toSet());
} | Returns a set of failure enricher names included in the given configuration.
@param configuration the configuration to get the failure enricher names from
@return failure enricher names | getIncludedFailureEnrichers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | Apache-2.0 |
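The config parsing above splits a delimiter-separated option value and drops empty tokens. A minimal, standalone sketch of the same pattern (the `[;,]\s*` delimiter and the class/method names here are illustrative assumptions; the real `enricherListPattern` is defined elsewhere in `FailureEnricherUtils`):

```java
import java.util.Set;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class EnricherListParser {
    // Assumed delimiter pattern; the actual enricherListPattern lives in FailureEnricherUtils.
    private static final Pattern DELIMITER = Pattern.compile("[;,]\\s*");

    /** Splits a delimiter-separated list and discards empty tokens, as the config parser does. */
    public static Set<String> parse(String raw) {
        return DELIMITER.splitAsStream(raw)
                .filter(token -> !token.isEmpty())
                .collect(Collectors.toSet());
    }
}
```

Filtering empty tokens makes the parser tolerant of a blank config value and of stray delimiters.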
@VisibleForTesting
static Collection<FailureEnricher> filterInvalidEnrichers(
final Set<FailureEnricher> failureEnrichers) {
final Map<String, Set<Class<?>>> enrichersByKey = new HashMap<>();
failureEnrichers.forEach(
enricher ->
enricher.getOutputKeys()
.forEach(
enricherKey ->
enrichersByKey
.computeIfAbsent(
enricherKey,
ignored -> new HashSet<>())
.add(enricher.getClass())));
final Set<Class<?>> invalidEnrichers =
enrichersByKey.entrySet().stream()
.filter(entry -> entry.getValue().size() > 1)
.flatMap(
entry -> {
                                    LOG.warn(
                                            "The following enrichers have registered duplicate output key [{}] and will be ignored: {}.",
                                            entry.getKey(),
                                            entry.getValue().stream()
                                                    .map(Class::getName)
                                                    .collect(Collectors.joining(", ")));
return entry.getValue().stream();
})
.collect(Collectors.toSet());
return failureEnrichers.stream()
.filter(enricher -> !invalidEnrichers.contains(enricher.getClass()))
.collect(Collectors.toList());
} | Filters out invalid {@link FailureEnricher} objects that have duplicate output keys.
@param failureEnrichers a set of {@link FailureEnricher} objects to filter
@return a filtered collection without any duplicate output keys | filterInvalidEnrichers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | Apache-2.0 |
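The filtering above builds an inverted index (key → owners) and discards every owner that shares a key. A simplified, self-contained sketch of that technique (the generic model of "owners declaring keys" and all names are illustrative, not Flink API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class DuplicateKeyFilter {
    /** Keeps only owners whose declared keys are claimed by no other owner. */
    public static <K, V> Set<V> filterUnique(Map<V, Set<K>> keysByOwner) {
        // Invert the mapping: for each key, collect every owner that declares it.
        Map<K, Set<V>> ownersByKey = new HashMap<>();
        keysByOwner.forEach((owner, keys) ->
                keys.forEach(k ->
                        ownersByKey.computeIfAbsent(k, ignored -> new HashSet<>()).add(owner)));
        // Any key with more than one owner invalidates all of its owners.
        Set<V> invalid = ownersByKey.values().stream()
                .filter(owners -> owners.size() > 1)
                .flatMap(Set::stream)
                .collect(Collectors.toSet());
        return keysByOwner.keySet().stream()
                .filter(owner -> !invalid.contains(owner))
                .collect(Collectors.toSet());
    }
}
```

Note that, as in the Flink method, a conflict disqualifies *all* owners of the duplicated key, not just the later-registered one.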
public static CompletableFuture<Map<String, String>> labelFailure(
final Throwable cause,
final Context context,
final Executor mainThreadExecutor,
final Collection<FailureEnricher> failureEnrichers) {
// list of CompletableFutures to enrich failure with labels from each enricher
final Collection<CompletableFuture<Map<String, String>>> enrichFutures = new ArrayList<>();
for (final FailureEnricher enricher : failureEnrichers) {
enrichFutures.add(
enricher.processFailure(cause, context)
.thenApply(
enricherLabels -> {
final Map<String, String> validLabels = new HashMap<>();
enricherLabels.forEach(
(k, v) -> {
if (!enricher.getOutputKeys().contains(k)) {
LOG.warn(
"Ignoring label with key {} from enricher {}"
+ " violating contract, keys allowed {}.",
k,
enricher.getClass(),
enricher.getOutputKeys());
} else {
validLabels.put(k, v);
}
});
return validLabels;
})
.exceptionally(
t -> {
LOG.warn(
"Enricher {} threw an exception.",
enricher.getClass(),
t);
return Collections.emptyMap();
}));
}
// combine all CompletableFutures into a single CompletableFuture containing a Map of labels
return FutureUtils.combineAll(enrichFutures)
.thenApplyAsync(
labelsToMerge -> {
final Map<String, String> mergedLabels = new HashMap<>();
for (Map<String, String> labels : labelsToMerge) {
labels.forEach(
(k, v) ->
// merge label with existing, throwing an exception
// if there is a key conflict
mergedLabels.merge(
k,
v,
(first, second) -> {
throw new FlinkRuntimeException(
String.format(
MERGE_EXCEPTION_MSG,
k));
}));
}
return mergedLabels;
},
mainThreadExecutor);
} | Enriches a Throwable by returning the merged label output of a Set of FailureEnrichers.
@param cause the Throwable to label
@param context the context of the Throwable
@param mainThreadExecutor the executor to complete the enricher labeling on
@param failureEnrichers a collection of FailureEnrichers to enrich the context with
@return a CompletableFuture that will complete with a map of labels | labelFailure | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/failure/FailureEnricherUtils.java | Apache-2.0 |
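The final merge step relies on {@code Map.merge} throwing from its remapping function to fail fast on a key collision. A minimal sketch of just that step (class name and exception type are illustrative; Flink uses {@code FlinkRuntimeException}):

```java
import java.util.HashMap;
import java.util.Map;

public class LabelMerger {
    /** Merges label maps into one, failing fast if two maps carry the same key. */
    public static Map<String, String> mergeAll(Iterable<Map<String, String>> labelMaps) {
        Map<String, String> merged = new HashMap<>();
        for (Map<String, String> labels : labelMaps) {
            labels.forEach((k, v) ->
                    // merge() only invokes the remapping function when the key already exists,
                    // so throwing there turns any duplicate key into an immediate error
                    merged.merge(k, v, (first, second) -> {
                        throw new IllegalStateException("Duplicate label key: " + k);
                    }));
        }
        return merged;
    }
}
```

Because each enricher's keys were already validated against its declared output keys, a collision here can only come from two enrichers that slipped past the earlier duplicate-key filter.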
public void shutdown() {
synchronized (lock) {
// first shutdown the thread pool
ScheduledExecutorService es = this.executorService;
if (es != null) {
es.shutdown();
try {
es.awaitTermination(cleanupInterval, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    // restore the interrupt flag so callers can observe the interruption
                    Thread.currentThread().interrupt();
                }
}
entries.clear();
jobRefHolders.clear();
            // clean up all the storage directories
for (File dir : storageDirectories) {
try {
FileUtils.deleteDirectory(dir);
LOG.info("removed file cache directory {}", dir.getAbsolutePath());
} catch (IOException e) {
LOG.error(
"File cache could not properly clean up storage directory: {}",
dir.getAbsolutePath(),
e);
}
}
// Remove shutdown hook to prevent resource leaks
ShutdownHookUtil.removeShutdownHook(shutdownHook, getClass().getSimpleName(), LOG);
}
} | Shuts down the file cache by cancelling all pending copy tasks, clearing the entries and deleting the storage directories. | shutdown | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | Apache-2.0 |
@Override
public Path call() throws IOException {
final File file = blobService.getFile(jobID, blobKey);
        if (isDirectory) {
            return FileUtils.expandDirectory(new Path(file.getAbsolutePath()), target);
} else {
//noinspection ResultOfMethodCallIgnored
file.setExecutable(isExecutable);
return Path.fromLocalFile(file);
}
} | Asynchronous file copy process from blob server. | call | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | Apache-2.0 |
@Override
public void run() {
try {
synchronized (lock) {
Set<ExecutionAttemptID> jobRefs = jobRefHolders.get(jobID);
if (jobRefs != null && jobRefs.isEmpty()) {
// abort the copy
for (Future<Path> fileFuture : entries.get(jobID).values()) {
fileFuture.cancel(true);
}
// remove job specific entries in maps
entries.remove(jobID);
jobRefHolders.remove(jobID);
// remove the job wide temp directories
for (File storageDirectory : storageDirectories) {
File tempDir = new File(storageDirectory, jobID.toString());
FileUtils.deleteDirectory(tempDir);
}
}
}
} catch (IOException e) {
LOG.error("Could not delete file from local file cache.", e);
}
} | If no task is using this file after 5 seconds, clear it. | run | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/filecache/FileCache.java | Apache-2.0 |
public static boolean isHadoopCommonOnClasspath(ClassLoader classLoader) {
try {
            LOG.debug("Checking whether the Hadoop common dependency is on the classpath.");
Class.forName("org.apache.hadoop.conf.Configuration", false, classLoader);
Class.forName("org.apache.hadoop.security.UserGroupInformation", false, classLoader);
LOG.debug("Hadoop common dependency found on classpath.");
return true;
} catch (ClassNotFoundException e) {
LOG.debug("Hadoop common dependency cannot be found on classpath.");
return false;
}
} | Tells whether the required Hadoop common dependencies are on the classpath. | isHadoopCommonOnClasspath | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/hadoop/HadoopDependency.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/hadoop/HadoopDependency.java | Apache-2.0 |
@Override
public HeartbeatMonitor<O> createHeartbeatMonitor(
ResourceID resourceID,
HeartbeatTarget<O> heartbeatTarget,
ScheduledExecutor mainThreadExecutor,
HeartbeatListener<?, O> heartbeatListener,
long heartbeatTimeoutIntervalMs,
int failedRpcRequestsUntilUnreachable) {
return new DefaultHeartbeatMonitor<>(
resourceID,
heartbeatTarget,
mainThreadExecutor,
heartbeatListener,
heartbeatTimeoutIntervalMs,
failedRpcRequestsUntilUnreachable);
} | The factory that instantiates {@link DefaultHeartbeatMonitor}.
@param <O> Type of the outgoing heartbeat payload | createHeartbeatMonitor | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/DefaultHeartbeatMonitor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/DefaultHeartbeatMonitor.java | Apache-2.0 |
@Override
public final CompletableFuture<Void> requestHeartbeat(
ResourceID requestOrigin, I heartbeatPayload) {
return FutureUtils.unsupportedOperationFuture();
} | The receiver implementation of {@link HeartbeatTarget}, which mutes the {@link
HeartbeatTarget#requestHeartbeat(ResourceID, I)}. The extender only has to care about the
receiving logic.
@param <I> Type of the payload which is received by the heartbeat target | requestHeartbeat | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/HeartbeatReceiver.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/HeartbeatReceiver.java | Apache-2.0 |
@Override
public final CompletableFuture<Void> receiveHeartbeat(
ResourceID heartbeatOrigin, I heartbeatPayload) {
return FutureUtils.unsupportedOperationFuture();
} | The sender implementation of {@link HeartbeatTarget}, which mutes the {@link
HeartbeatTarget#receiveHeartbeat(ResourceID, I)}. The extender only has to care about the sending
logic.
@param <I> Type of the payload which is sent to the heartbeat target | receiveHeartbeat | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/HeartbeatSender.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/heartbeat/HeartbeatSender.java | Apache-2.0 |
@Override
public ClientHighAvailabilityServices create(
Configuration configuration, FatalErrorHandler fatalErrorHandler) throws Exception {
return HighAvailabilityServicesUtils.createClientHAService(
configuration, fatalErrorHandler);
} | Default factory for creating client high availability services. | create | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/DefaultClientHighAvailabilityServicesFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/DefaultClientHighAvailabilityServicesFactory.java | Apache-2.0 |
private Path constructDirtyPath(JobID jobId) {
return constructEntryPath(jobId.toString() + DIRTY_FILE_EXTENSION);
} | Given a job ID, construct the path for a dirty entry corresponding to it in the job result
store.
@param jobId The job ID to construct a dirty entry path from.
@return A path for a dirty entry for the given the Job ID. | constructDirtyPath | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/FileSystemJobResultStore.java | Apache-2.0 |
@Deprecated
default LeaderRetrievalService getWebMonitorLeaderRetriever() {
throw new UnsupportedOperationException(
"getWebMonitorLeaderRetriever should no longer be used. Instead use "
+ "#getClusterRestEndpointLeaderRetriever to instantiate the cluster "
+ "rest endpoint leader retriever. If you called this method, then "
+ "make sure that #getClusterRestEndpointLeaderRetriever has been "
+ "implemented by your HighAvailabilityServices implementation.");
} | This retriever should no longer be used on the cluster side. The web monitor retriever is
only required on the client side, for which there is a dedicated high-availability service,
named {@link ClientHighAvailabilityServices}. See also FLINK-13750.
@return the leader retriever for web monitor
@deprecated just use {@link #getClusterRestEndpointLeaderRetriever()} instead of this method. | getWebMonitorLeaderRetriever | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java | Apache-2.0 |
@Deprecated
default LeaderElection getWebMonitorLeaderElection() {
throw new UnsupportedOperationException(
"getWebMonitorLeaderElectionService should no longer be used. Instead use "
+ "#getClusterRestEndpointLeaderElectionService to instantiate the cluster "
+ "rest endpoint's leader election service. If you called this method, then "
+ "make sure that #getClusterRestEndpointLeaderElectionService has been "
+ "implemented by your HighAvailabilityServices implementation.");
} | Gets the {@link LeaderElection} for the cluster's rest endpoint.
@deprecated Use {@link #getClusterRestEndpointLeaderElection()} instead. | getWebMonitorLeaderElection | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServices.java | Apache-2.0 |
default ClientHighAvailabilityServices createClientHAServices(Configuration configuration)
throws Exception {
return createHAServices(configuration, UnsupportedOperationExecutor.INSTANCE);
} | Create a {@link ClientHighAvailabilityServices} instance.
@param configuration Flink configuration
@return instance of {@link ClientHighAvailabilityServices}
@throws Exception when ClientHAServices cannot be created | createClientHAServices | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesFactory.java | Apache-2.0 |
public static Tuple2<String, Integer> getJobManagerAddress(Configuration configuration)
throws ConfigurationException {
final String hostname = configuration.get(JobManagerOptions.ADDRESS);
final int port = configuration.get(JobManagerOptions.PORT);
if (hostname == null) {
throw new ConfigurationException(
"Config parameter '"
+ JobManagerOptions.ADDRESS
+ "' is missing (hostname/address of JobManager to connect to).");
}
if (port <= 0 || port >= 65536) {
throw new ConfigurationException(
"Invalid value for '"
+ JobManagerOptions.PORT
+ "' (port of the JobManager actor system) : "
+ port
                            + ". It must be greater than 0 and less than 65536.");
}
return Tuple2.of(hostname, port);
} | Returns the JobManager's hostname and port extracted from the given {@link Configuration}.
@param configuration Configuration to extract the JobManager's address from
@return The JobManager's hostname and port
@throws ConfigurationException if the JobManager's address cannot be extracted from the
configuration | getJobManagerAddress | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtils.java | Apache-2.0 |
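The port check above enforces the valid TCP port range (1–65535). The same validation, extracted into a standalone sketch (the class and method names are illustrative, not Flink API):

```java
public class PortValidation {
    /** Validates a TCP port the same way the address extraction above does. */
    public static int checkPort(int port) {
        if (port <= 0 || port >= 65536) {
            throw new IllegalArgumentException(
                    "Invalid port " + port + ": must be greater than 0 and less than 65536.");
        }
        return port;
    }
}
```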
public static String getWebMonitorAddress(
Configuration configuration, AddressResolution resolution) throws UnknownHostException {
final String address =
checkNotNull(
configuration.get(RestOptions.ADDRESS),
"%s must be set",
RestOptions.ADDRESS.key());
if (resolution == AddressResolution.TRY_ADDRESS_RESOLUTION) {
// Fail fast if the hostname cannot be resolved
//noinspection ResultOfMethodCallIgnored
InetAddress.getByName(address);
}
final int port = configuration.get(RestOptions.PORT);
final boolean enableSSL = SecurityOptions.isRestSSLEnabled(configuration);
final String protocol = enableSSL ? "https://" : "http://";
return String.format("%s%s:%s", protocol, address, port);
} | Get address of web monitor from configuration.
@param configuration Configuration contains those for WebMonitor.
@param resolution Whether to try address resolution of the given hostname or not. This allows
failing fast in case the hostname cannot be resolved.
@return Address of WebMonitor. | getWebMonitorAddress | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtils.java | Apache-2.0 |
default CompletableFuture<Boolean> hasJobResultEntryAsync(JobID jobId) {
return hasDirtyJobResultEntryAsync(jobId)
.thenCombine(
hasCleanJobResultEntryAsync(jobId),
(result1, result2) -> result1 || result2);
} | Returns the future of whether the store already contains an entry for a job.
@param jobId Ident of the job we wish to check the store for.
@return a successfully completed future with {@code true} if a {@code dirty} or {@code clean}
{@link JobResultEntry} exists for the given {@code JobID}; otherwise {@code false}. | hasJobResultEntryAsync | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/JobResultStore.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/JobResultStore.java | Apache-2.0 |
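The default method above combines two asynchronous boolean checks with a logical OR via {@code thenCombine}. A minimal illustration of that composition (class and method names are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class FutureOr {
    /** Combines two boolean futures with logical OR, as hasJobResultEntryAsync does. */
    public static CompletableFuture<Boolean> anyTrue(
            CompletableFuture<Boolean> a, CompletableFuture<Boolean> b) {
        // thenCombine completes once both inputs complete, then applies the combiner
        return a.thenCombine(b, (r1, r2) -> r1 || r2);
    }
}
```

Note that {@code thenCombine} waits for both futures; it does not short-circuit when the first one completes with {@code true}.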
private void addContender(
EmbeddedLeaderElection embeddedLeaderElection, LeaderContender contender) {
synchronized (lock) {
checkState(!shutdown, "leader election is shut down");
checkState(!embeddedLeaderElection.running, "leader election is already started");
try {
if (!allLeaderContenders.add(embeddedLeaderElection)) {
throw new IllegalStateException(
"leader election was added to this service multiple times");
}
embeddedLeaderElection.contender = contender;
embeddedLeaderElection.running = true;
updateLeader()
.whenComplete(
(aVoid, throwable) -> {
if (throwable != null) {
fatalError(throwable);
}
});
} catch (Throwable t) {
fatalError(t);
}
}
} | Callback from leader contenders when they start their service. | addContender | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/nonha/embedded/EmbeddedLeaderService.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/nonha/embedded/EmbeddedLeaderService.java | Apache-2.0 |
private void removeContender(EmbeddedLeaderElection embeddedLeaderElection) {
synchronized (lock) {
// if the leader election was not even started, simply do nothing
if (!embeddedLeaderElection.running || shutdown) {
return;
}
try {
if (!allLeaderContenders.remove(embeddedLeaderElection)) {
throw new IllegalStateException(
"leader election does not belong to this service");
}
// stop the service
if (embeddedLeaderElection.isLeader) {
embeddedLeaderElection.contender.revokeLeadership();
}
embeddedLeaderElection.contender = null;
embeddedLeaderElection.running = false;
embeddedLeaderElection.isLeader = false;
// if that was the current leader, unset its status
if (currentLeaderConfirmed == embeddedLeaderElection) {
currentLeaderConfirmed = null;
currentLeaderSessionId = null;
currentLeaderAddress = null;
}
if (currentLeaderProposed == embeddedLeaderElection) {
currentLeaderProposed = null;
currentLeaderSessionId = null;
}
updateLeader()
.whenComplete(
(aVoid, throwable) -> {
if (throwable != null) {
fatalError(throwable);
}
});
} catch (Throwable t) {
fatalError(t);
}
}
} | Callback from leader contenders when they stop their service. | removeContender | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/nonha/embedded/EmbeddedLeaderService.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/nonha/embedded/EmbeddedLeaderService.java | Apache-2.0 |
public static Path archiveJob(
Path rootPath, JobID jobId, Collection<ArchivedJson> jsonToArchive) throws IOException {
try {
FileSystem fs = rootPath.getFileSystem();
Path path = new Path(rootPath, jobId.toString());
OutputStream out = fs.create(path, FileSystem.WriteMode.NO_OVERWRITE);
try (JsonGenerator gen = jacksonFactory.createGenerator(out, JsonEncoding.UTF8)) {
gen.writeStartObject();
gen.writeArrayFieldStart(ARCHIVE);
for (ArchivedJson archive : jsonToArchive) {
gen.writeStartObject();
gen.writeStringField(PATH, archive.getPath());
gen.writeStringField(JSON, archive.getJson());
gen.writeEndObject();
}
gen.writeEndArray();
gen.writeEndObject();
} catch (Exception e) {
fs.delete(path, false);
throw e;
}
LOG.info("Job {} has been archived at {}.", jobId, path);
return path;
} catch (IOException e) {
LOG.error("Failed to archive job.", e);
throw e;
}
} | Writes the given {@link AccessExecutionGraph} to the {@link FileSystem} pointed to by {@link
JobManagerOptions#ARCHIVE_DIR}.
@param rootPath directory to which the archive should be written to
@param jobId job id
@param jsonToArchive collection of json-path pairs to that should be archived
@return path to where the archive was written
@throws IOException | archiveJob | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/history/FsJobArchivist.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/history/FsJobArchivist.java | Apache-2.0 |
public static Collection<ArchivedJson> getArchivedJsons(Path file) throws IOException {
try (FSDataInputStream input = file.getFileSystem().open(file);
ByteArrayOutputStream output = new ByteArrayOutputStream()) {
IOUtils.copyBytes(input, output);
try {
JsonNode archive = mapper.readTree(output.toByteArray());
Collection<ArchivedJson> archives = new ArrayList<>();
for (JsonNode archivePart : archive.get(ARCHIVE)) {
String path = archivePart.get(PATH).asText();
String json = archivePart.get(JSON).asText();
archives.add(new ArchivedJson(path, json));
}
return archives;
} catch (NullPointerException npe) {
// occurs if the archive is empty or any of the expected fields are not present
throw new IOException(
"Job archive (" + file.getPath() + ") did not conform to expected format.");
}
}
} | Reads the given archive file and returns a {@link Collection} of contained {@link
ArchivedJson}.
@param file archive to extract
@return collection of archived jsons
@throws IOException if the file can't be opened, read or doesn't contain valid json | getArchivedJsons | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/history/FsJobArchivist.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/history/FsJobArchivist.java | Apache-2.0 |
public int getNumberOfCPUCores() {
return this.numberOfCPUCores;
} | Returns the number of CPU cores available to the JVM on the compute node.
@return the number of CPU cores available to the JVM on the compute node | getNumberOfCPUCores | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | Apache-2.0 |
public long getSizeOfPhysicalMemory() {
return this.sizeOfPhysicalMemory;
} | Returns the size of physical memory in bytes available on the compute node.
@return the size of physical memory in bytes available on the compute node | getSizeOfPhysicalMemory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | Apache-2.0 |
public long getSizeOfJvmHeap() {
return this.sizeOfJvmHeap;
} | Returns the size of the JVM heap memory
@return The size of the JVM heap memory | getSizeOfJvmHeap | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/instance/HardwareDescription.java | Apache-2.0 |
default boolean isAvailable() {
CompletableFuture<?> future = getAvailableFuture();
return future == AVAILABLE || future.isDone();
} | In order to avoid, on a best-effort basis, the volatile access in {@link
CompletableFuture#isDone()}, this method first checks the condition <code>future ==
AVAILABLE</code>, which yields a probable performance benefit in hot loops.
<p>It is always safe to use this method in performance-insensitive scenarios to get the
precise state.
@return true if this instance is available for further processing. | isAvailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | Apache-2.0 |
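The fast path above works because {@code AVAILABLE} is a shared, already-completed future, so a plain reference comparison can skip the volatile read inside {@code isDone()}. A self-contained sketch of the idiom together with the reset operation (class name is illustrative; Flink's real helper is {@code AvailabilityProvider.AvailabilityHelper}):

```java
import java.util.concurrent.CompletableFuture;

public class AvailabilityCheck {
    // Shared completed future representing the "available" state.
    static final CompletableFuture<Void> AVAILABLE = CompletableFuture.completedFuture(null);

    private volatile CompletableFuture<?> availableFuture = AVAILABLE;

    /** Fast-path check: reference comparison first, isDone() only on the slow path. */
    public boolean isAvailable() {
        CompletableFuture<?> f = availableFuture;
        return f == AVAILABLE || f.isDone();
    }

    /** Switches to unavailable by installing a fresh, uncompleted future. */
    public void resetUnavailable() {
        if (isAvailable()) {
            availableFuture = new CompletableFuture<>();
        }
    }
}
```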
public void resetUnavailable() {
if (isAvailable()) {
availableFuture = new CompletableFuture<>();
}
} | Resets the current state to unavailable if it is currently available. | resetUnavailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | Apache-2.0 |
public void resetAvailable() {
availableFuture = AVAILABLE;
} | Resets the constant completed {@link #AVAILABLE} as the current state. | resetAvailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | Apache-2.0 |
public CompletableFuture<?> getUnavailableToResetAvailable() {
CompletableFuture<?> toNotify = availableFuture;
availableFuture = AVAILABLE;
return toNotify;
} | Returns the previously not completed future and resets the constant completed {@link
#AVAILABLE} as the current state. | getUnavailableToResetAvailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | Apache-2.0 |
public CompletableFuture<?> getUnavailableToResetUnavailable() {
CompletableFuture<?> toNotify = availableFuture;
availableFuture = new CompletableFuture<>();
return toNotify;
} | Creates a new uncompleted future as the current state and returns the previous
uncompleted one. | getUnavailableToResetUnavailable | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/AvailabilityProvider.java | Apache-2.0 |
static BlockCompressionFactory createBlockCompressionFactory(CompressionCodec compressionName) {
checkNotNull(compressionName);
BlockCompressionFactory blockCompressionFactory;
switch (compressionName) {
case LZ4:
blockCompressionFactory = new Lz4BlockCompressionFactory();
break;
case LZO:
blockCompressionFactory =
new AirCompressorFactory(new LzoCompressor(), new LzoDecompressor());
break;
case ZSTD:
blockCompressionFactory =
new AirCompressorFactory(new ZstdCompressor(), new ZstdDecompressor());
break;
default:
throw new IllegalStateException("Unknown CompressionMethod " + compressionName);
}
return blockCompressionFactory;
} | Creates {@link BlockCompressionFactory} according to the configuration.
@param compressionName the compression codec to create the factory for. | createBlockCompressionFactory | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/compression/BlockCompressionFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/compression/BlockCompressionFactory.java | Apache-2.0 |
public List<MemorySegment> requestBuffers() throws Exception {
List<MemorySegment> allocated = new ArrayList<>(numBuffersPerRequest);
synchronized (buffers) {
checkState(!destroyed, "Buffer pool is already destroyed.");
if (!initialized) {
initialize();
}
Deadline deadline = Deadline.fromNow(WAITING_TIME);
while (buffers.size() < numBuffersPerRequest) {
checkState(!destroyed, "Buffer pool is already destroyed.");
buffers.wait(WAITING_TIME.toMillis());
if (!deadline.hasTimeLeft()) {
return allocated; // return the empty list
}
}
while (allocated.size() < numBuffersPerRequest) {
allocated.add(buffers.poll());
}
lastBufferOperationTimestamp = System.currentTimeMillis();
}
return allocated;
} | Requests a collection of buffers (determined by {@link #numBuffersPerRequest}) from this
buffer pool. | requestBuffers | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPool.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPool.java | Apache-2.0 |
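The request loop above is a classic deadline-bounded {@code Object.wait} loop: wait on the monitor, re-check the condition, and bail out when the deadline passes. A simplified, standalone sketch of that pattern (all names are illustrative; interrupts are swallowed here with the flag restored, which differs from the Flink method):

```java
import java.time.Duration;
import java.util.Deque;

public class DeadlineWait {
    /** Waits on the queue's monitor until it holds {@code needed} elements or the deadline passes. */
    public static <T> boolean awaitAtLeast(Deque<T> queue, int needed, Duration timeout) {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        synchronized (queue) {
            while (queue.size() < needed) {
                long remainingMillis = (deadlineNanos - System.nanoTime()) / 1_000_000;
                if (remainingMillis <= 0) {
                    return false; // deadline expired before enough elements arrived
                }
                try {
                    // producers must call queue.notifyAll() after adding elements
                    queue.wait(remainingMillis);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true;
        }
    }
}
```

Re-checking the condition in a loop guards against spurious wakeups, and recomputing the remaining time on every iteration keeps the overall wait bounded by the original deadline.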
public void recycle(MemorySegment segment) {
checkArgument(segment != null, "Buffer must be not null.");
recycle(Collections.singletonList(segment));
} | Recycles the target buffer to this buffer pool. This method should never throw any exception. | recycle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPool.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPool.java | Apache-2.0 |
private void close(boolean delete) throws IOException {
try {
// send off set last segment, if we have not been closed before
MemorySegment current = getCurrentSegment();
if (current != null) {
writeSegment(current, getCurrentPositionInSegment());
}
clear();
if (delete) {
writer.closeAndDelete();
} else {
writer.close();
}
} finally {
memManager.release(memory);
}
} | Closes this output, writing pending data and releasing the memory.
@throws IOException Thrown if the pending data could not be written. | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/FileChannelOutputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/FileChannelOutputView.java | Apache-2.0 |
public int getBlockCount() {
return numBlocksWritten;
} | Gets the number of blocks written by this output view.
@return The number of blocks written by this output view. | getBlockCount | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/FileChannelOutputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/FileChannelOutputView.java | Apache-2.0 |
public List<MemorySegment> close() throws IOException {
final ArrayList<MemorySegment> segments =
new ArrayList<MemorySegment>(
this.fullSegments.size() + this.numMemorySegmentsInWriter);
// if the buffer is still being written, clean that up
if (getCurrentSegment() != null) {
segments.add(getCurrentSegment());
clear();
}
moveAll(this.fullSegments, segments);
this.fullSegments.clear();
// clean up the writer
if (this.writer != null) {
// closing before the first flip, collect the memory in the writer
this.writer.close();
for (int i = this.numMemorySegmentsInWriter; i > 0; i--) {
segments.add(this.writer.getNextReturnedBlock());
}
this.writer.closeAndDelete();
this.writer = null;
}
// clean up the views
if (this.inMemInView != null) {
this.inMemInView = null;
}
if (this.externalInView != null) {
if (!this.externalInView.isClosed()) {
this.externalInView.close();
}
this.externalInView = null;
}
return segments;
} | @return A list with all memory segments that have been taken from the memory segment source. | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/SpillingBuffer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/SpillingBuffer.java | Apache-2.0 |
private static final <E> void moveAll(ArrayList<E> source, ArrayList<E> target) {
target.ensureCapacity(target.size() + source.size());
for (int i = source.size() - 1; i >= 0; i--) {
target.add(source.remove(i));
}
} | Utility method that moves elements. It avoids copying the data into a dedicated array first,
as the {@link ArrayList#addAll(java.util.Collection)} method does.
@param <E>
@param source
@param target | moveAll | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/SpillingBuffer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/SpillingBuffer.java | Apache-2.0 |
@Override
public MemorySegment getNextReturnedBlock() throws IOException {
try {
while (true) {
final MemorySegment next = this.returnSegments.poll(1000, TimeUnit.MILLISECONDS);
if (next != null) {
return next;
} else {
if (this.closed) {
throw new IOException("The reader has been asynchronously closed.");
}
checkErroneous();
}
}
} catch (InterruptedException iex) {
throw new IOException(
"Reader was interrupted while waiting for the next returning segment.");
}
} | Gets the next memory segment that has been filled with data by the reader. This method blocks
until such a segment is available, or until an error occurs in the reader, or the reader is
closed.
<p>WARNING: If this method is invoked without any segment ever returning (for example,
because the {@link #readBlock(MemorySegment)} method has not been invoked appropriately), the
method may block forever.
@return The next memory segment from the reader's return queue.
@throws IOException Thrown, if an I/O error occurs in the reader while waiting for the
request to return. | getNextReturnedBlock | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockReader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockReader.java | Apache-2.0 |
@Override
public LinkedBlockingQueue<MemorySegment> getReturnQueue() {
return this.returnSegments;
} | Gets the queue in which the full memory segments are queued after the asynchronous read is
complete.
@return The queue with the full memory segments. | getReturnQueue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockReader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockReader.java | Apache-2.0 |
@Override
public MemorySegment getNextReturnedBlock() throws IOException {
try {
while (true) {
final MemorySegment next = returnSegments.poll(1000, TimeUnit.MILLISECONDS);
if (next != null) {
return next;
} else {
if (this.closed) {
throw new IOException("The writer has been closed.");
}
checkErroneous();
}
}
} catch (InterruptedException e) {
throw new IOException(
"Writer was interrupted while waiting for the next returning segment.");
}
} | Gets the next memory segment that has been written and is available again. This method blocks
until such a segment is available, or until an error occurs in the writer, or the writer is
closed.
<p>NOTE: If this method is invoked without any segment ever returning (for example, because
the {@link #writeBlock(MemorySegment)} method has not been invoked accordingly), the method
may block forever.
@return The next memory segment from the writer's return queue.
@throws IOException Thrown, if an I/O error occurs in the writer while waiting for the
request to return. | getNextReturnedBlock | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockWriter.java | Apache-2.0 |
@Override
public void writeBlock(MemorySegment segment) throws IOException {
addRequest(new SegmentWriteRequest(this, segment));
} | Issues an asynchronous write request to the writer.
@param segment The segment to be written.
@throws IOException Thrown, when the writer encounters an I/O error. Due to the asynchronous
nature of the writer, the exception thrown here may have been caused by an earlier write
request. | writeBlock | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockWriterWithCallback.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousBlockWriterWithCallback.java | Apache-2.0 |
@Override
public boolean isClosed() {
return this.closed;
} | Creates a new channel access to the path indicated by the given ID. The channel accepts
buffers to be read/written and hands them to the asynchronous I/O thread. After being
processed, the buffers are returned by adding them to the given queue.
@param channelID The id describing the path of the file that the channel accessed.
@param requestQueue The queue that this channel hands its IO requests to.
@param callback The callback to be invoked when a request is done.
@param writeEnabled Flag describing whether the channel should be opened in read/write mode,
rather than in read-only mode.
@throws IOException Thrown, if the channel could not be opened. | isClosed | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | Apache-2.0 |
@Override
public void close() throws IOException {
// atomically set the close flag
synchronized (this.closeLock) {
if (this.closed) {
return;
}
this.closed = true;
try {
// wait until as many buffers have been returned as were written
// only then is everything guaranteed to be consistent.
while (this.requestsNotReturned.get() > 0) {
try {
// we add a timeout here, because it is not guaranteed that the
// decrementing during buffer return and the check here are deadlock free.
// the deadlock situation is however unlikely and caught by the timeout
this.closeLock.wait(1000);
checkErroneous();
} catch (InterruptedException iex) {
throw new IOException(
"Closing of asynchronous file channel was interrupted.");
}
}
// Additional check because we might have skipped the while loop
checkErroneous();
} finally {
// close the file
if (this.fileChannel.isOpen()) {
this.fileChannel.close();
}
}
}
} | Closes the channel and waits until all pending asynchronous requests are processed. The
underlying <code>FileChannel</code> is closed even if an exception interrupts the closing.
<p><strong>Important:</strong> the {@link #isClosed()} method returns <code>true</code>
immediately after this method has been called even when there are outstanding requests.
@throws IOException Thrown, if an I/O exception occurred while waiting for the buffers, or if
the closing was interrupted. | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | Apache-2.0 |
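The close path above shows a classic pattern: wait on a lock until an outstanding-request counter reaches zero, with a timeout on each `wait` so a lost notification cannot block forever. A simplified, self-contained sketch of that pattern (class and field names are mine; the real channel also tracks and rethrows I/O errors):

```java
public class CloseWaitDemo {
    private final Object closeLock = new Object();
    private int requestsNotReturned;

    CloseWaitDemo(int outstanding) {
        this.requestsNotReturned = outstanding;
    }

    /** Called by the I/O thread when a request completes. */
    void requestDone() {
        synchronized (closeLock) {
            requestsNotReturned--;
            closeLock.notifyAll();
        }
    }

    /** Blocks until all outstanding requests have returned. */
    void close() {
        synchronized (closeLock) {
            while (requestsNotReturned > 0) {
                try {
                    // The timeout guards against a missed notification,
                    // mirroring the 1s re-check in AsynchronousFileIOChannel.
                    closeLock.wait(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        CloseWaitDemo channel = new CloseWaitDemo(2);
        Thread ioThread = new Thread(() -> {
            channel.requestDone();
            channel.requestDone();
        });
        ioThread.start();
        channel.close(); // returns once both requests are done
        ioThread.join();
        System.out.println("ok");
    }
}
```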
@Override
public void closeAndDelete() throws IOException {
try {
close();
} finally {
deleteChannel();
}
} | This method waits for all pending asynchronous requests to return. When the last request has
returned, the channel is closed and deleted.
<p>Even if an exception interrupts the closing, such that not all requests are handled, the
underlying <tt>FileChannel</tt> is closed and deleted.
@throws IOException Thrown, if an I/O exception occurred while waiting for the buffers, or if
the closing was interrupted. | closeAndDelete | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | Apache-2.0 |
public final void checkErroneous() throws IOException {
if (this.exception != null) {
throw this.exception;
}
} | Checks the exception state of this channel. The channel is erroneous, if one of its requests
could not be processed correctly.
@throws IOException Thrown, if the channel is erroneous. The thrown exception contains the
original exception that defined the erroneous state as its cause. | checkErroneous | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | Apache-2.0 |
protected boolean registerAllRequestsProcessedListener(NotificationListener listener)
throws IOException {
checkNotNull(listener);
synchronized (listenerLock) {
if (allRequestsProcessedListener == null) {
// There was a race with the processing of the last outstanding request
if (requestsNotReturned.get() == 0) {
return false;
}
allRequestsProcessedListener = listener;
return true;
}
}
throw new IllegalStateException("Already subscribed.");
} | Registers a listener to be notified when all outstanding requests have been processed.
<p>New requests can arrive right after the listener got notified. Therefore, it is not safe
to assume that the number of outstanding requests is still zero after a notification unless
there was a close right before the listener got called.
<p>Returns <code>true</code>, if the registration was successful. A registration can fail, if
there are no outstanding requests when trying to register a listener. | registerAllRequestsProcessedListener | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/AsynchronousFileIOChannel.java | Apache-2.0 |
public boolean readBufferFromFileChannel(Buffer buffer) throws IOException {
checkArgument(fileChannel.size() - fileChannel.position() > 0);
// Read header
header.clear();
fileChannel.read(header);
header.flip();
final boolean isBuffer = header.getInt() == 1;
final int size = header.getInt();
if (size > buffer.getMaxCapacity()) {
throw new IllegalStateException(
"Buffer is too small for data: "
+ buffer.getMaxCapacity()
+ " bytes available, but "
+ size
+ " needed. This is most likely due to a serialized event, which is larger than the buffer size.");
}
checkArgument(buffer.getSize() == 0, "Buffer not empty");
fileChannel.read(buffer.getNioBuffer(0, size));
buffer.setSize(size);
buffer.setDataType(isBuffer ? Buffer.DataType.DATA_BUFFER : Buffer.DataType.EVENT_BUFFER);
return fileChannel.size() - fileChannel.position() == 0;
} | Reads data from the object's file channel into the given buffer.
@param buffer the buffer to read into
@return whether the end of the file has been reached (<tt>true</tt>) or not (<tt>false</tt>) | readBufferFromFileChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/BufferFileChannelReader.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/BufferFileChannelReader.java | Apache-2.0 |
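The reader above relies on a fixed 8-byte header: an `int` flag (1 for a data buffer, 0 for an event) followed by an `int` payload length. A self-contained sketch of that framing with `ByteBuffer` (names are mine, not Flink's):

```java
import java.nio.ByteBuffer;

public class BufferHeaderDemo {
    static final int HEADER_LENGTH = 8; // int flag + int size

    /** Encodes a header for a payload of the given size. */
    static ByteBuffer writeHeader(boolean isBuffer, int size) {
        ByteBuffer header = ByteBuffer.allocate(HEADER_LENGTH);
        header.putInt(isBuffer ? 1 : 0);
        header.putInt(size);
        header.flip(); // switch from writing to reading
        return header;
    }

    public static void main(String[] args) {
        ByteBuffer header = writeHeader(true, 4096);
        // Reading mirrors BufferFileChannelReader: flag first, then size.
        boolean isBuffer = header.getInt() == 1;
        int size = header.getInt();
        assert isBuffer;
        assert size == 4096;
        System.out.println("ok");
    }
}
```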
protected void sendReadRequest(MemorySegment seg) throws IOException {
if (this.numRequestsRemaining != 0) {
this.reader.readBlock(seg);
if (this.numRequestsRemaining != -1) {
this.numRequestsRemaining--;
}
} else {
// directly add it to the end of the return queue
this.freeMem.add(seg);
}
} | Sends a new read request, if further requests remain. Otherwise, this method adds the
segment directly to the readers return queue.
@param seg The segment to use for the read request.
@throws IOException Thrown, if the reader is in error. | sendReadRequest | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelReaderInputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelReaderInputView.java | Apache-2.0 |
public List<MemorySegment> close() throws IOException {
// send off the last segment
writeSegment(getCurrentSegment(), getCurrentPositionInSegment(), true);
clear();
// close the writer and gather all segments
final LinkedBlockingQueue<MemorySegment> queue = this.writer.getReturnQueue();
this.writer.close();
// re-collect all memory segments
ArrayList<MemorySegment> list = new ArrayList<MemorySegment>(this.numSegments);
for (int i = 0; i < this.numSegments; i++) {
final MemorySegment m = queue.poll();
if (m == null) {
// we get null if the queue is empty. that should not be the case if the writer was
// properly closed.
throw new RuntimeException(
"ChannelWriterOutputView: MemorySegments have been taken from return queue by different actor.");
}
list.add(m);
}
return list;
} | Closes this OutputView, closing the underlying writer and returning all memory segments.
@return A list containing all memory segments originally supplied to this view.
@throws IOException Thrown, if the underlying writer could not be properly closed. | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | Apache-2.0 |
public int getBlockCount() {
return this.blockCount;
} | Gets the number of blocks used by this view.
@return The number of blocks used. | getBlockCount | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | Apache-2.0 |
public long getBytesWritten() {
return this.bytesBeforeSegment + getCurrentPositionInSegment() - HEADER_LENGTH;
} | Gets the number of pay-load bytes already written. This excludes the number of bytes spent on
headers in the segments.
@return The number of bytes that have been written to this output view. | getBytesWritten | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/ChannelWriterOutputView.java | Apache-2.0 |
public String getPath() {
return path.getAbsolutePath();
} | Returns the path to the underlying temporary file. | getPath | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/FileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/FileIOChannel.java | Apache-2.0 |
public ID next() {
// The local counter is used to increment file names while the global counter is used
// for indexing the directory and associated read and write threads. This performs a
// round-robin among all spilling operators and avoids I/O bunching.
int threadNum = globalCounter.getAndIncrement() % paths.length;
String filename = String.format("%s.%06d.channel", namePrefix, (localCounter++));
return new ID(new File(paths[threadNum], filename), threadNum);
} | An enumerator for channels that logically belong together. | next | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/FileIOChannel.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/FileIOChannel.java | Apache-2.0 |
public ID createChannel() {
return fileChannelManager.createChannel();
} | Creates a new {@link ID} in one of the temp directories. Multiple invocations of this method
spread the channels evenly across the different directories.
@return A channel to a temporary directory. | createChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public Enumerator createChannelEnumerator() {
return fileChannelManager.createChannelEnumerator();
} | Creates a new {@link Enumerator}, spreading the channels in a round-robin fashion across the
temporary file directories.
@return An enumerator for channels. | createChannelEnumerator | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public static void deleteChannel(ID channel) {
if (channel != null) {
if (channel.getPathFile().exists() && !channel.getPathFile().delete()) {
LOG.warn("IOManager failed to delete temporary file {}", channel.getPath());
}
}
} | Deletes the file underlying the given channel. If the channel is still open, this call may
fail.
@param channel The channel to be deleted. | deleteChannel | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public File[] getSpillingDirectories() {
return fileChannelManager.getPaths();
} | Gets the directories that the I/O manager spills to.
@return The directories that the I/O manager spills to. | getSpillingDirectories | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public String[] getSpillingDirectoriesPaths() {
File[] paths = fileChannelManager.getPaths();
String[] strings = new String[paths.length];
for (int i = 0; i < strings.length; i++) {
strings[i] = paths[i].getAbsolutePath();
}
return strings;
} | Gets the directories that the I/O manager spills to, as path strings.
@return The directories that the I/O manager spills to, as path strings. | getSpillingDirectoriesPaths | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public BlockChannelWriter<MemorySegment> createBlockChannelWriter(ID channelID)
throws IOException {
return createBlockChannelWriter(channelID, new LinkedBlockingQueue<>());
} | Creates a block channel writer that writes to the given channel. The writer adds the written
segment to its return-queue afterwards (to allow for asynchronous implementations).
@param channelID The descriptor for the channel to write to.
@return A block channel writer that writes to the given channel.
@throws IOException Thrown, if the channel for the writer could not be opened. | createBlockChannelWriter | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
public BlockChannelReader<MemorySegment> createBlockChannelReader(ID channelID)
throws IOException {
return createBlockChannelReader(channelID, new LinkedBlockingQueue<>());
} | Creates a block channel reader that reads blocks from the given channel. The reader pushes
full memory segments (with the read data) to its "return queue", to allow for asynchronous
read implementations.
@param channelID The descriptor for the channel to write to.
@return A block channel reader that reads from the given channel.
@throws IOException Thrown, if the channel for the reader could not be opened. | createBlockChannelReader | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/IOManager.java | Apache-2.0 |
@Override
public void close() {
this.closed = true;
} | Closes this request queue.
@see java.io.Closeable#close() | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/RequestQueue.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/RequestQueue.java | Apache-2.0 |
public boolean isClosed() {
return this.closed;
} | Checks whether this request queue is closed.
@return True, if the queue is closed, false otherwise. | isClosed | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/RequestQueue.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/RequestQueue.java | Apache-2.0 |
public void registerPartition(ResultPartitionID partitionId) {
checkNotNull(partitionId);
synchronized (registeredHandlers) {
LOG.debug("registering {}", partitionId);
if (registeredHandlers.put(partitionId, new TaskEventHandler()) != null) {
throw new IllegalStateException(
"Partition "
+ partitionId
+ " already registered at task event dispatcher.");
}
}
} | Registers the given partition for incoming task events allowing calls to {@link
#subscribeToEvent(ResultPartitionID, EventListener, Class)}.
@param partitionId the partition ID | registerPartition | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | Apache-2.0 |
public void unregisterPartition(ResultPartitionID partitionId) {
checkNotNull(partitionId);
synchronized (registeredHandlers) {
LOG.debug("unregistering {}", partitionId);
// NOTE: tolerate un-registration of non-registered task (unregister is always called
// in the cleanup phase of a task even if it never came to the registration - see
// Task.java)
registeredHandlers.remove(partitionId);
}
} | Removes the given partition from listening to incoming task events, thus forbidding calls to
{@link #subscribeToEvent(ResultPartitionID, EventListener, Class)}.
@param partitionId the partition ID | unregisterPartition | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | Apache-2.0 |
public void subscribeToEvent(
ResultPartitionID partitionId,
EventListener<TaskEvent> eventListener,
Class<? extends TaskEvent> eventType) {
checkNotNull(partitionId);
checkNotNull(eventListener);
checkNotNull(eventType);
TaskEventHandler taskEventHandler;
synchronized (registeredHandlers) {
taskEventHandler = registeredHandlers.get(partitionId);
}
if (taskEventHandler == null) {
throw new IllegalStateException(
"Partition " + partitionId + " not registered at task event dispatcher.");
}
taskEventHandler.subscribe(eventListener, eventType);
} | Subscribes a listener to this dispatcher for events on a partition.
@param partitionId ID of the partition to subscribe for (must be registered via {@link
#registerPartition(ResultPartitionID)} first!)
@param eventListener the event listener to subscribe
@param eventType event type to subscribe to | subscribeToEvent | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | Apache-2.0 |
@Override
public boolean publish(ResultPartitionID partitionId, TaskEvent event) {
checkNotNull(partitionId);
checkNotNull(event);
TaskEventHandler taskEventHandler;
synchronized (registeredHandlers) {
taskEventHandler = registeredHandlers.get(partitionId);
}
if (taskEventHandler != null) {
taskEventHandler.publish(event);
return true;
}
return false;
} | Publishes the event to the registered {@link EventListener} instances.
<p>This method is either called directly from a {@link LocalInputChannel} or the network I/O
thread on behalf of a {@link RemoteInputChannel}.
@return whether the event was published to a registered event handler (initiated via {@link
#registerPartition(ResultPartitionID)}) or not | publish | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/TaskEventDispatcher.java | Apache-2.0 |
public void publish(TaskEvent event) {
synchronized (listeners) {
for (EventListener<TaskEvent> listener : listeners.get(event.getClass())) {
listener.onEvent(event);
}
}
} | Publishes the task event to all subscribed event listeners.
@param event The event to publish. | publish | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/TaskEventHandler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/TaskEventHandler.java | Apache-2.0 |
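The handler above is a per-event-class listener registry: `publish` looks up the listeners registered for the event's exact class and invokes each one. A minimal sketch of the same idea with a plain `HashMap` (Flink's `TaskEventHandler` uses a Guava multimap; all names here are mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventHandlerDemo {
    private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();

    synchronized void subscribe(Class<?> eventType, Consumer<Object> listener) {
        listeners.computeIfAbsent(eventType, k -> new ArrayList<>()).add(listener);
    }

    /** Delivers the event to every listener registered for its exact class. */
    synchronized void publish(Object event) {
        for (Consumer<Object> l : listeners.getOrDefault(event.getClass(), List.of())) {
            l.accept(event);
        }
    }

    public static void main(String[] args) {
        EventHandlerDemo handler = new EventHandlerDemo();
        List<Object> received = new ArrayList<>();
        handler.subscribe(String.class, received::add);
        handler.publish("superstep-done");
        handler.publish(42); // no Integer listener registered -> dropped
        assert received.equals(List.of("superstep-done"));
        System.out.println("ok");
    }
}
```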
public int getNumberOfSubpartitions() {
return numberOfSubpartitions;
} | Returns the number of subpartitions this RecordWriter writes to. | getNumberOfSubpartitions | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | Apache-2.0 |
public void randomEmit(T record) throws IOException {
checkErroneous();
int targetSubpartition = rng.nextInt(numberOfSubpartitions);
emit(record, targetSubpartition);
} | This is used to send LatencyMarks to a random target subpartition. | randomEmit | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | Apache-2.0 |
public void close() {
// make sure we terminate the thread in any case
if (outputFlusher != null) {
outputFlusher.terminate();
try {
outputFlusher.join();
} catch (InterruptedException e) {
// ignore on close
// restore interrupt flag to fast exit further blocking calls
Thread.currentThread().interrupt();
}
}
} | Closes the writer. This stops the flushing thread (if there is one). | close | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java | Apache-2.0 |
@Override
public int[] getOldSubtasks(
int newSubtaskIndex, int oldNumberOfSubtasks, int newNumberOfSubtasks) {
// The current implementation uses round robin but that may be changed later.
return ROUND_ROBIN.getOldSubtasks(
newSubtaskIndex, oldNumberOfSubtasks, newNumberOfSubtasks);
} | Extra state is redistributed to other subtasks without any specific guarantee (only that up-
and downstream are matched). | getOldSubtasks | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | Apache-2.0 |
@Override
public int[] getOldSubtasks(
int newSubtaskIndex, int oldNumberOfSubtasks, int newNumberOfSubtasks) {
return newSubtaskIndex == 0 ? IntStream.range(0, oldNumberOfSubtasks).toArray() : EMPTY;
} | Restores extra subtasks to the first subtask. | getOldSubtasks | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | Apache-2.0 |
public RescaleMappings getNewToOldSubtasksMapping(int oldParallelism, int newParallelism) {
return RescaleMappings.of(
IntStream.range(0, newParallelism)
.mapToObj(
channelIndex ->
getOldSubtasks(
channelIndex, oldParallelism, newParallelism)),
oldParallelism);
} | Returns a mapping new subtask index to all old subtask indexes. | getNewToOldSubtasksMapping | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | Apache-2.0 |
public boolean isAmbiguous() {
return false;
} | Returns true iff this mapper can potentially lead to ambiguous mappings where different
new subtasks map to the same old subtask. The assumption is that such replicated data needs
to be filtered. | isAmbiguous | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/SubtaskStateMapper.java | Apache-2.0 |
default String toDebugString(boolean includeHash) {
StringBuilder prettyString =
new StringBuilder("Buffer{cnt=")
.append(refCnt())
.append(", size=")
.append(getSize());
if (includeHash) {
byte[] bytes = new byte[getSize()];
readOnlySlice().asByteBuf().readBytes(bytes);
prettyString.append(", hash=").append(Arrays.hashCode(bytes));
}
return prettyString.append("}").toString();
} | Returns a debug string describing this buffer, including its reference count and size, and optionally a hash of its contents. | toDebugString | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/Buffer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/Buffer.java | Apache-2.0 |
public BufferConsumer createBufferConsumer() {
return createBufferConsumer(positionMarker.cachedPosition);
} | This method always creates a {@link BufferConsumer} starting from the current writer offset.
Data written to {@link BufferBuilder} before creation of {@link BufferConsumer} won't be
visible for that {@link BufferConsumer}.
@return created matching instance of {@link BufferConsumer} to this {@link BufferBuilder}. | createBufferConsumer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | Apache-2.0 |
public Buffer.DataType getDataType() {
return buffer.getDataType();
} | Gets the data type of the internal buffer. | getDataType | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | Apache-2.0 |
public int append(ByteBuffer source) {
checkState(!isFinished());
int needed = source.remaining();
int available = getMaxCapacity() - positionMarker.getCached();
int toCopy = Math.min(needed, available);
memorySegment.put(positionMarker.getCached(), source, toCopy);
positionMarker.move(toCopy);
return toCopy;
} | Appends as much data as possible from {@code source}. Not everything may be copied if there
is not enough space in the underlying {@link MemorySegment}.
@return number of copied bytes | append | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java | Apache-2.0 |
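The bounded-copy logic of `BufferBuilder#append` above can be sketched in isolation. This is a minimal illustration, not Flink's implementation: a plain `byte[]` stands in for the `MemorySegment`, and the class and method names are invented for the example.

```java
import java.nio.ByteBuffer;

// Simplified sketch of BufferBuilder#append: copy as much of `source` as fits
// into a fixed-capacity destination and return the number of bytes copied.
// The byte[] stands in for Flink's MemorySegment; names are illustrative.
public class AppendSketch {
    public static int append(byte[] dest, int writerPos, ByteBuffer source) {
        int needed = source.remaining();
        int available = dest.length - writerPos;
        int toCopy = Math.min(needed, available);
        source.get(dest, writerPos, toCopy); // advances the source's position
        return toCopy;
    }

    public static void main(String[] args) {
        byte[] dest = new byte[4];
        ByteBuffer src = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5, 6});
        int copied = append(dest, 0, src);
        // Only 4 of the 6 bytes fit; the caller would request a new buffer
        // for the remaining 2, just as Flink's record writers do.
        System.out.println(copied + " bytes copied, " + src.remaining() + " left over");
    }
}
```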
public Buffer compressToIntermediateBuffer(Buffer buffer) {
int compressedLen;
if ((compressedLen = compress(buffer)) == 0) {
return buffer;
}
internalBuffer.setCompressed(true);
internalBuffer.setSize(compressedLen);
return internalBuffer.retainBuffer();
} | Compresses the given {@link Buffer} using {@link BlockCompressor}. The compressed data will
be stored in the intermediate buffer of this {@link BufferCompressor} and returned to the
caller. The caller must guarantee that the returned {@link Buffer} has been freed when
calling the method next time.
<p>Note that the compression always starts from offset 0 and runs to the size of the input {@link
Buffer}. | compressToIntermediateBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | Apache-2.0 |
public Buffer compressToOriginalBuffer(Buffer buffer) {
int compressedLen;
if ((compressedLen = compress(buffer)) == 0) {
return buffer;
}
// copy the compressed data back
int memorySegmentOffset = buffer.getMemorySegmentOffset();
MemorySegment segment = buffer.getMemorySegment();
segment.put(memorySegmentOffset, internalBufferArray, 0, compressedLen);
return new ReadOnlySlicedNetworkBuffer(
buffer.asByteBuf(), 0, compressedLen, memorySegmentOffset, true);
} | The difference between this method and {@link #compressToIntermediateBuffer(Buffer)} is that
this method will copy the compressed data back to the input {@link Buffer} starting from
offset 0.
<p>The caller must guarantee that the input {@link Buffer} is writable. | compressToOriginalBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | Apache-2.0 |
private int compress(Buffer buffer) {
checkArgument(buffer != null, "The input buffer must not be null.");
checkArgument(buffer.isBuffer(), "Event can not be compressed.");
checkArgument(!buffer.isCompressed(), "Buffer already compressed.");
checkArgument(buffer.getReaderIndex() == 0, "Reader index of the input buffer must be 0.");
checkArgument(buffer.readableBytes() > 0, "No data to be compressed.");
checkState(
internalBuffer.refCnt() == 1,
"Illegal reference count, buffer need to be released.");
try {
int compressedLen;
int length = buffer.getSize();
MemorySegment memorySegment = buffer.getMemorySegment();
// If buffer is on-heap, manipulate the underlying array directly. There are two main
// reasons why NIO buffer is not directly used here: One is that some compression
// libraries will use the underlying array for heap buffer, but our input buffer may be
// a read-only ByteBuffer, and it is illegal to access internal array. Another reason
// is that for the on-heap buffer, directly operating the underlying array can reduce
// additional overhead compared to generating a NIO buffer.
if (!memorySegment.isOffHeap()) {
compressedLen =
blockCompressor.compress(
memorySegment.getArray(),
buffer.getMemorySegmentOffset(),
length,
internalBufferArray,
0);
} else {
// compress the given buffer into the internal heap buffer
compressedLen =
blockCompressor.compress(
buffer.getNioBuffer(0, length),
0,
length,
internalBuffer.getNioBuffer(0, internalBuffer.capacity()),
0);
}
return compressedLen < length ? compressedLen : 0;
} catch (Throwable throwable) {
// return the original buffer if failed to compress
return 0;
}
} | Compresses the given {@link Buffer} into the intermediate buffer and returns the compressed
data size. | compress | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferCompressor.java | Apache-2.0 |
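The `compress` method above returns 0 whenever compression yields no gain, signalling the caller to keep the original buffer. A hedged sketch of that convention, using the JDK's `java.util.zip.Deflater` as a stand-in for Flink's pluggable `BlockCompressor` (class and method names are illustrative):

```java
import java.util.zip.Deflater;

// Sketch of the "compress only if it helps" convention in BufferCompressor#compress:
// return the compressed length, or 0 to signal that the caller should keep the
// original buffer. Deflater stands in for Flink's BlockCompressor.
public class CompressSketch {
    public static int compressIfSmaller(byte[] input, byte[] scratch) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        int compressedLen = deflater.deflate(scratch);
        boolean done = deflater.finished(); // false would mean scratch was too small
        deflater.end();
        // 0 means "no gain": keep the original, mirroring the source above.
        return (done && compressedLen < input.length) ? compressedLen : 0;
    }

    public static void main(String[] args) {
        byte[] zeros = new byte[1024]; // highly compressible input
        int len = compressIfSmaller(zeros, new byte[2048]);
        System.out.println("1024 zero bytes compressed to " + len + " bytes");
    }
}
```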
public boolean isFinished() {
return writerPosition.isFinished();
} | Checks whether the {@link BufferBuilder} has already been finished.
<p>BEWARE: this method accesses the cached value of the position marker which is only updated
after calls to {@link #build()} and {@link #skip(int)}!
@return <tt>true</tt> if the buffer was finished, <tt>false</tt> otherwise | isFinished | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | Apache-2.0 |
public Buffer build() {
writerPosition.update();
int cachedWriterPosition = writerPosition.getCached();
Buffer slice =
buffer.readOnlySlice(
currentReaderPosition, cachedWriterPosition - currentReaderPosition);
currentReaderPosition = cachedWriterPosition;
return slice.retainBuffer();
} | @return sliced {@link Buffer} containing the not yet consumed data. Returned {@link Buffer}
shares the reference counter with the parent {@link BufferConsumer} - in order to recycle
memory both of them must be recycled/closed. | build | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | Apache-2.0 |
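The reader/writer position bookkeeping behind `BufferConsumer#build` can be shown with a stripped-down sketch. This is an assumption-laden illustration, not Flink code: a `byte[]` replaces the shared `MemorySegment`, plain ints replace the volatile position markers, and there is no reference counting.

```java
import java.util.Arrays;

// Sketch of BufferConsumer#build: the consumer tracks its own reader position,
// and each build() returns only the bytes written since the previous build().
public class ConsumerSketch {
    private final byte[] data;
    private int writerPos;
    private int readerPos;

    ConsumerSketch(byte[] data) {
        this.data = data;
    }

    void write(byte b) {
        data[writerPos++] = b;
    }

    byte[] build() {
        // Slice the not-yet-consumed region, then advance the reader position.
        byte[] slice = Arrays.copyOfRange(data, readerPos, writerPos);
        readerPos = writerPos;
        return slice;
    }

    public static void main(String[] args) {
        ConsumerSketch c = new ConsumerSketch(new byte[8]);
        c.write((byte) 1);
        c.write((byte) 2);
        System.out.println(Arrays.toString(c.build())); // [1, 2]
        c.write((byte) 3);
        System.out.println(Arrays.toString(c.build())); // [3]
    }
}
```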
void skip(int bytesToSkip) {
writerPosition.update();
int cachedWriterPosition = writerPosition.getCached();
int bytesReadable = cachedWriterPosition - currentReaderPosition;
checkState(bytesToSkip <= bytesReadable, "bytes to skip beyond readable range");
currentReaderPosition += bytesToSkip;
} | @param bytesToSkip number of bytes to skip from currentReaderPosition | skip | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | Apache-2.0 |
public BufferConsumer copy() {
return new BufferConsumer(
buffer.retainBuffer(), writerPosition.positionMarker, currentReaderPosition);
} | Returns a retained copy with separate indexes. This allows to read from the same {@link
MemorySegment} twice.
<p>WARNING: the newly returned {@link BufferConsumer} will have its reader index copied from
the original buffer. In other words, data already consumed before copying will not be visible
to the returned copies.
@return a retained copy of self with separate indexes | copy | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.java | Apache-2.0 |
public Buffer decompressToIntermediateBuffer(Buffer buffer) {
int decompressedLen = decompress(buffer);
internalBuffer.setSize(decompressedLen);
return internalBuffer.retainBuffer();
} | Decompresses the given {@link Buffer} using {@link BlockDecompressor}. The decompressed data
will be stored in the intermediate buffer of this {@link BufferDecompressor} and returned to
the caller. The caller must guarantee that the returned {@link Buffer} has been freed when
calling the method next time.
<p>Note that the decompression always starts from offset 0 and runs to the size of the input
{@link Buffer}. | decompressToIntermediateBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | Apache-2.0 |
@VisibleForTesting
public Buffer decompressToOriginalBuffer(Buffer buffer) {
int decompressedLen = decompress(buffer);
// copy the decompressed data back
int memorySegmentOffset = buffer.getMemorySegmentOffset();
MemorySegment segment = buffer.getMemorySegment();
segment.put(memorySegmentOffset, internalBufferArray, 0, decompressedLen);
return new ReadOnlySlicedNetworkBuffer(
buffer.asByteBuf(), 0, decompressedLen, memorySegmentOffset, false);
} | The difference between this method and {@link #decompressToIntermediateBuffer(Buffer)} is
that this method copies the decompressed data to the input {@link Buffer} starting from
offset 0.
<p>The caller must guarantee that the input {@link Buffer} is writable and there's enough
space left. | decompressToOriginalBuffer | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | Apache-2.0 |
private int decompress(Buffer buffer) {
checkArgument(buffer != null, "The input buffer must not be null.");
checkArgument(buffer.isBuffer(), "Event can not be decompressed.");
checkArgument(buffer.isCompressed(), "Buffer not compressed.");
checkArgument(buffer.getReaderIndex() == 0, "Reader index of the input buffer must be 0.");
checkArgument(buffer.readableBytes() > 0, "No data to be decompressed.");
checkState(
internalBuffer.refCnt() == 1,
"Illegal reference count, buffer need to be released.");
int length = buffer.getSize();
MemorySegment memorySegment = buffer.getMemorySegment();
// If buffer is on-heap, manipulate the underlying array directly. There are two main
// reasons why NIO buffer is not directly used here: One is that some compression
// libraries will use the underlying array for heap buffer, but our input buffer may be
// a read-only ByteBuffer, and it is illegal to access internal array. Another reason
// is that for the on-heap buffer, directly operating the underlying array can reduce
// additional overhead compared to generating a NIO buffer.
if (!memorySegment.isOffHeap()) {
return blockDecompressor.decompress(
memorySegment.getArray(),
buffer.getMemorySegmentOffset(),
length,
internalBufferArray,
0);
} else {
// decompress the given buffer into the internal heap buffer
return blockDecompressor.decompress(
buffer.getNioBuffer(0, length),
0,
length,
internalBuffer.getNioBuffer(0, internalBuffer.capacity()),
0);
}
} | Decompresses the input {@link Buffer} into the intermediate buffer and returns the
decompressed data size. | decompress | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferDecompressor.java | Apache-2.0 |
@Override
public void recycle(MemorySegment memorySegment) {} | The buffer recycler does nothing for recycled segment. | recycle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferRecycler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferRecycler.java | Apache-2.0 |
@Override
public ByteBuf asByteBuf() {
CompositeByteBuf compositeByteBuf = checkNotNull(allocator).compositeDirectBuffer();
for (Buffer buffer : partialBuffers) {
compositeByteBuf.addComponent(buffer.asByteBuf());
}
compositeByteBuf.writerIndex(currentLength);
return compositeByteBuf;
} | An implementation of {@link Buffer} which contains multiple partial buffers for network data
communication.
<p>This class is used when all partial buffers are derived from the same initial write
buffer, which may become fragmented during the read process. | asByteBuf | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/CompositeBuffer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/CompositeBuffer.java | Apache-2.0 |
@Override
public void recycle(MemorySegment memorySegment) {
memorySegment.free();
} | Frees the given memory segment.
@param memorySegment The memory segment to be recycled. | recycle | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/FreeingBufferRecycler.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/FreeingBufferRecycler.java | Apache-2.0 |
public void recyclePooledMemorySegment(MemorySegment segment) {
// Adds the segment back to the queue; this does not immediately free the memory.
// The memory is only reclaimed once references to the global pool are released,
// making the availableMemorySegments queue and its contained objects reclaimable.
internalRecycleMemorySegments(Collections.singleton(checkNotNull(segment)));
} | Corresponding to {@link #requestPooledMemorySegmentsBlocking} and {@link
#requestPooledMemorySegment}, this method is for pooled memory segments recycling. | recyclePooledMemorySegment | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java | Apache-2.0 |
@Override
public Float getValue() {
int usedBuffers = 0;
int totalBuffers = 0;
for (SingleInputGate inputGate : inputGates) {
usedBuffers += calculateUsedBuffers(inputGate);
totalBuffers += calculateTotalBuffers(inputGate);
}
if (totalBuffers != 0) {
return ((float) usedBuffers) / totalBuffers;
} else {
return 0.0f;
}
} | Abstract gauge implementation for calculating the buffer usage percent. | getValue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/AbstractBuffersUsageGauge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/AbstractBuffersUsageGauge.java | Apache-2.0 |
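The gauge above aggregates usage across gates and guards the final division. A minimal sketch of that pattern, with invented names and `{used, total}` int pairs standing in for Flink's input gates:

```java
// Sketch of AbstractBuffersUsageGauge#getValue: sum used and total buffers
// across pools, and guard the division so an empty gate set reports 0.0f
// rather than NaN. Each int[] pair is {usedBuffers, totalBuffers}.
public class UsageGaugeSketch {
    public static float usagePercent(int[][] pools) {
        int used = 0;
        int total = 0;
        for (int[] pool : pools) {
            used += pool[0];
            total += pool[1];
        }
        return total != 0 ? ((float) used) / total : 0.0f;
    }

    public static void main(String[] args) {
        System.out.println(usagePercent(new int[][] {{2, 4}, {1, 4}})); // 0.375
        System.out.println(usagePercent(new int[][] {}));               // 0.0
    }
}
```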
@Override
public Integer getValue() {
int totalBuffers = 0;
for (SingleInputGate inputGate : inputGates) {
totalBuffers += inputGate.getNumberOfQueuedBuffers();
}
return totalBuffers;
} | Gauge metric measuring the number of queued input buffers for {@link SingleInputGate}s. | getValue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputBuffersGauge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputBuffersGauge.java | Apache-2.0 |
@Override
public Long getValue() {
long totalBuffers = 0;
for (SingleInputGate inputGate : inputGates) {
totalBuffers += inputGate.getSizeOfQueuedBuffers();
}
return totalBuffers;
} | Gauge metric measuring the size in bytes of queued input buffers for {@link SingleInputGate}s. | getValue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputBuffersSizeGauge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputBuffersSizeGauge.java | Apache-2.0 |
long refreshAndGetTotal() {
long total = 0;
for (InputChannel channel : inputGate.inputChannels()) {
if (channel instanceof RemoteInputChannel) {
RemoteInputChannel rc = (RemoteInputChannel) channel;
total += rc.unsynchronizedGetNumberOfQueuedBuffers();
}
}
return total;
} | Iterates over all input channels and collects the total number of queued buffers in a
best-effort way.
@return total number of queued buffers | refreshAndGetTotal | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | Apache-2.0 |
int refreshAndGetMin() {
int min = Integer.MAX_VALUE;
for (InputChannel channel : inputGate.inputChannels()) {
if (channel instanceof RemoteInputChannel) {
RemoteInputChannel rc = (RemoteInputChannel) channel;
int size = rc.unsynchronizedGetNumberOfQueuedBuffers();
min = Math.min(min, size);
}
}
if (min == Integer.MAX_VALUE) { // in case all channels are local, or the channel collection
// was empty
return 0;
}
return min;
} | Iterates over all input channels and collects the minimum number of queued buffers in a
channel in a best-effort way.
@return minimum number of queued buffers per channel (<tt>0</tt> if no channels exist) | refreshAndGetMin | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | Apache-2.0 |
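The sentinel pattern in `refreshAndGetMin` above (start at `Integer.MAX_VALUE`, fall back to 0 when nothing contributed) can be isolated into a short sketch; the class and method names here are illustrative, not Flink's:

```java
// Sketch of InputGateMetrics#refreshAndGetMin: start from Integer.MAX_VALUE
// and fall back to 0 when no channel contributed a value (e.g. all channels
// are local, or the channel collection was empty).
public class MinSketch {
    public static int minOrZero(int[] queueSizes) {
        int min = Integer.MAX_VALUE;
        for (int size : queueSizes) {
            min = Math.min(min, size);
        }
        return min == Integer.MAX_VALUE ? 0 : min;
    }

    public static void main(String[] args) {
        System.out.println(minOrZero(new int[] {5, 2, 9})); // 2
        System.out.println(minOrZero(new int[] {}));        // 0
    }
}
```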
int refreshAndGetMax() {
int max = 0;
for (InputChannel channel : inputGate.inputChannels()) {
if (channel instanceof RemoteInputChannel) {
RemoteInputChannel rc = (RemoteInputChannel) channel;
int size = rc.unsynchronizedGetNumberOfQueuedBuffers();
max = Math.max(max, size);
}
}
return max;
} | Iterates over all input channels and collects the maximum number of queued buffers in a
channel in a best-effort way.
@return maximum number of queued buffers per channel | refreshAndGetMax | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/InputGateMetrics.java | Apache-2.0 |
@Override
public Float getValue() {
int usedBuffers = 0;
int bufferPoolSize = 0;
for (ResultPartition resultPartition : resultPartitions) {
BufferPool bufferPool = resultPartition.getBufferPool();
if (bufferPool != null) {
usedBuffers += bufferPool.bestEffortGetNumOfUsedBuffers();
bufferPoolSize += bufferPool.getNumBuffers();
}
}
if (bufferPoolSize != 0) {
return ((float) usedBuffers) / bufferPoolSize;
} else {
return 0.0f;
}
} | Gauge metric measuring the output buffer pool usage gauge for {@link ResultPartition}s. | getValue | java | apache/flink | flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/OutputBufferPoolUsageGauge.java | https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/metrics/OutputBufferPoolUsageGauge.java | Apache-2.0 |