| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
@Override
public void go() throws Exception {
server.getStorageLocation(jobId, key);
} | Checked thread that calls {@link BlobServer#getStorageLocation(JobID, BlobKey)}. | go | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerPutTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerPutTest.java | Apache-2.0 |
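The snippet above relies on Flink's `CheckedThread` test utility, which propagates an exception thrown in `go()` back to the thread that later synchronizes on it. A minimal, self-contained sketch of that mechanism (class and method names here are illustrative, not Flink's actual implementation):

```java
// Sketch of the "checked thread" idea: run() captures any exception from
// go(), and sync() rethrows it in the calling thread.
public class CheckedThreadDemo {

    abstract static class CheckedThread extends Thread {
        private volatile Exception error;

        /** The thread body; may throw checked exceptions. */
        public abstract void go() throws Exception;

        @Override
        public final void run() {
            try {
                go();
            } catch (Exception e) {
                error = e;
            }
        }

        /** Waits for the thread to finish and rethrows any exception from go(). */
        public void sync() throws Exception {
            join();
            if (error != null) {
                throw error;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        CheckedThread t = new CheckedThread() {
            @Override
            public void go() throws Exception {
                throw new IllegalStateException("boom");
            }
        };
        t.start();
        try {
            t.sync();
            throw new AssertionError("sync() should have rethrown the exception");
        } catch (IllegalStateException expected) {
            // the exception from go() surfaced in the calling thread, as intended
        }
    }
}
```

This pattern lets a test assert on failures that would otherwise be swallowed inside a plain `Thread.run()`.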
@Test
void testOnEphemeralPort() throws IOException {
Configuration conf = new Configuration();
conf.set(BlobServerOptions.PORT, "0");
BlobServer server = TestingBlobUtils.createServer(tempDir, conf);
server.start();
server.close();
} | Start blob server on 0 = pick an ephemeral port. | testOnEphemeralPort | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | Apache-2.0 |
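The test works because binding to port 0 asks the operating system to assign any free ephemeral port. That behavior is plain `java.net.ServerSocket` semantics and can be shown in isolation:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS to pick any free ephemeral port at bind time.
        try (ServerSocket socket = new ServerSocket(0)) {
            int port = socket.getLocalPort();
            if (port <= 0) {
                throw new AssertionError("OS should have assigned a concrete port");
            }
            System.out.println("bound to ephemeral port " + port);
        }
    }
}
```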
@Test
void testPortUnavailable() throws IOException {
// allocate on an ephemeral port
ServerSocket socket = null;
try {
socket = new ServerSocket(0);
} catch (IOException e) {
e.printStackTrace();
fail("An exception was thrown while preparing the test: " + e.getMessage());
}
Configuration conf = new Configuration();
conf.set(BlobServerOptions.PORT, String.valueOf(socket.getLocalPort()));
// creating a BlobServer on the already-occupied port must fail with an IOException
try {
assertThatThrownBy(() -> TestingBlobUtils.createServer(tempDir, conf))
.isInstanceOf(IOException.class)
.hasMessageStartingWith("Unable to open BLOB Server in specified port range: ");
} finally {
socket.close();
}
} | Try allocating on an unavailable port. | testPortUnavailable | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | Apache-2.0 |
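The "unavailable port" setup hinges on the fact that two live listeners cannot bind the same port: the first `ServerSocket` occupies it, so the second bind fails. A standalone sketch of the conflict the test provokes:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortConflictDemo {
    public static void main(String[] args) throws IOException {
        // Bind a socket first, then show that a second bind to the same
        // port fails -- the situation the test provokes for the BlobServer.
        try (ServerSocket occupied = new ServerSocket(0)) {
            try (ServerSocket second = new ServerSocket(occupied.getLocalPort())) {
                throw new AssertionError("second bind to an occupied port should fail");
            } catch (BindException expected) {
                // this is the failure the BlobServer surfaces as an IOException
            }
        }
    }
}
```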
@Test
void testOnePortAvailable() throws IOException {
int numAllocated = 2;
ServerSocket[] sockets = new ServerSocket[numAllocated];
for (int i = 0; i < numAllocated; i++) {
try {
sockets[i] = new ServerSocket(0);
} catch (IOException e) {
e.printStackTrace();
fail("An exception was thrown while preparing the test: " + e.getMessage());
}
}
Configuration conf = new Configuration();
conf.set(
BlobServerOptions.PORT,
sockets[0].getLocalPort() + "," + sockets[1].getLocalPort() + ",50000-50050");
// the first two ports are occupied, so the server must fall back to the 50000-50050 range
try {
BlobServer server = TestingBlobUtils.createServer(tempDir, conf);
server.start();
assertThat(server.getPort()).isBetween(50000, 50050);
server.close();
} finally {
for (int i = 0; i < numAllocated; ++i) {
sockets[i].close();
}
}
} | Give the BlobServer a choice of three ports, where two of them are allocated. | testOnePortAvailable | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobServerRangeTest.java | Apache-2.0 |
@Test
void testDefaultBlobStorageDirectory() throws IOException {
Configuration config = new Configuration();
String blobStorageDir = TempDirUtils.newFolder(tempDir).getAbsolutePath();
config.set(BlobServerOptions.STORAGE_DIRECTORY, blobStorageDir);
config.set(CoreOptions.TMP_DIRS, TempDirUtils.newFolder(tempDir).getAbsolutePath());
File dir = BlobUtils.createBlobStorageDirectory(config, null).deref();
assertThat(dir.getAbsolutePath()).startsWith(blobStorageDir);
} | Tests {@link BlobUtils#createBlobStorageDirectory} using {@link
BlobServerOptions#STORAGE_DIRECTORY} by default. | testDefaultBlobStorageDirectory | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobUtilsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobUtilsTest.java | Apache-2.0 |
public static <T> int checkFilesExist(
JobID jobId, Collection<? extends BlobKey> keys, T blobService, boolean doThrow)
throws IOException {
int numFiles = 0;
for (BlobKey key : keys) {
final File storageDir;
if (blobService instanceof BlobServer) {
BlobServer server = (BlobServer) blobService;
storageDir = server.getStorageDir();
} else if (blobService instanceof PermanentBlobCache) {
PermanentBlobCache cache = (PermanentBlobCache) blobService;
storageDir = cache.getStorageDir();
} else if (blobService instanceof TransientBlobCache) {
TransientBlobCache cache = (TransientBlobCache) blobService;
storageDir = cache.getStorageDir();
} else {
throw new UnsupportedOperationException(
"unsupported BLOB service class: "
+ blobService.getClass().getCanonicalName());
}
final File blobFile =
new File(
BlobUtils.getStorageLocationPath(
storageDir.getAbsolutePath(), jobId, key));
if (blobFile.exists()) {
++numFiles;
} else if (doThrow) {
throw new IOException("File " + blobFile + " does not exist.");
}
}
return numFiles;
} | Checks how many of the files given by blob keys are accessible.
@param jobId ID of a job
@param keys blob keys to check
@param blobService BLOB store to use
@param doThrow whether exceptions should be ignored (<tt>false</tt>), or thrown
(<tt>true</tt>)
@return number of files existing at {@link BlobServer#getStorageLocation(JobID, BlobKey)} and
{@link PermanentBlobCache#getStorageLocation(JobID, BlobKey)}, respectively | checkFilesExist | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
public static void checkFileCountForJob(
int expectedCount, JobID jobId, PermanentBlobService blobService) throws IOException {
final File jobDir;
if (blobService instanceof BlobServer) {
BlobServer server = (BlobServer) blobService;
jobDir = server.getStorageLocation(jobId, new PermanentBlobKey()).getParentFile();
} else {
PermanentBlobCache cache = (PermanentBlobCache) blobService;
jobDir = cache.getStorageLocation(jobId, new PermanentBlobKey()).getParentFile();
}
File[] blobsForJob = jobDir.listFiles();
if (blobsForJob == null) {
if (expectedCount != 0) {
throw new IOException("File " + jobDir + " does not exist.");
}
} else {
assertThat(blobsForJob.length)
.as("Too many/few files in job dir: " + Arrays.asList(blobsForJob))
.isEqualTo(expectedCount);
}
} | Checks that the number of BLOB files stored for the given job matches the expected count.
@param expectedCount number of expected files in the blob service for the given job
@param jobId ID of a job
@param blobService BLOB store to use | checkFileCountForJob | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
public static void testGetFailsFromCorruptFile(
Configuration config, BlobStore blobStore, File blobStorage) throws IOException {
Random rnd = new Random();
JobID jobId = new JobID();
try (BlobServer server = new BlobServer(config, blobStorage, blobStore)) {
server.start();
byte[] data = new byte[2000000];
rnd.nextBytes(data);
// put content addressable (like libraries)
BlobKey key = put(server, jobId, data, PERMANENT_BLOB);
assertThat(key).isNotNull();
// delete local file to make sure that the GET requests downloads from HA
File blobFile = server.getStorageLocation(jobId, key);
assertThat(blobFile.delete()).isTrue();
// change HA store file contents to make sure that GET requests fail
byte[] data2 = Arrays.copyOf(data, data.length);
data2[0] ^= 1;
File tmpFile = Files.createTempFile("blob", ".jar").toFile();
try {
FileUtils.writeByteArrayToFile(tmpFile, data2);
blobStore.put(tmpFile, jobId, key);
} finally {
//noinspection ResultOfMethodCallIgnored
tmpFile.delete();
}
assertThatThrownBy(() -> get(server, jobId, key))
.satisfies(
FlinkAssertions.anyCauseMatches(IOException.class, "data corruption"));
}
} | Checks the GET operation fails when the downloaded file (from HA store) is corrupt, i.e. its
content's hash does not match the {@link BlobKey}'s hash.
@param config blob server configuration (including HA settings like {@link
HighAvailabilityOptions#HA_STORAGE_PATH} and {@link
HighAvailabilityOptions#HA_CLUSTER_ID}) used to set up <tt>blobStore</tt>
@param blobStore shared HA blob store to use | testGetFailsFromCorruptFile | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
public static void testGetFailsFromCorruptFile(
JobID jobId, Configuration config, BlobStore blobStore, File blobStorage)
throws IOException {
testGetFailsFromCorruptFile(jobId, PERMANENT_BLOB, true, config, blobStore, blobStorage);
} | Checks the GET operation fails when the downloaded file (from HA store) is corrupt, i.e. its
content's hash does not match the {@link BlobKey}'s hash, using a permanent BLOB.
@param jobId job ID
@param config blob server configuration (including HA settings like {@link
HighAvailabilityOptions#HA_STORAGE_PATH} and {@link
HighAvailabilityOptions#HA_CLUSTER_ID}) used to set up <tt>blobStore</tt>
@param blobStore shared HA blob store to use | testGetFailsFromCorruptFile | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
static void testGetFailsFromCorruptFile(
@Nullable JobID jobId,
BlobKey.BlobType blobType,
boolean corruptOnHAStore,
Configuration config,
BlobStore blobStore,
File blobStorage)
throws IOException {
assertThat(!corruptOnHAStore || blobType == PERMANENT_BLOB)
.as("Check HA setup for corrupt HA file")
.isTrue();
Random rnd = new Random();
try (BlobServer server =
new BlobServer(config, new File(blobStorage, "server"), blobStore);
BlobCacheService cache =
new BlobCacheService(
config,
new File(blobStorage, "cache"),
corruptOnHAStore ? blobStore : new VoidBlobStore(),
new InetSocketAddress("localhost", server.getPort()))) {
server.start();
byte[] data = new byte[2000000];
rnd.nextBytes(data);
// put content addressable (like libraries)
BlobKey key = put(server, jobId, data, blobType);
assertThat(key).isNotNull();
// change server/HA store file contents to make sure that GET requests fail
byte[] data2 = Arrays.copyOf(data, data.length);
data2[0] ^= 1;
if (corruptOnHAStore) {
File tmpFile = Files.createTempFile("blob", ".jar").toFile();
try {
FileUtils.writeByteArrayToFile(tmpFile, data2);
blobStore.put(tmpFile, jobId, key);
} finally {
//noinspection ResultOfMethodCallIgnored
tmpFile.delete();
}
// delete local (correct) file on server to make sure that the GET request does not
// fall back to downloading the file from the BlobServer's local store
File blobFile = server.getStorageLocation(jobId, key);
assertThat(blobFile.delete()).isTrue();
} else {
File blobFile = server.getStorageLocation(jobId, key);
assertThat(blobFile).exists();
FileUtils.writeByteArrayToFile(blobFile, data2);
}
// issue a GET request that fails
assertThatThrownBy(() -> get(cache, jobId, key))
.satisfies(
FlinkAssertions.anyCauseMatches(IOException.class, "data corruption"));
}
} | Checks the GET operation fails when the downloaded file (from {@link BlobServer} or HA store)
is corrupt, i.e. its content's hash does not match the {@link BlobKey}'s hash.
@param jobId job ID or <tt>null</tt> if job-unrelated
@param blobType whether the BLOB should become permanent or transient
@param corruptOnHAStore whether the file should be corrupt in the HA store (<tt>true</tt>,
requires <tt>highAvailability</tt> to be set) or on the {@link BlobServer}'s local store
(<tt>false</tt>)
@param config blob server configuration (including HA settings like {@link
HighAvailabilityOptions#HA_STORAGE_PATH} and {@link
HighAvailabilityOptions#HA_CLUSTER_ID}) used to set up <tt>blobStore</tt>
@param blobStore shared HA blob store to use | testGetFailsFromCorruptFile | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
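These corruption tests work because BLOBs are content-addressed: the `BlobKey` carries a hash of the data, so flipping a single bit makes the downloaded bytes disagree with the key. A standalone sketch of that detection (SHA-1 is used purely for illustration; the exact digest is an implementation detail of `BlobKey`):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Random;

public class CorruptionDetectionDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] data = new byte[1024];
        new Random(42).nextBytes(data);

        // flip a single bit, exactly as the tests do with data2[0] ^= 1
        byte[] corrupted = Arrays.copyOf(data, data.length);
        corrupted[0] ^= 1;

        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] expectedHash = md.digest(data);      // digest() also resets md
        byte[] actualHash = md.digest(corrupted);

        if (Arrays.equals(expectedHash, actualHash)) {
            throw new AssertionError("a single flipped bit must change the hash");
        }
    }
}
```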
public static void testBlobServerRecovery(
final Configuration config, final BlobStore blobStore, final File blobStorage)
throws Exception {
final String clusterId = config.get(HighAvailabilityOptions.HA_CLUSTER_ID);
String storagePath = config.get(HighAvailabilityOptions.HA_STORAGE_PATH) + "/" + clusterId;
Random rand = new Random();
try (BlobServer server0 =
new BlobServer(config, new File(blobStorage, "server0"), blobStore);
BlobServer server1 =
new BlobServer(config, new File(blobStorage, "server1"), blobStore);
// use VoidBlobStore as the HA store to force download from server[1]'s HA store
BlobCacheService cache1 =
new BlobCacheService(
config,
new File(blobStorage, "cache1"),
new VoidBlobStore(),
new InetSocketAddress("localhost", server1.getPort()))) {
server0.start();
server1.start();
// Random data
byte[] expected = new byte[1024];
rand.nextBytes(expected);
byte[] expected2 = Arrays.copyOfRange(expected, 32, 288);
BlobKey[] keys = new BlobKey[2];
BlobKey nonHAKey;
// Put job-related HA data
JobID[] jobId = new JobID[] {new JobID(), new JobID()};
keys[0] = put(server0, jobId[0], expected, PERMANENT_BLOB); // Request 1
keys[1] = put(server0, jobId[1], expected2, PERMANENT_BLOB); // Request 2
// put non-HA data
nonHAKey = put(server0, jobId[0], expected2, TRANSIENT_BLOB);
verifyKeyDifferentHashEquals(keys[1], nonHAKey);
// check that the storage directory exists
final Path blobServerPath = new Path(storagePath, "blob");
FileSystem fs = blobServerPath.getFileSystem();
assertThat(fs.exists(blobServerPath)).isTrue();
// Verify HA requests from cache1 (connected to server1) with no immediate access to the
// file
verifyContents(cache1, jobId[0], keys[0], expected);
verifyContents(cache1, jobId[1], keys[1], expected2);
// Verify non-HA file is not accessible from server1
verifyDeleted(cache1, jobId[0], nonHAKey);
// Remove again
server1.globalCleanupAsync(jobId[0], Executors.directExecutor()).join();
server1.globalCleanupAsync(jobId[1], Executors.directExecutor()).join();
// Verify everything is clean
assertThat(fs.exists(new Path(storagePath))).isTrue();
if (fs.exists(blobServerPath)) {
final org.apache.flink.core.fs.FileStatus[] recoveryFiles =
fs.listStatus(blobServerPath);
ArrayList<String> filenames = new ArrayList<>(recoveryFiles.length);
for (org.apache.flink.core.fs.FileStatus file : recoveryFiles) {
filenames.add(file.toString());
}
fail("Unclean state backend: %s", filenames);
}
}
} | Helper to test that the {@link BlobServer} recovery from its HA store works.
<p>Uploads two BLOBs to one {@link BlobServer} and expects a second one to be able to
retrieve them via a shared HA store upon request of a {@link BlobCacheService}.
@param config blob server configuration (including HA settings like {@link
HighAvailabilityOptions#HA_STORAGE_PATH} and {@link
HighAvailabilityOptions#HA_CLUSTER_ID}) used to set up <tt>blobStore</tt>
@param blobStore shared HA blob store to use
@throws IOException in case of failures | testBlobServerRecovery | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
public static void testBlobCacheRecovery(
final Configuration config, final BlobStore blobStore, final File blobStorage)
throws IOException {
final String clusterId = config.get(HighAvailabilityOptions.HA_CLUSTER_ID);
String storagePath = config.get(HighAvailabilityOptions.HA_STORAGE_PATH) + "/" + clusterId;
Random rand = new Random();
try (BlobServer server0 =
new BlobServer(config, new File(blobStorage, "server0"), blobStore);
BlobServer server1 =
new BlobServer(config, new File(blobStorage, "server1"), blobStore);
// use VoidBlobStore as the HA store to force download from each server's HA store
BlobCacheService cache0 =
new BlobCacheService(
config,
new File(blobStorage, "cache0"),
new VoidBlobStore(),
new InetSocketAddress("localhost", server0.getPort()));
BlobCacheService cache1 =
new BlobCacheService(
config,
new File(blobStorage, "cache1"),
new VoidBlobStore(),
new InetSocketAddress("localhost", server1.getPort()))) {
server0.start();
server1.start();
// Random data
byte[] expected = new byte[1024];
rand.nextBytes(expected);
byte[] expected2 = Arrays.copyOfRange(expected, 32, 288);
BlobKey[] keys = new BlobKey[2];
BlobKey nonHAKey;
// Put job-related HA data
JobID[] jobId = new JobID[] {new JobID(), new JobID()};
keys[0] = put(cache0, jobId[0], expected, PERMANENT_BLOB); // Request 1
keys[1] = put(cache0, jobId[1], expected2, PERMANENT_BLOB); // Request 2
// put non-HA data
nonHAKey = put(cache0, jobId[0], expected2, TRANSIENT_BLOB);
verifyKeyDifferentHashDifferent(keys[0], nonHAKey);
verifyKeyDifferentHashEquals(keys[1], nonHAKey);
// check that the storage directory exists
final Path blobServerPath = new Path(storagePath, "blob");
FileSystem fs = blobServerPath.getFileSystem();
assertThat(fs.exists(blobServerPath)).isTrue();
// Verify HA requests from cache1 (connected to server1) with no immediate access to the
// file
verifyContents(cache1, jobId[0], keys[0], expected);
verifyContents(cache1, jobId[1], keys[1], expected2);
// Verify non-HA file is not accessible from server1
verifyDeleted(cache1, jobId[0], nonHAKey);
}
} | Helper to test that the {@link BlobServer} recovery from its HA store works.
<p>Uploads two BLOBs to one {@link BlobServer} via a {@link BlobCacheService} and expects a
second {@link BlobCacheService} to be able to retrieve them from a second {@link BlobServer}
that is configured with the same HA store.
@param config blob server configuration (including HA settings like {@link
HighAvailabilityOptions#HA_STORAGE_PATH} and {@link
HighAvailabilityOptions#HA_CLUSTER_ID}) used to set up <tt>blobStore</tt>
@param blobStore shared HA blob store to use
@throws IOException in case of failures | testBlobCacheRecovery | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingBlobHelpers.java | Apache-2.0 |
@Override
public void run() {
// handle the first numAccept connections normally (e.g. the initial PUT)
try {
for (int num = 0; num < numAccept && !isShutdown(); num++) {
new BlobServerConnection(NetUtils.acceptWithoutTimeout(getServerSocket()), this)
.start();
}
} catch (Throwable t) {
t.printStackTrace();
}
// do some failing operations
for (int num = 0; num < numFailures && !isShutdown(); num++) {
Socket socket = null;
try {
socket = NetUtils.acceptWithoutTimeout(getServerSocket());
InputStream is = socket.getInputStream();
OutputStream os = socket.getOutputStream();
// just abort everything
is.close();
os.close();
socket.close();
} catch (IOException ignored) {
} finally {
if (socket != null) {
try {
socket.close();
} catch (Throwable ignored) {
}
}
}
}
// regular runs
super.run();
} | Implements a {@link BlobServer} that, after some initial normal operation, closes incoming
connections a given number of times and then continues normally again. | run | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingFailingBlobServer.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/TestingFailingBlobServer.java | Apache-2.0 |
@Test
void testDeduplicateOnRegister() throws Exception {
ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(new JobVertexID())
.build(EXECUTOR_RESOURCE.getExecutor());
final CheckpointCoordinator cc = instantiateCheckpointCoordinator(graph);
MasterTriggerRestoreHook<?> hook1 = mock(MasterTriggerRestoreHook.class);
when(hook1.getIdentifier()).thenReturn("test id");
MasterTriggerRestoreHook<?> hook2 = mock(MasterTriggerRestoreHook.class);
when(hook2.getIdentifier()).thenReturn("test id");
MasterTriggerRestoreHook<?> hook3 = mock(MasterTriggerRestoreHook.class);
when(hook3.getIdentifier()).thenReturn("anotherId");
assertThat(cc.addMasterHook(hook1)).isTrue();
assertThat(cc.addMasterHook(hook2)).isFalse();
assertThat(cc.addMasterHook(hook3)).isTrue();
} | This method tests that hooks with the same identifier are not registered multiple times. | testDeduplicateOnRegister | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorMasterHooksTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorMasterHooksTest.java | Apache-2.0 |
private void testRestoreLatestCheckpointedStateWithChangingParallelism(boolean scaleOut)
throws Exception {
final JobVertexID jobVertexID1 = new JobVertexID();
final JobVertexID jobVertexID2 = new JobVertexID();
int parallelism1 = 3;
int parallelism2 = scaleOut ? 2 : 13;
int maxParallelism1 = 42;
int maxParallelism2 = 13;
int newParallelism2 = scaleOut ? 13 : 2;
CompletedCheckpointStore completedCheckpointStore = new EmbeddedCompletedCheckpointStore();
final ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(jobVertexID1, parallelism1, maxParallelism1)
.addJobVertex(jobVertexID2, parallelism2, maxParallelism2)
.build(EXECUTOR_RESOURCE.getExecutor());
final ExecutionJobVertex jobVertex1 = graph.getJobVertex(jobVertexID1);
final ExecutionJobVertex jobVertex2 = graph.getJobVertex(jobVertexID2);
// set up the coordinator and validate the initial state
CheckpointCoordinator coord =
new CheckpointCoordinatorBuilder()
.setCompletedCheckpointStore(completedCheckpointStore)
.setTimer(manuallyTriggeredScheduledExecutor)
.build(graph);
// trigger the checkpoint
coord.triggerCheckpoint(false);
manuallyTriggeredScheduledExecutor.triggerAll();
assertThat(coord.getPendingCheckpoints().size()).isOne();
long checkpointId = Iterables.getOnlyElement(coord.getPendingCheckpoints().keySet());
List<KeyGroupRange> keyGroupPartitions1 =
StateAssignmentOperation.createKeyGroupPartitions(maxParallelism1, parallelism1);
List<KeyGroupRange> keyGroupPartitions2 =
StateAssignmentOperation.createKeyGroupPartitions(maxParallelism2, parallelism2);
// vertex 1
for (int index = 0; index < jobVertex1.getParallelism(); index++) {
OperatorStateHandle opStateBackend =
generatePartitionableStateHandle(jobVertexID1, index, 2, 8, false);
KeyGroupsStateHandle keyedStateBackend =
generateKeyGroupState(jobVertexID1, keyGroupPartitions1.get(index), false);
KeyGroupsStateHandle keyedStateRaw =
generateKeyGroupState(jobVertexID1, keyGroupPartitions1.get(index), true);
OperatorSubtaskState operatorSubtaskState =
OperatorSubtaskState.builder()
.setManagedOperatorState(opStateBackend)
.setManagedKeyedState(keyedStateBackend)
.setRawKeyedState(keyedStateRaw)
.setInputChannelState(
StateObjectCollection.singleton(
createNewInputChannelStateHandle(3, new Random())))
.build();
TaskStateSnapshot taskOperatorSubtaskStates = new TaskStateSnapshot();
taskOperatorSubtaskStates.putSubtaskStateByOperatorID(
OperatorID.fromJobVertexID(jobVertexID1), operatorSubtaskState);
AcknowledgeCheckpoint acknowledgeCheckpoint =
new AcknowledgeCheckpoint(
graph.getJobID(),
jobVertex1
.getTaskVertices()[index]
.getCurrentExecutionAttempt()
.getAttemptId(),
checkpointId,
new CheckpointMetrics(),
taskOperatorSubtaskStates);
coord.receiveAcknowledgeMessage(acknowledgeCheckpoint, TASK_MANAGER_LOCATION_INFO);
}
// vertex 2
final List<ChainedStateHandle<OperatorStateHandle>> expectedOpStatesBackend =
new ArrayList<>(jobVertex2.getParallelism());
final List<ChainedStateHandle<OperatorStateHandle>> expectedOpStatesRaw =
new ArrayList<>(jobVertex2.getParallelism());
for (int index = 0; index < jobVertex2.getParallelism(); index++) {
KeyGroupsStateHandle keyedStateBackend =
generateKeyGroupState(jobVertexID2, keyGroupPartitions2.get(index), false);
KeyGroupsStateHandle keyedStateRaw =
generateKeyGroupState(jobVertexID2, keyGroupPartitions2.get(index), true);
OperatorStateHandle opStateBackend =
generatePartitionableStateHandle(jobVertexID2, index, 2, 8, false);
OperatorStateHandle opStateRaw =
generatePartitionableStateHandle(jobVertexID2, index, 2, 8, true);
expectedOpStatesBackend.add(new ChainedStateHandle<>(singletonList(opStateBackend)));
expectedOpStatesRaw.add(new ChainedStateHandle<>(singletonList(opStateRaw)));
OperatorSubtaskState operatorSubtaskState =
OperatorSubtaskState.builder()
.setManagedOperatorState(opStateBackend)
.setRawOperatorState(opStateRaw)
.setManagedKeyedState(keyedStateBackend)
.setRawKeyedState(keyedStateRaw)
.build();
TaskStateSnapshot taskOperatorSubtaskStates = new TaskStateSnapshot();
taskOperatorSubtaskStates.putSubtaskStateByOperatorID(
OperatorID.fromJobVertexID(jobVertexID2), operatorSubtaskState);
AcknowledgeCheckpoint acknowledgeCheckpoint =
new AcknowledgeCheckpoint(
graph.getJobID(),
jobVertex2
.getTaskVertices()[index]
.getCurrentExecutionAttempt()
.getAttemptId(),
checkpointId,
new CheckpointMetrics(),
taskOperatorSubtaskStates);
coord.receiveAcknowledgeMessage(acknowledgeCheckpoint, TASK_MANAGER_LOCATION_INFO);
}
List<CompletedCheckpoint> completedCheckpoints = coord.getSuccessfulCheckpoints();
assertThat(completedCheckpoints.size()).isOne();
List<KeyGroupRange> newKeyGroupPartitions2 =
StateAssignmentOperation.createKeyGroupPartitions(maxParallelism2, newParallelism2);
// rescale vertex 2
final ExecutionGraph newGraph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(jobVertexID1, parallelism1, maxParallelism1)
.addJobVertex(jobVertexID2, newParallelism2, maxParallelism2)
.build(EXECUTOR_RESOURCE.getExecutor());
final ExecutionJobVertex newJobVertex1 = newGraph.getJobVertex(jobVertexID1);
final ExecutionJobVertex newJobVertex2 = newGraph.getJobVertex(jobVertexID2);
// set up the coordinator and validate the initial state
CheckpointCoordinator newCoord =
new CheckpointCoordinatorBuilder()
.setCompletedCheckpointStore(completedCheckpointStore)
.setTimer(manuallyTriggeredScheduledExecutor)
.build(newGraph);
Set<ExecutionJobVertex> tasks = new HashSet<>();
tasks.add(newJobVertex1);
tasks.add(newJobVertex2);
assertThat(newCoord.restoreLatestCheckpointedStateToAll(tasks, false)).isTrue();
// verify the restored state
verifyStateRestore(jobVertexID1, newJobVertex1, keyGroupPartitions1);
List<List<Collection<OperatorStateHandle>>> actualOpStatesBackend =
new ArrayList<>(newJobVertex2.getParallelism());
List<List<Collection<OperatorStateHandle>>> actualOpStatesRaw =
new ArrayList<>(newJobVertex2.getParallelism());
for (int i = 0; i < newJobVertex2.getParallelism(); i++) {
List<OperatorIDPair> operatorIDs = newJobVertex2.getOperatorIDs();
KeyGroupsStateHandle originalKeyedStateBackend =
generateKeyGroupState(jobVertexID2, newKeyGroupPartitions2.get(i), false);
KeyGroupsStateHandle originalKeyedStateRaw =
generateKeyGroupState(jobVertexID2, newKeyGroupPartitions2.get(i), true);
JobManagerTaskRestore taskRestore =
newJobVertex2
.getTaskVertices()[i]
.getCurrentExecutionAttempt()
.getTaskRestore();
assertThat(taskRestore.getRestoreCheckpointId()).isOne();
TaskStateSnapshot taskStateHandles = taskRestore.getTaskStateSnapshot();
final int headOpIndex = operatorIDs.size() - 1;
List<Collection<OperatorStateHandle>> allParallelManagedOpStates =
new ArrayList<>(operatorIDs.size());
List<Collection<OperatorStateHandle>> allParallelRawOpStates =
new ArrayList<>(operatorIDs.size());
for (int idx = 0; idx < operatorIDs.size(); ++idx) {
OperatorID operatorID = operatorIDs.get(idx).getGeneratedOperatorID();
OperatorSubtaskState opState =
taskStateHandles.getSubtaskStateByOperatorID(operatorID);
Collection<OperatorStateHandle> opStateBackend = opState.getManagedOperatorState();
Collection<OperatorStateHandle> opStateRaw = opState.getRawOperatorState();
allParallelManagedOpStates.add(opStateBackend);
allParallelRawOpStates.add(opStateRaw);
if (idx == headOpIndex) {
Collection<KeyedStateHandle> keyedStateBackend = opState.getManagedKeyedState();
Collection<KeyedStateHandle> keyGroupStateRaw = opState.getRawKeyedState();
compareKeyedState(singletonList(originalKeyedStateBackend), keyedStateBackend);
compareKeyedState(singletonList(originalKeyedStateRaw), keyGroupStateRaw);
}
}
actualOpStatesBackend.add(allParallelManagedOpStates);
actualOpStatesRaw.add(allParallelRawOpStates);
}
comparePartitionableState(expectedOpStatesBackend, actualOpStatesBackend);
comparePartitionableState(expectedOpStatesRaw, actualOpStatesRaw);
} | Tests the checkpoint restoration with changing parallelism of job vertex with partitioned
state. | testRestoreLatestCheckpointedStateWithChangingParallelism | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorRestoringTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorRestoringTest.java | Apache-2.0 |
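The rescaling in this test hinges on `createKeyGroupPartitions` splitting `maxParallelism` key groups into contiguous ranges, one per subtask. The formula below mirrors Flink's key-group range assignment idea; treat it as a sketch rather than the exact library code:

```java
import java.util.ArrayList;
import java.util.List;

public class KeyGroupPartitionDemo {

    /** Splits key groups [0, maxParallelism) into 'parallelism' contiguous ranges. */
    static List<int[]> createKeyGroupPartitions(int maxParallelism, int parallelism) {
        List<int[]> ranges = new ArrayList<>(parallelism);
        for (int i = 0; i < parallelism; i++) {
            int start = (i * maxParallelism + parallelism - 1) / parallelism;
            int end = ((i + 1) * maxParallelism - 1) / parallelism;
            ranges.add(new int[] {start, end});
        }
        return ranges;
    }

    public static void main(String[] args) {
        // same numbers as the test: maxParallelism1 = 42, parallelism1 = 3
        List<int[]> ranges = createKeyGroupPartitions(42, 3);
        if (ranges.get(0)[0] != 0 || ranges.get(ranges.size() - 1)[1] != 41) {
            throw new AssertionError("ranges must cover all 42 key groups");
        }
        // neighbouring ranges must be contiguous, with no gaps or overlaps
        for (int i = 1; i < ranges.size(); i++) {
            if (ranges.get(i)[0] != ranges.get(i - 1)[1] + 1) {
                throw new AssertionError("ranges must be contiguous");
            }
        }
    }
}
```

Because ranges stay contiguous under any parallelism, restoring with a different subtask count only reshuffles which subtask owns which key groups, never which key groups exist.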
private void testTriggerAndDeclineCheckpointSimple(
CheckpointFailureReason checkpointFailureReason) throws Exception {
final CheckpointException checkpointException =
new CheckpointException(checkpointFailureReason);
JobVertexID jobVertexID1 = new JobVertexID();
JobVertexID jobVertexID2 = new JobVertexID();
CheckpointCoordinatorTestingUtils.CheckpointRecorderTaskManagerGateway gateway =
new CheckpointCoordinatorTestingUtils.CheckpointRecorderTaskManagerGateway();
ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(jobVertexID1)
.addJobVertex(jobVertexID2)
.setTaskManagerGateway(gateway)
.build(EXECUTOR_RESOURCE.getExecutor());
ExecutionVertex vertex1 = graph.getJobVertex(jobVertexID1).getTaskVertices()[0];
ExecutionVertex vertex2 = graph.getJobVertex(jobVertexID2).getTaskVertices()[0];
ExecutionAttemptID attemptID1 = vertex1.getCurrentExecutionAttempt().getAttemptId();
ExecutionAttemptID attemptID2 = vertex2.getCurrentExecutionAttempt().getAttemptId();
TestFailJobCallback failJobCallback = new TestFailJobCallback();
// set up the coordinator and validate the initial state
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setCheckpointCoordinatorConfiguration(
CheckpointCoordinatorConfiguration.builder()
.setAlignedCheckpointTimeout(Long.MAX_VALUE)
.setMaxConcurrentCheckpoints(Integer.MAX_VALUE)
.build())
.setTimer(manuallyTriggeredScheduledExecutor)
.setCheckpointFailureManager(
new CheckpointFailureManager(0, failJobCallback))
.build(graph);
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isZero();
assertThat(checkpointCoordinator.getNumberOfRetainedSuccessfulCheckpoints()).isZero();
// trigger the first checkpoint. this should succeed
final CompletableFuture<CompletedCheckpoint> checkpointFuture =
checkpointCoordinator.triggerCheckpoint(false);
manuallyTriggeredScheduledExecutor.triggerAll();
FutureUtils.throwIfCompletedExceptionally(checkpointFuture);
// validate that we have a pending checkpoint
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isOne();
assertThat(checkpointCoordinator.getNumberOfRetainedSuccessfulCheckpoints()).isZero();
// we have one task scheduled that will cancel after timeout
assertThat(manuallyTriggeredScheduledExecutor.getActiveScheduledTasks()).hasSize(1);
long checkpointId =
checkpointCoordinator.getPendingCheckpoints().entrySet().iterator().next().getKey();
PendingCheckpoint checkpoint =
checkpointCoordinator.getPendingCheckpoints().get(checkpointId);
assertThat(checkpoint).isNotNull();
assertThat(checkpoint.getCheckpointID()).isEqualTo(checkpointId);
assertThat(checkpoint.getJobId()).isEqualTo(graph.getJobID());
assertThat(checkpoint.getNumberOfNonAcknowledgedTasks()).isEqualTo(2);
assertThat(checkpoint.getNumberOfAcknowledgedTasks()).isZero();
assertThat(checkpoint.getOperatorStates().size()).isZero();
assertThat(checkpoint.isDisposed()).isFalse();
assertThat(checkpoint.areTasksFullyAcknowledged()).isFalse();
// check that the vertices received the trigger checkpoint message
for (ExecutionVertex vertex : Arrays.asList(vertex1, vertex2)) {
CheckpointCoordinatorTestingUtils.TriggeredCheckpoint triggeredCheckpoint =
gateway.getOnlyTriggeredCheckpoint(
vertex.getCurrentExecutionAttempt().getAttemptId());
assertThat(triggeredCheckpoint.checkpointId).isEqualTo(checkpointId);
assertThat(triggeredCheckpoint.timestamp)
.isEqualTo(checkpoint.getCheckpointTimestamp());
assertThat(triggeredCheckpoint.checkpointOptions)
.isEqualTo(CheckpointOptions.forCheckpointWithDefaultLocation());
}
// acknowledge from one of the tasks
checkpointCoordinator.receiveAcknowledgeMessage(
new AcknowledgeCheckpoint(graph.getJobID(), attemptID2, checkpointId),
"Unknown location");
assertThat(checkpoint.getNumberOfAcknowledgedTasks()).isOne();
assertThat(checkpoint.getNumberOfNonAcknowledgedTasks()).isOne();
assertThat(checkpoint.isDisposed()).isFalse();
assertThat(checkpoint.areTasksFullyAcknowledged()).isFalse();
// acknowledge the same task again (should not matter)
checkpointCoordinator.receiveAcknowledgeMessage(
new AcknowledgeCheckpoint(graph.getJobID(), attemptID2, checkpointId),
"Unknown location");
assertThat(checkpoint.isDisposed()).isFalse();
assertThat(checkpoint.areTasksFullyAcknowledged()).isFalse();
// decline checkpoint from the other task, this should cancel the checkpoint
// and trigger a new one
checkpointCoordinator.receiveDeclineMessage(
new DeclineCheckpoint(
graph.getJobID(), attemptID1, checkpointId, checkpointException),
TASK_MANAGER_LOCATION_INFO);
assertThat(checkpoint.isDisposed()).isTrue();
// the canceler is also removed
assertThat(manuallyTriggeredScheduledExecutor.getActiveScheduledTasks().size()).isZero();
// validate that we have no new pending checkpoint
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isZero();
assertThat(checkpointCoordinator.getNumberOfRetainedSuccessfulCheckpoints()).isZero();
// decline again, nothing should happen
// decline from the other task, nothing should happen
checkpointCoordinator.receiveDeclineMessage(
new DeclineCheckpoint(
graph.getJobID(), attemptID1, checkpointId, checkpointException),
TASK_MANAGER_LOCATION_INFO);
checkpointCoordinator.receiveDeclineMessage(
new DeclineCheckpoint(
graph.getJobID(), attemptID2, checkpointId, checkpointException),
TASK_MANAGER_LOCATION_INFO);
assertThat(checkpoint.isDisposed()).isTrue();
assertThat(failJobCallback.getInvokeCounter()).isOne();
checkpointCoordinator.shutdown();
} | This test triggers a checkpoint and then sends a decline checkpoint message from one of the
tasks. The expected behaviour is that said checkpoint is discarded and a new checkpoint is
triggered. | testTriggerAndDeclineCheckpointSimple | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
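The lifecycle the test above drives — per-task acknowledgements, idempotent duplicate ACKs, disposal on a single decline — can be sketched independently of Flink. The class below is a hypothetical miniature of a pending checkpoint's bookkeeping, not Flink's real `PendingCheckpoint`:

```java
import java.util.HashSet;
import java.util.Set;

public class PendingCheckpointDemo {
    // Hypothetical miniature of a pending checkpoint's acknowledgement bookkeeping.
    static class SimplePendingCheckpoint {
        private final Set<String> notYetAcknowledged;
        private int acknowledged;
        private boolean disposed;

        SimplePendingCheckpoint(Set<String> tasks) {
            this.notYetAcknowledged = new HashSet<>(tasks);
        }

        /** Returns true if this was a new ACK; duplicate ACKs are no-ops. */
        boolean acknowledgeTask(String task) {
            boolean newAck = notYetAcknowledged.remove(task);
            if (newAck) {
                acknowledged++;
            }
            return newAck;
        }

        /** A single decline disposes the whole checkpoint. */
        void decline() {
            disposed = true;
        }

        boolean isDisposed() {
            return disposed;
        }

        int getNumberOfAcknowledgedTasks() {
            return acknowledged;
        }

        boolean areTasksFullyAcknowledged() {
            return notYetAcknowledged.isEmpty();
        }
    }

    public static void main(String[] args) {
        SimplePendingCheckpoint cp = new SimplePendingCheckpoint(Set.of("task1", "task2"));
        System.out.println(cp.acknowledgeTask("task2")); // prints true
        System.out.println(cp.acknowledgeTask("task2")); // duplicate ACK: prints false
        cp.decline();                                    // task1 declines instead of ACKing
        System.out.println(cp.isDisposed());             // prints true
    }
}
```

As in the test, a duplicate ACK changes nothing, and once declined, further decline messages for the same checkpoint are irrelevant because it is already disposed.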
@Test
void testConcurrentSavepoints() throws Exception {
int numSavepoints = 5;
JobVertexID jobVertexID1 = new JobVertexID();
ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(jobVertexID1)
.build(EXECUTOR_RESOURCE.getExecutor());
ExecutionVertex vertex1 = graph.getJobVertex(jobVertexID1).getTaskVertices()[0];
ExecutionAttemptID attemptID1 = vertex1.getCurrentExecutionAttempt().getAttemptId();
StandaloneCheckpointIDCounter checkpointIDCounter = new StandaloneCheckpointIDCounter();
CheckpointCoordinatorConfiguration chkConfig =
new CheckpointCoordinatorConfiguration.CheckpointCoordinatorConfigurationBuilder()
.setMaxConcurrentCheckpoints(
1) // max one checkpoint at a time => should not affect savepoints
.build();
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setCheckpointCoordinatorConfiguration(chkConfig)
.setCheckpointIDCounter(checkpointIDCounter)
.setCompletedCheckpointStore(new StandaloneCompletedCheckpointStore(2))
.setTimer(manuallyTriggeredScheduledExecutor)
.build(graph);
List<CompletableFuture<CompletedCheckpoint>> savepointFutures = new ArrayList<>();
String savepointDir = TempDirUtils.newFolder(tmpFolder).getAbsolutePath();
// Trigger savepoints
for (int i = 0; i < numSavepoints; i++) {
savepointFutures.add(
checkpointCoordinator.triggerSavepoint(
savepointDir, SavepointFormatType.CANONICAL));
}
        // After triggering multiple savepoints, all should be in progress
for (CompletableFuture<CompletedCheckpoint> savepointFuture : savepointFutures) {
assertThat(savepointFuture).isNotDone();
}
manuallyTriggeredScheduledExecutor.triggerAll();
// ACK all savepoints
long checkpointId = checkpointIDCounter.getLast();
for (int i = 0; i < numSavepoints; i++, checkpointId--) {
checkpointCoordinator.receiveAcknowledgeMessage(
new AcknowledgeCheckpoint(graph.getJobID(), attemptID1, checkpointId),
TASK_MANAGER_LOCATION_INFO);
}
// After ACKs, all should be completed
for (CompletableFuture<CompletedCheckpoint> savepointFuture : savepointFutures) {
assertThat(savepointFuture).isCompletedWithValueMatching(Objects::nonNull);
}
} | Tests that the savepoints can be triggered concurrently. | testConcurrentSavepoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testMinDelayBetweenSavepoints() throws Exception {
CheckpointCoordinatorConfiguration chkConfig =
new CheckpointCoordinatorConfiguration.CheckpointCoordinatorConfigurationBuilder()
.setMinPauseBetweenCheckpoints(
100000000L) // very long min delay => should not affect savepoints
.setMaxConcurrentCheckpoints(1)
.build();
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setCheckpointCoordinatorConfiguration(chkConfig)
.setCompletedCheckpointStore(new StandaloneCompletedCheckpointStore(2))
.setTimer(manuallyTriggeredScheduledExecutor)
.build(EXECUTOR_RESOURCE.getExecutor());
String savepointDir = TempDirUtils.newFolder(tmpFolder).getAbsolutePath();
CompletableFuture<CompletedCheckpoint> savepoint0 =
checkpointCoordinator.triggerSavepoint(savepointDir, SavepointFormatType.CANONICAL);
assertThat(savepoint0).as("Did not trigger savepoint").isNotDone();
CompletableFuture<CompletedCheckpoint> savepoint1 =
checkpointCoordinator.triggerSavepoint(savepointDir, SavepointFormatType.CANONICAL);
assertThat(savepoint1).as("Did not trigger savepoint").isNotDone();
} | Tests that no minimum delay between savepoints is enforced. | testMinDelayBetweenSavepoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testCheckpointStatsTrackerPendingCheckpointCallback() throws Exception {
// set up the coordinator and validate the initial state
CheckpointStatsTracker tracker = mock(CheckpointStatsTracker.class);
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setTimer(manuallyTriggeredScheduledExecutor)
.setCheckpointStatsTracker(tracker)
.build(EXECUTOR_RESOURCE.getExecutor());
when(tracker.reportPendingCheckpoint(
anyLong(), anyLong(), any(CheckpointProperties.class), any(Map.class)))
.thenReturn(mock(PendingCheckpointStats.class));
// Trigger a checkpoint and verify callback
CompletableFuture<CompletedCheckpoint> checkpointFuture =
checkpointCoordinator.triggerCheckpoint(false);
manuallyTriggeredScheduledExecutor.triggerAll();
FutureUtils.throwIfCompletedExceptionally(checkpointFuture);
verify(tracker, times(1))
.reportPendingCheckpoint(
eq(1L),
any(Long.class),
eq(
CheckpointProperties.forCheckpoint(
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION)),
any());
} | Tests that the pending checkpoint stats callbacks are created. | testCheckpointStatsTrackerPendingCheckpointCallback | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testTriggerCheckpointAfterStopping() throws Exception {
StoppingCheckpointIDCounter testingCounter = new StoppingCheckpointIDCounter();
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setCheckpointIDCounter(testingCounter)
.setTimer(manuallyTriggeredScheduledExecutor)
.build(EXECUTOR_RESOURCE.getExecutor());
testingCounter.setOwner(checkpointCoordinator);
testTriggerCheckpoint(checkpointCoordinator, PERIODIC_SCHEDULER_SHUTDOWN);
} | Tests that no checkpoint is triggered when the coordinator is stopped after the eager pre-check. | testTriggerCheckpointAfterStopping | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testTriggerCheckpointWithCounterIOException() throws Exception {
// given: Checkpoint coordinator which fails on getCheckpointId.
IOExceptionCheckpointIDCounter testingCounter = new IOExceptionCheckpointIDCounter();
TestFailJobCallback failureCallback = new TestFailJobCallback();
CheckpointStatsTracker statsTracker =
new DefaultCheckpointStatsTracker(
Integer.MAX_VALUE,
UnregisteredMetricGroups.createUnregisteredJobManagerJobMetricGroup());
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinatorBuilder()
.setCheckpointIDCounter(testingCounter)
.setFailureManager(new CheckpointFailureManager(0, failureCallback))
.setTimer(manuallyTriggeredScheduledExecutor)
.setCheckpointStatsTracker(statsTracker)
.build(EXECUTOR_RESOURCE.getExecutor());
testingCounter.setOwner(checkpointCoordinator);
// when: The checkpoint is triggered.
testTriggerCheckpoint(checkpointCoordinator, IO_EXCEPTION);
// then: Failure manager should fail the job.
assertThat(failureCallback.getInvokeCounter()).isOne();
// then: The NumberOfFailedCheckpoints and TotalNumberOfCheckpoints should be 1.
CheckpointStatsCounts counts = statsTracker.createSnapshot().getCounts();
assertThat(counts.getNumberOfRestoredCheckpoints()).isZero();
assertThat(counts.getTotalNumberOfCheckpoints()).isOne();
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isZero();
assertThat(counts.getNumberOfFailedCheckpoints()).isOne();
// then: The PendingCheckpoint shouldn't be created.
assertThat(statsTracker.getPendingCheckpointStats(1)).isNull();
} | Tests that no checkpoint is triggered when the CheckpointIDCounter throws an IOException. | testTriggerCheckpointWithCounterIOException | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testSerialIncrementAndGet() throws Exception {
final CheckpointIDCounter counter = createCheckpointIdCounter();
try {
counter.start();
assertThat(counter.getAndIncrement()).isOne();
assertThat(counter.get()).isEqualTo(2);
assertThat(counter.getAndIncrement()).isEqualTo(2);
assertThat(counter.get()).isEqualTo(3);
assertThat(counter.getAndIncrement()).isEqualTo(3);
assertThat(counter.get()).isEqualTo(4);
assertThat(counter.getAndIncrement()).isEqualTo(4);
} finally {
counter.shutdown(JobStatus.FINISHED).join();
}
} | Tests serial increment and get calls. | testSerialIncrementAndGet | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | Apache-2.0 |
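The counter semantics exercised above (IDs start at 1, `getAndIncrement` returns the current value and then advances it, `setCount` resets the next ID) can be sketched with a plain `AtomicLong`. `SimpleCheckpointIdCounter` below is an illustrative stand-in, not Flink's `StandaloneCheckpointIDCounter`:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Hypothetical stand-in for a checkpoint ID counter; IDs start at 1.
    static class SimpleCheckpointIdCounter {
        private final AtomicLong counter = new AtomicLong(1);

        long getAndIncrement() {
            return counter.getAndIncrement();
        }

        long get() {
            return counter.get();
        }

        void setCount(long newCount) {
            counter.set(newCount);
        }
    }

    public static void main(String[] args) {
        SimpleCheckpointIdCounter counter = new SimpleCheckpointIdCounter();
        // Mirrors the serial increment-and-get sequence from the test above.
        System.out.println(counter.getAndIncrement()); // prints 1
        System.out.println(counter.get());             // prints 2
        counter.setCount(1337);
        System.out.println(counter.getAndIncrement()); // prints 1337
        System.out.println(counter.get());             // prints 1338
    }
}
```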
@Test
void testSetCount() throws Exception {
final CheckpointIDCounter counter = createCheckpointIdCounter();
counter.start();
// Test setCount
counter.setCount(1337);
assertThat(counter.get()).isEqualTo(1337);
assertThat(counter.getAndIncrement()).isEqualTo(1337);
assertThat(counter.get()).isEqualTo(1338);
assertThat(counter.getAndIncrement()).isEqualTo(1338);
counter.shutdown(JobStatus.FINISHED).join();
} | Tests a simple {@link CheckpointIDCounter#setCount(long)} operation. | testSetCount | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | Apache-2.0 |
@Override
public List<Long> call() throws Exception {
final Random rand = new Random();
final List<Long> counts = new ArrayList<>();
// Wait for the main thread to kick off execution
this.startLatch.await();
for (int i = 0; i < NumIncrements; i++) {
counts.add(counter.getAndIncrement());
// To get some "random" interleaving ;)
Thread.sleep(rand.nextInt(20));
}
return counts;
} | Total number of {@link CheckpointIDCounter#getAndIncrement()} calls. | call | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointIDCounterTestBase.java | Apache-2.0 |
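The concurrent callable above relies on `getAndIncrement` being atomic, so IDs handed to competing workers never collide. A self-contained sketch of that property using only JDK primitives (all names here are illustrative, not Flink APIs):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class ConcurrentCounterDemo {

    /** Runs numThreads workers that each draw perThread IDs; returns the union of all IDs. */
    static Set<Long> collectIds(int numThreads, int perThread) throws Exception {
        AtomicLong counter = new AtomicLong(1);
        CountDownLatch startLatch = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int t = 0; t < numThreads; t++) {
            futures.add(pool.submit(() -> {
                List<Long> counts = new ArrayList<>();
                startLatch.await(); // wait for the main thread to kick off execution
                for (int i = 0; i < perThread; i++) {
                    counts.add(counter.getAndIncrement());
                }
                return counts;
            }));
        }
        startLatch.countDown();
        Set<Long> allIds = new HashSet<>();
        for (Future<List<Long>> f : futures) {
            allIds.addAll(f.get());
        }
        pool.shutdown();
        return allIds;
    }

    public static void main(String[] args) throws Exception {
        // Atomicity guarantees no two workers ever observe the same ID,
        // so the set contains numThreads * perThread distinct values.
        System.out.println(collectIds(4, 1000).size()); // prints 4000
    }
}
```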
@Test
void testMaxParallelismMismatch() throws Exception {
final OperatorID operatorId = new OperatorID();
final int parallelism = 128128;
final CompletedCheckpointStorageLocation testSavepoint =
createSavepointWithOperatorSubtaskState(242L, operatorId, parallelism);
final Map<JobVertexID, ExecutionJobVertex> tasks =
createTasks(operatorId, parallelism, parallelism + 1);
assertThatThrownBy(
() ->
Checkpoints.loadAndValidateCheckpoint(
new JobID(),
tasks,
testSavepoint,
cl,
false,
CheckpointProperties.forSavepoint(
false, SavepointFormatType.CANONICAL)))
.hasMessageContaining("Max parallelism mismatch")
.isInstanceOf(IllegalStateException.class);
} | Tests that savepoint loading fails when there is a max-parallelism mismatch. | testMaxParallelismMismatch | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | Apache-2.0 |
@Test
void testNonRestoredStateWhenDisallowed() throws Exception {
final OperatorID operatorId = new OperatorID();
final int parallelism = 9;
final CompletedCheckpointStorageLocation testSavepoint =
createSavepointWithOperatorSubtaskState(242L, operatorId, parallelism);
final Map<JobVertexID, ExecutionJobVertex> tasks = Collections.emptyMap();
assertThatThrownBy(
() ->
Checkpoints.loadAndValidateCheckpoint(
new JobID(),
tasks,
testSavepoint,
cl,
false,
CheckpointProperties.forSavepoint(
false, SavepointFormatType.CANONICAL)))
.hasMessageContaining("allowNonRestoredState")
.isInstanceOf(IllegalStateException.class);
} | Tests that savepoint loading fails when there is non-restored state, but it is not allowed. | testNonRestoredStateWhenDisallowed | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | Apache-2.0 |
@Test
void testNonRestoredStateWhenAllowed() throws Exception {
final OperatorID operatorId = new OperatorID();
final int parallelism = 9;
final CompletedCheckpointStorageLocation testSavepoint =
createSavepointWithOperatorSubtaskState(242L, operatorId, parallelism);
final Map<JobVertexID, ExecutionJobVertex> tasks = Collections.emptyMap();
final CompletedCheckpoint loaded =
Checkpoints.loadAndValidateCheckpoint(
new JobID(),
tasks,
testSavepoint,
cl,
true,
CheckpointProperties.forSavepoint(false, SavepointFormatType.CANONICAL));
assertThat(loaded.getOperatorStates()).isEmpty();
} | Tests that savepoint loading succeeds when there is non-restored state and it is allowed. | testNonRestoredStateWhenAllowed | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java | Apache-2.0 |
@Test
void testSavepointProperties() {
CheckpointProperties props =
CheckpointProperties.forSavepoint(true, SavepointFormatType.CANONICAL);
assertThat(props.forceCheckpoint()).isTrue();
assertThat(props.discardOnSubsumed()).isFalse();
assertThat(props.discardOnJobFinished()).isFalse();
assertThat(props.discardOnJobCancelled()).isFalse();
assertThat(props.discardOnJobFailed()).isFalse();
assertThat(props.discardOnJobSuspended()).isFalse();
} | Tests the default (manually triggered) savepoint properties. | testSavepointProperties | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointPropertiesTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointPropertiesTest.java | Apache-2.0 |
@Test
void testIsSavepoint() throws Exception {
{
CheckpointProperties props =
CheckpointProperties.forCheckpoint(CheckpointRetentionPolicy.RETAIN_ON_FAILURE);
assertThat(props.isSavepoint()).isFalse();
}
{
CheckpointProperties props =
CheckpointProperties.forCheckpoint(
CheckpointRetentionPolicy.RETAIN_ON_CANCELLATION);
assertThat(props.isSavepoint()).isFalse();
}
{
CheckpointProperties props =
CheckpointProperties.forSavepoint(true, SavepointFormatType.CANONICAL);
assertThat(props.isSavepoint()).isTrue();
CheckpointProperties deserializedCheckpointProperties =
InstantiationUtil.deserializeObject(
InstantiationUtil.serializeObject(props), getClass().getClassLoader());
assertThat(deserializedCheckpointProperties.isSavepoint()).isTrue();
}
} | Tests the isSavepoint utility works as expected. | testIsSavepoint | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointPropertiesTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointPropertiesTest.java | Apache-2.0 |
@Test
void testCounts() {
CheckpointStatsCounts counts = new CheckpointStatsCounts();
assertThat(counts.getNumberOfRestoredCheckpoints()).isZero();
assertThat(counts.getTotalNumberOfCheckpoints()).isZero();
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isZero();
assertThat(counts.getNumberOfFailedCheckpoints()).isZero();
counts.incrementRestoredCheckpoints();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isZero();
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isZero();
assertThat(counts.getNumberOfFailedCheckpoints()).isZero();
// 1st checkpoint
counts.incrementInProgressCheckpoints();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isOne();
assertThat(counts.getNumberOfInProgressCheckpoints()).isOne();
assertThat(counts.getNumberOfCompletedCheckpoints()).isZero();
assertThat(counts.getNumberOfFailedCheckpoints()).isZero();
counts.incrementCompletedCheckpoints();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isOne();
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isOne();
assertThat(counts.getNumberOfFailedCheckpoints()).isZero();
// 2nd checkpoint
counts.incrementInProgressCheckpoints();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isEqualTo(2);
assertThat(counts.getNumberOfInProgressCheckpoints()).isOne();
assertThat(counts.getNumberOfCompletedCheckpoints()).isOne();
assertThat(counts.getNumberOfFailedCheckpoints()).isZero();
counts.incrementFailedCheckpoints();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isEqualTo(2);
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isOne();
assertThat(counts.getNumberOfFailedCheckpoints()).isOne();
counts.incrementFailedCheckpointsWithoutInProgress();
assertThat(counts.getNumberOfRestoredCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isEqualTo(3);
assertThat(counts.getNumberOfInProgressCheckpoints()).isZero();
assertThat(counts.getNumberOfCompletedCheckpoints()).isOne();
assertThat(counts.getNumberOfFailedCheckpoints()).isEqualTo(2);
} | Tests that counts are reported correctly. | testCounts | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | Apache-2.0 |
@Test
void testCompleteOrFailWithoutInProgressCheckpoint() {
CheckpointStatsCounts counts = new CheckpointStatsCounts();
counts.incrementCompletedCheckpoints();
assertThat(counts.getNumberOfInProgressCheckpoints())
.as("Number of checkpoints in progress should never be negative")
.isGreaterThanOrEqualTo(0);
counts.incrementFailedCheckpoints();
assertThat(counts.getNumberOfInProgressCheckpoints())
.as("Number of checkpoints in progress should never be negative")
.isGreaterThanOrEqualTo(0);
} | Tests that incrementing the completed or failed checkpoint counts without first incrementing the
in-progress count never drives the number of in-progress checkpoints negative. | testCompleteOrFailWithoutInProgressCheckpoint | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | Apache-2.0 |
@Test
void testCreateSnapshot() {
CheckpointStatsCounts counts = new CheckpointStatsCounts();
counts.incrementRestoredCheckpoints();
counts.incrementRestoredCheckpoints();
counts.incrementRestoredCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementCompletedCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementCompletedCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementCompletedCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementCompletedCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementFailedCheckpoints();
long restored = counts.getNumberOfRestoredCheckpoints();
long total = counts.getTotalNumberOfCheckpoints();
long inProgress = counts.getNumberOfInProgressCheckpoints();
long completed = counts.getNumberOfCompletedCheckpoints();
long failed = counts.getNumberOfFailedCheckpoints();
CheckpointStatsCounts snapshot = counts.createSnapshot();
assertThat(snapshot.getNumberOfRestoredCheckpoints()).isEqualTo(restored);
assertThat(snapshot.getTotalNumberOfCheckpoints()).isEqualTo(total);
assertThat(snapshot.getNumberOfInProgressCheckpoints()).isEqualTo(inProgress);
assertThat(snapshot.getNumberOfCompletedCheckpoints()).isEqualTo(completed);
assertThat(snapshot.getNumberOfFailedCheckpoints()).isEqualTo(failed);
// Update the original
counts.incrementRestoredCheckpoints();
counts.incrementRestoredCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementCompletedCheckpoints();
counts.incrementInProgressCheckpoints();
counts.incrementFailedCheckpoints();
assertThat(snapshot.getNumberOfRestoredCheckpoints()).isEqualTo(restored);
assertThat(snapshot.getTotalNumberOfCheckpoints()).isEqualTo(total);
assertThat(snapshot.getNumberOfInProgressCheckpoints()).isEqualTo(inProgress);
assertThat(snapshot.getNumberOfCompletedCheckpoints()).isEqualTo(completed);
assertThat(snapshot.getNumberOfFailedCheckpoints()).isEqualTo(failed);
} | Tests that snapshots of the state are independent of the parent. | testCreateSnapshot | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsCountsTest.java | Apache-2.0 |
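The two tests above pin down a counting invariant (each in-progress increment bumps the total; completion or failure moves a checkpoint out of in-progress) and value-copy snapshot semantics. A hypothetical miniature of such a counter, not Flink's `CheckpointStatsCounts`, might look like:

```java
public class StatsCountsDemo {
    // Hypothetical miniature of a checkpoint stats counter.
    static class StatsCounts {
        private long inProgress;
        private long completed;
        private long failed;
        private long total;

        void incrementInProgress() {
            inProgress++;
            total++;
        }

        void incrementCompleted() {
            // Clamp so the in-progress count can never go negative.
            inProgress = Math.max(0, inProgress - 1);
            completed++;
        }

        void incrementFailed() {
            inProgress = Math.max(0, inProgress - 1);
            failed++;
        }

        /** Value copy: later updates to this instance do not affect the snapshot. */
        StatsCounts createSnapshot() {
            StatsCounts copy = new StatsCounts();
            copy.inProgress = inProgress;
            copy.completed = completed;
            copy.failed = failed;
            copy.total = total;
            return copy;
        }

        long getCompleted() {
            return completed;
        }

        long getInProgress() {
            return inProgress;
        }
    }

    public static void main(String[] args) {
        StatsCounts counts = new StatsCounts();
        counts.incrementInProgress();
        counts.incrementCompleted();
        StatsCounts snapshot = counts.createSnapshot();
        counts.incrementInProgress();
        counts.incrementFailed();
        // The snapshot keeps the values from the moment it was taken.
        System.out.println(snapshot.getCompleted()); // prints 1
        System.out.println(counts.getInProgress());  // prints 0
    }
}
```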
@Test
void testZeroMaxSizeHistory() {
CheckpointStatsHistory history = new CheckpointStatsHistory(0);
history.addInProgressCheckpoint(createPendingCheckpointStats(0));
assertThat(history.replacePendingCheckpointById(createCompletedCheckpointStats(0)))
.isFalse();
CheckpointStatsHistory snapshot = history.createSnapshot();
int counter = 0;
for (AbstractCheckpointStats ignored : snapshot.getCheckpoints()) {
counter++;
}
assertThat(counter).isZero();
assertThat(snapshot.getCheckpointById(0)).isNotNull();
} | Tests a checkpoint history with allowed size 0. | testZeroMaxSizeHistory | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | Apache-2.0 |
@Test
void testSizeOneHistory() {
CheckpointStatsHistory history = new CheckpointStatsHistory(1);
history.addInProgressCheckpoint(createPendingCheckpointStats(0));
history.addInProgressCheckpoint(createPendingCheckpointStats(1));
assertThat(history.replacePendingCheckpointById(createCompletedCheckpointStats(0)))
.isFalse();
assertThat(history.replacePendingCheckpointById(createCompletedCheckpointStats(1)))
.isTrue();
CheckpointStatsHistory snapshot = history.createSnapshot();
for (AbstractCheckpointStats stats : snapshot.getCheckpoints()) {
assertThat(stats.getCheckpointId()).isOne();
assertThat(stats.getStatus().isCompleted()).isTrue();
}
} | Tests a checkpoint history with allowed size 1. | testSizeOneHistory | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | Apache-2.0 |
@Test
void testCheckpointHistory() throws Exception {
CheckpointStatsHistory history = new CheckpointStatsHistory(3);
history.addInProgressCheckpoint(createPendingCheckpointStats(0));
CheckpointStatsHistory snapshot = history.createSnapshot();
for (AbstractCheckpointStats stats : snapshot.getCheckpoints()) {
assertThat(stats.getCheckpointId()).isZero();
assertThat(stats.getStatus().isInProgress()).isTrue();
}
history.addInProgressCheckpoint(createPendingCheckpointStats(1));
history.addInProgressCheckpoint(createPendingCheckpointStats(2));
history.addInProgressCheckpoint(createPendingCheckpointStats(3));
snapshot = history.createSnapshot();
// Check in progress stats.
Iterator<AbstractCheckpointStats> it = snapshot.getCheckpoints().iterator();
for (int i = 3; i > 0; i--) {
assertThat(it).hasNext();
AbstractCheckpointStats stats = it.next();
assertThat(stats.getCheckpointId()).isEqualTo(i);
assertThat(stats.getStatus().isInProgress()).isTrue();
}
assertThat(it).isExhausted();
// Update checkpoints
history.replacePendingCheckpointById(createFailedCheckpointStats(1));
history.replacePendingCheckpointById(createCompletedCheckpointStats(3));
history.replacePendingCheckpointById(createFailedCheckpointStats(2));
snapshot = history.createSnapshot();
it = snapshot.getCheckpoints().iterator();
assertThat(it).hasNext();
AbstractCheckpointStats stats = it.next();
assertThat(stats.getCheckpointId()).isEqualTo(3);
assertThat(snapshot.getCheckpointById(3)).isNotNull();
assertThat(stats.getStatus().isCompleted()).isTrue();
assertThat(snapshot.getCheckpointById(3).getStatus().isCompleted()).isTrue();
assertThat(it).hasNext();
stats = it.next();
assertThat(stats.getCheckpointId()).isEqualTo(2);
assertThat(snapshot.getCheckpointById(2)).isNotNull();
assertThat(stats.getStatus().isFailed()).isTrue();
assertThat(snapshot.getCheckpointById(2).getStatus().isFailed()).isTrue();
assertThat(it).hasNext();
stats = it.next();
assertThat(stats.getCheckpointId()).isOne();
assertThat(snapshot.getCheckpointById(1)).isNotNull();
assertThat(stats.getStatus().isFailed()).isTrue();
assertThat(snapshot.getCheckpointById(1).getStatus().isFailed()).isTrue();
assertThat(it).isExhausted();
} | Tests the checkpoint history with multiple checkpoints. | testCheckpointHistory | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsHistoryTest.java | Apache-2.0 |
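The eviction behaviour tested above — a size-N history retains only the N most recent checkpoints and iterates them newest first — can be sketched with a deque. `BoundedHistory` is an illustrative name, not Flink's `CheckpointStatsHistory`:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class HistoryDemo {
    // Illustrative bounded history: keeps the maxSize most recent entries, newest first.
    static class BoundedHistory<T> {
        private final int maxSize;
        private final Deque<T> items = new ArrayDeque<>();

        BoundedHistory(int maxSize) {
            this.maxSize = maxSize;
        }

        void add(T item) {
            if (maxSize == 0) {
                return; // nothing is retained in a zero-size history
            }
            if (items.size() == maxSize) {
                items.removeLast(); // evict the oldest entry
            }
            items.addFirst(item);
        }

        List<T> snapshot() {
            return new ArrayList<>(items);
        }
    }

    public static void main(String[] args) {
        BoundedHistory<Integer> history = new BoundedHistory<>(3);
        for (int checkpointId = 0; checkpointId <= 3; checkpointId++) {
            history.add(checkpointId);
        }
        // Checkpoint 0 was evicted; the rest come back newest first.
        System.out.println(history.snapshot()); // prints [3, 2, 1]
    }
}
```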
@Test
void testStatusValues() {
CheckpointStatsStatus inProgress = CheckpointStatsStatus.IN_PROGRESS;
assertThat(inProgress.isInProgress()).isTrue();
assertThat(inProgress.isCompleted()).isFalse();
assertThat(inProgress.isFailed()).isFalse();
CheckpointStatsStatus completed = CheckpointStatsStatus.COMPLETED;
assertThat(completed.isInProgress()).isFalse();
assertThat(completed.isCompleted()).isTrue();
assertThat(completed.isFailed()).isFalse();
CheckpointStatsStatus failed = CheckpointStatsStatus.FAILED;
assertThat(failed.isInProgress()).isFalse();
assertThat(failed.isCompleted()).isFalse();
assertThat(failed.isFailed()).isTrue();
} | Tests the getters of each status. | testStatusValues | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsStatusTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointStatsStatusTest.java | Apache-2.0 |
@Test
void testQuantiles() {
int stateSize = 100;
int processedData = 200;
int persistedData = 300;
boolean unalignedCheckpoint = true;
long triggerTimestamp = 1234;
long lastAck = triggerTimestamp + 123;
CompletedCheckpointStatsSummary summary = new CompletedCheckpointStatsSummary();
summary.updateSummary(
new CompletedCheckpointStats(
1L,
triggerTimestamp,
CheckpointProperties.forSavepoint(false, SavepointFormatType.CANONICAL),
1,
singletonMap(new JobVertexID(), new TaskStateStats(new JobVertexID(), 1)),
1,
stateSize,
processedData,
persistedData,
unalignedCheckpoint,
new SubtaskStateStats(0, lastAck),
""));
CompletedCheckpointStatsSummarySnapshot snapshot = summary.createSnapshot();
assertThat(snapshot.getStateSizeStats().getQuantile(1)).isCloseTo(stateSize, offset(0d));
assertThat(snapshot.getProcessedDataStats().getQuantile(1))
.isCloseTo(processedData, offset(0d));
assertThat(snapshot.getPersistedDataStats().getQuantile(1))
.isCloseTo(persistedData, offset(0d));
assertThat(snapshot.getEndToEndDurationStats().getQuantile(1))
.isCloseTo(lastAck - triggerTimestamp, offset(0d));
} | Simply tests that quantiles can be computed and fields are not permuted. | testQuantiles | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStatsSummaryTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStatsSummaryTest.java | Apache-2.0 |
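The test above only checks `getQuantile(1)` after a single sample, where any reasonable quantile definition returns that sample's value. As a loose illustration (Flink's summary statistics use their own implementation, not this one), a nearest-rank quantile over a small sample set can be computed as:

```java
import java.util.Arrays;

public class QuantileDemo {
    // Illustrative nearest-rank quantile over a small sample array; q in [0, 1].
    static double quantile(double[] samples, double q) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(q * sorted.length) - 1;
        return sorted[Math.max(0, index)]; // clamp so q = 0 maps to the minimum
    }

    public static void main(String[] args) {
        double[] durations = {123, 100, 300, 200};
        // q = 1 is the maximum; with a single sample it is that sample itself,
        // which is what the test above relies on.
        System.out.println(quantile(durations, 1.0)); // prints 300.0
        System.out.println(quantile(durations, 0.5)); // prints 123.0
    }
}
```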
protected CompletedCheckpointStore createRecoveredCompletedCheckpointStore(
int maxNumberOfCheckpointsToRetain) throws Exception {
return createRecoveredCompletedCheckpointStore(
maxNumberOfCheckpointsToRetain, Executors.directExecutor());
} | Creates the {@link CompletedCheckpointStore} implementation to be tested. | createRecoveredCompletedCheckpointStore | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testExceptionOnNoRetainedCheckpoints() {
assertThatExceptionOfType(Exception.class)
.isThrownBy(() -> createRecoveredCompletedCheckpointStore(0));
} | Tests that at least one checkpoint needs to be retained. | testExceptionOnNoRetainedCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testAddCheckpointMoreThanMaxRetained() throws Exception {
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore checkpoints = createRecoveredCompletedCheckpointStore(1);
CheckpointsCleaner checkpointsCleaner = new CheckpointsCleaner();
TestCompletedCheckpoint[] expected =
new TestCompletedCheckpoint[] {
createCheckpoint(0, sharedStateRegistry),
createCheckpoint(1, sharedStateRegistry),
createCheckpoint(2, sharedStateRegistry),
createCheckpoint(3, sharedStateRegistry)
};
// Add checkpoints
checkpoints.addCheckpointAndSubsumeOldestOne(expected[0], checkpointsCleaner, () -> {});
assertThat(checkpoints.getNumberOfRetainedCheckpoints()).isOne();
for (int i = 1; i < expected.length; i++) {
checkpoints.addCheckpointAndSubsumeOldestOne(expected[i], checkpointsCleaner, () -> {});
// The ZooKeeper implementation discards asynchronously
expected[i - 1].awaitDiscard();
assertThat(expected[i - 1].isDiscarded()).isTrue();
assertThat(checkpoints.getNumberOfRetainedCheckpoints()).isOne();
}
} | Tests that adding more checkpoints than retained discards the correct checkpoints (using the
correct class loader). | testAddCheckpointMoreThanMaxRetained | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | Apache-2.0 |
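The subsumption behaviour exercised above — adding a checkpoint beyond the retention limit discards the oldest one — can be sketched with a plain deque. This is a hypothetical simplification for illustration, not the store implementation under test:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RetainingStore {

    private final int maxRetained;
    private final Deque<Long> retained = new ArrayDeque<>();

    public RetainingStore(int maxRetained) {
        if (maxRetained < 1) {
            // mirrors testExceptionOnNoRetainedCheckpoints: at least one must be retained
            throw new IllegalArgumentException("Must retain at least one checkpoint");
        }
        this.maxRetained = maxRetained;
    }

    // Adds a checkpoint id and returns the subsumed (oldest) id, or null if none.
    public Long addAndSubsumeOldest(long checkpointId) {
        retained.addLast(checkpointId);
        return retained.size() > maxRetained ? retained.removeFirst() : null;
    }

    public int numberOfRetainedCheckpoints() {
        return retained.size();
    }
}
```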
@Test
void testEmptyState() throws Exception {
CompletedCheckpointStore checkpoints = createRecoveredCompletedCheckpointStore(1);
assertThat(checkpoints.getLatestCheckpoint()).isNull();
assertThat(checkpoints.getAllCheckpoints()).isEmpty();
assertThat(checkpoints.getNumberOfRetainedCheckpoints()).isZero();
} | Tests that
<ul>
<li>{@link CompletedCheckpointStore#getLatestCheckpoint()} returns <code>null</code> ,
<li>{@link CompletedCheckpointStore#getAllCheckpoints()} returns an empty list,
<li>{@link CompletedCheckpointStore#getNumberOfRetainedCheckpoints()} returns 0.
</ul> | testEmptyState | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testGetAllCheckpoints() throws Exception {
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore checkpoints = createRecoveredCompletedCheckpointStore(4);
TestCompletedCheckpoint[] expected =
new TestCompletedCheckpoint[] {
createCheckpoint(0, sharedStateRegistry),
createCheckpoint(1, sharedStateRegistry),
createCheckpoint(2, sharedStateRegistry),
createCheckpoint(3, sharedStateRegistry)
};
for (TestCompletedCheckpoint checkpoint : expected) {
checkpoints.addCheckpointAndSubsumeOldestOne(
checkpoint, new CheckpointsCleaner(), () -> {});
}
List<CompletedCheckpoint> actual = checkpoints.getAllCheckpoints();
assertThat(actual).hasSameSizeAs(expected).containsExactly(expected);
} | Tests that all added checkpoints are returned. | testGetAllCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testCleanUpOnSubsume() throws Exception {
OperatorState state = mock(OperatorState.class);
Map<OperatorID, OperatorState> operatorStates = new HashMap<>();
operatorStates.put(new OperatorID(), state);
EmptyStreamStateHandle metadata = new EmptyStreamStateHandle();
TestCompletedCheckpointStorageLocation location =
new TestCompletedCheckpointStorageLocation(metadata, "ptr");
CheckpointProperties props =
new CheckpointProperties(
false, CheckpointType.CHECKPOINT, true, false, false, false, false, false);
CompletedCheckpoint checkpoint =
new CompletedCheckpoint(
new JobID(),
0,
0,
1,
operatorStates,
Collections.emptyList(),
props,
location,
null);
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
checkpoint.registerSharedStatesAfterRestored(
sharedStateRegistry, RecoveryClaimMode.DEFAULT);
verify(state, times(1)).registerSharedStates(sharedStateRegistry, 0L);
// Subsume
checkpoint.markAsDiscardedOnSubsume().discard();
verify(state, times(1)).discardState();
assertThat(location.isDisposed()).isTrue();
assertThat(metadata.isDisposed()).isTrue();
} | Tests that the garbage collection properties are respected when subsuming checkpoints. | testCleanUpOnSubsume | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointTest.java | Apache-2.0 |
@Test
void testCleanUpOnShutdown() throws Exception {
JobStatus[] terminalStates =
new JobStatus[] {
JobStatus.FINISHED, JobStatus.CANCELED, JobStatus.FAILED, JobStatus.SUSPENDED
};
for (JobStatus status : terminalStates) {
OperatorState state = mock(OperatorState.class);
Map<OperatorID, OperatorState> operatorStates = new HashMap<>();
operatorStates.put(new OperatorID(), state);
EmptyStreamStateHandle retainedHandle = new EmptyStreamStateHandle();
TestCompletedCheckpointStorageLocation retainedLocation =
new TestCompletedCheckpointStorageLocation(retainedHandle, "ptr");
// Keep
CheckpointProperties retainProps =
new CheckpointProperties(
false,
CheckpointType.CHECKPOINT,
false,
false,
false,
false,
false,
false);
CompletedCheckpoint checkpoint =
new CompletedCheckpoint(
new JobID(),
0,
0,
1,
new HashMap<>(operatorStates),
Collections.emptyList(),
retainProps,
retainedLocation,
null);
checkpoint.markAsDiscardedOnShutdown(status).discard();
verify(state, times(0)).discardState();
assertThat(retainedLocation.isDisposed()).isFalse();
assertThat(retainedHandle.isDisposed()).isFalse();
// Discard
EmptyStreamStateHandle discardHandle = new EmptyStreamStateHandle();
TestCompletedCheckpointStorageLocation discardLocation =
new TestCompletedCheckpointStorageLocation(discardHandle, "ptr");
            // Discard on all terminal states
CheckpointProperties discardProps =
new CheckpointProperties(
false, CheckpointType.CHECKPOINT, true, true, true, true, true, false);
checkpoint =
new CompletedCheckpoint(
new JobID(),
0,
0,
1,
new HashMap<>(operatorStates),
Collections.emptyList(),
discardProps,
discardLocation,
null);
checkpoint.markAsDiscardedOnShutdown(status).discard();
verify(state, times(1)).discardState();
assertThat(discardLocation.isDisposed()).isTrue();
assertThat(discardHandle.isDisposed()).isTrue();
}
} | Tests that the garbage collection properties are respected when shutting down. | testCleanUpOnShutdown | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CompletedCheckpointTest.java | Apache-2.0 |
@Test
void testTrackerWithoutHistory() throws Exception {
JobVertexID jobVertexID = new JobVertexID();
ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(jobVertexID, 3, 256)
.build(EXECUTOR_RESOURCE.getExecutor());
ExecutionJobVertex jobVertex = graph.getJobVertex(jobVertexID);
CheckpointStatsTracker tracker =
new DefaultCheckpointStatsTracker(
0, UnregisteredMetricGroups.createUnregisteredJobManagerJobMetricGroup());
PendingCheckpointStats pending =
tracker.reportPendingCheckpoint(
0,
1,
CheckpointProperties.forCheckpoint(
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION),
singletonMap(jobVertexID, jobVertex.getParallelism()));
pending.reportSubtaskStats(jobVertexID, createSubtaskStats(0));
pending.reportSubtaskStats(jobVertexID, createSubtaskStats(1));
pending.reportSubtaskStats(jobVertexID, createSubtaskStats(2));
tracker.reportCompletedCheckpoint(pending.toCompletedCheckpointStats(null));
CheckpointStatsSnapshot snapshot = tracker.createSnapshot();
// History should be empty
assertThat(snapshot.getHistory().getCheckpoints().iterator()).isExhausted();
// Counts should be available
CheckpointStatsCounts counts = snapshot.getCounts();
assertThat(counts.getNumberOfCompletedCheckpoints()).isOne();
assertThat(counts.getTotalNumberOfCheckpoints()).isOne();
// Summary should be available
CompletedCheckpointStatsSummarySnapshot summary = snapshot.getSummaryStats();
assertThat(summary.getStateSizeStats().getCount()).isOne();
assertThat(summary.getEndToEndDurationStats().getCount()).isOne();
// Latest completed checkpoint
assertThat(snapshot.getHistory().getLatestCompletedCheckpoint()).isNotNull();
assertThat(snapshot.getHistory().getLatestCompletedCheckpoint().getCheckpointId()).isZero();
} | Tests that the number of remembered checkpoints configuration is respected. | testTrackerWithoutHistory | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointStatsTrackerTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultCheckpointStatsTrackerTest.java | Apache-2.0 |
@Test
void testRecoverSortedCheckpoints() throws Exception {
final TestingStateHandleStore<CompletedCheckpoint> stateHandleStore =
builder.setGetAllSupplier(() -> createStateHandles(3)).build();
final CompletedCheckpointStore completedCheckpointStore =
createCompletedCheckpointStore(stateHandleStore);
final List<CompletedCheckpoint> recoveredCompletedCheckpoint =
completedCheckpointStore.getAllCheckpoints();
assertThat(recoveredCompletedCheckpoint).hasSize(3);
final List<Long> checkpointIds =
recoveredCompletedCheckpoint.stream()
.map(CompletedCheckpoint::getCheckpointID)
.collect(Collectors.toList());
assertThat(checkpointIds).containsExactly(1L, 2L, 3L);
} | We have three completed checkpoints (1, 2, 3) in the state handle store. We expect that {@link
DefaultCompletedCheckpointStoreUtils#retrieveCompletedCheckpoints(StateHandleStore,
CheckpointStoreUtil)} should recover the sorted checkpoints by name. | testRecoverSortedCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultCompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultCompletedCheckpointStoreTest.java | Apache-2.0 |
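Recovering the checkpoints in sorted order, as asserted above, amounts to ordering the retrieved handles by checkpoint id before rebuilding the store. A hypothetical sketch of that ordering step over bare ids:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortedRecovery {

    // Orders recovered checkpoint ids ascending, so the latest checkpoint comes last.
    public static List<Long> sortRecovered(List<Long> recoveredIds) {
        List<Long> sorted = new ArrayList<>(recoveredIds);
        sorted.sort(Comparator.naturalOrder());
        return sorted;
    }
}
```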
@Test
void testClosingSchedulerShutsDownCheckpointCoordinatorOnFailedExecutionGraph()
throws Exception {
final CompletableFuture<JobStatus> counterShutdownFuture = new CompletableFuture<>();
CheckpointIDCounter counter =
TestingCheckpointIDCounter.createStoreWithShutdownCheckAndNoStartAction(
counterShutdownFuture);
final CompletableFuture<JobStatus> storeShutdownFuture = new CompletableFuture<>();
CompletedCheckpointStore store =
TestingCompletedCheckpointStore
.createStoreWithShutdownCheckAndNoCompletedCheckpoints(storeShutdownFuture);
final SchedulerBase scheduler = createSchedulerAndEnableCheckpointing(counter, store);
final ExecutionGraph graph = scheduler.getExecutionGraph();
final CheckpointCoordinator checkpointCoordinator = graph.getCheckpointCoordinator();
assertThat(checkpointCoordinator).isNotNull();
assertThat(checkpointCoordinator.isShutdown()).isFalse();
graph.failJob(new Exception("Test Exception"), System.currentTimeMillis());
scheduler.closeAsync().get();
assertThat(checkpointCoordinator.isShutdown()).isTrue();
assertThat(counterShutdownFuture).isCompletedWithValue(JobStatus.FAILED);
assertThat(storeShutdownFuture).isCompletedWithValue(JobStatus.FAILED);
} | Tests that the checkpoint coordinator is shut down if the execution graph is failed. | testClosingSchedulerShutsDownCheckpointCoordinatorOnFailedExecutionGraph | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultSchedulerCheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultSchedulerCheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testClosingSchedulerShutsDownCheckpointCoordinatorOnFinishedExecutionGraph()
throws Exception {
final CompletableFuture<JobStatus> counterShutdownFuture = new CompletableFuture<>();
CheckpointIDCounter counter =
TestingCheckpointIDCounter.createStoreWithShutdownCheckAndNoStartAction(
counterShutdownFuture);
final CompletableFuture<JobStatus> storeShutdownFuture = new CompletableFuture<>();
CompletedCheckpointStore store =
TestingCompletedCheckpointStore
.createStoreWithShutdownCheckAndNoCompletedCheckpoints(storeShutdownFuture);
final SchedulerBase scheduler = createSchedulerAndEnableCheckpointing(counter, store);
final ExecutionGraph graph = scheduler.getExecutionGraph();
final CheckpointCoordinator checkpointCoordinator = graph.getCheckpointCoordinator();
assertThat(checkpointCoordinator).isNotNull();
assertThat(checkpointCoordinator.isShutdown()).isFalse();
scheduler.startScheduling();
for (ExecutionVertex executionVertex : graph.getAllExecutionVertices()) {
final Execution currentExecutionAttempt = executionVertex.getCurrentExecutionAttempt();
scheduler.updateTaskExecutionState(
new TaskExecutionState(
currentExecutionAttempt.getAttemptId(), ExecutionState.FINISHED));
}
assertThat(graph.getTerminationFuture()).isCompletedWithValue(JobStatus.FINISHED);
scheduler.closeAsync().get();
assertThat(checkpointCoordinator.isShutdown()).isTrue();
assertThat(counterShutdownFuture).isCompletedWithValue(JobStatus.FINISHED);
assertThat(storeShutdownFuture).isCompletedWithValue(JobStatus.FINISHED);
} | Tests that the checkpoint coordinator is shut down if the execution graph is finished. | testClosingSchedulerShutsDownCheckpointCoordinatorOnFinishedExecutionGraph | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultSchedulerCheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/DefaultSchedulerCheckpointCoordinatorTest.java | Apache-2.0 |
@BeforeEach
void setUp() {
manualThreadExecutor = new ManuallyTriggeredScheduledExecutor();
} | Tests for actions of {@link CheckpointCoordinator} on task failures. | setUp | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/FailoverStrategyCheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/FailoverStrategyCheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testAbortPendingCheckpointsWithTriggerValidation() throws Exception {
final int maxConcurrentCheckpoints = ThreadLocalRandom.current().nextInt(10) + 1;
ExecutionGraph graph =
new CheckpointCoordinatorTestingUtils.CheckpointExecutionGraphBuilder()
.addJobVertex(new JobVertexID())
.setTransitToRunning(false)
.build(EXECUTOR_RESOURCE.getExecutor());
CheckpointCoordinatorConfiguration checkpointCoordinatorConfiguration =
new CheckpointCoordinatorConfiguration(
Integer.MAX_VALUE,
Integer.MAX_VALUE,
0,
maxConcurrentCheckpoints,
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION,
true,
false,
0,
0);
CheckpointCoordinator checkpointCoordinator =
new CheckpointCoordinator(
graph.getJobID(),
checkpointCoordinatorConfiguration,
Collections.emptyList(),
new StandaloneCheckpointIDCounter(),
new StandaloneCompletedCheckpointStore(1),
new JobManagerCheckpointStorage(),
Executors.directExecutor(),
new CheckpointsCleaner(),
manualThreadExecutor,
mock(CheckpointFailureManager.class),
new DefaultCheckpointPlanCalculator(
graph.getJobID(),
new ExecutionGraphCheckpointPlanCalculatorContext(graph),
graph.getVerticesTopologically(),
false),
mock(CheckpointStatsTracker.class));
        // switch the current executions' state to RUNNING so that checkpoints can be triggered.
graph.transitionToRunning();
graph.getAllExecutionVertices()
.forEach(
task ->
task.getCurrentExecutionAttempt()
.transitionState(ExecutionState.RUNNING));
checkpointCoordinator.startCheckpointScheduler();
assertThat(checkpointCoordinator.isCurrentPeriodicTriggerAvailable()).isTrue();
// only trigger the periodic scheduling
        // we can't trigger all scheduled tasks, because there is also a cancellation scheduled
manualThreadExecutor.triggerNonPeriodicScheduledTasks(
CheckpointCoordinator.ScheduledTrigger.class);
manualThreadExecutor.triggerAll();
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isOne();
for (int i = 1; i < maxConcurrentCheckpoints; i++) {
checkpointCoordinator.triggerCheckpoint(false);
manualThreadExecutor.triggerAll();
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isEqualTo(i + 1);
assertThat(checkpointCoordinator.isCurrentPeriodicTriggerAvailable()).isTrue();
}
        // As we only support a limited number of concurrent checkpoints, once more
        // checkpoints than the limit have been triggered,
        // the currentPeriodicTrigger would be assigned null.
checkpointCoordinator.triggerCheckpoint(false);
manualThreadExecutor.triggerAll();
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints())
.isEqualTo(maxConcurrentCheckpoints);
checkpointCoordinator.abortPendingCheckpoints(
new CheckpointException(CheckpointFailureReason.JOB_FAILOVER_REGION));
        // after aborting checkpoints, we ensure the currentPeriodicTrigger is still available.
assertThat(checkpointCoordinator.isCurrentPeriodicTriggerAvailable()).isTrue();
assertThat(checkpointCoordinator.getNumberOfPendingCheckpoints()).isZero();
} | Tests that {@link CheckpointCoordinator#abortPendingCheckpoints(CheckpointException)} called
on job failover could handle the {@code currentPeriodicTrigger} null case well. | testAbortPendingCheckpointsWithTriggerValidation | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/FailoverStrategyCheckpointCoordinatorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/FailoverStrategyCheckpointCoordinatorTest.java | Apache-2.0 |
@Test
void testCompletionFuture() throws Exception {
CheckpointProperties props =
new CheckpointProperties(
false,
SavepointType.savepoint(SavepointFormatType.CANONICAL),
false,
false,
false,
false,
false,
false);
// Abort declined
PendingCheckpoint pending = createPendingCheckpoint(props);
CompletableFuture<CompletedCheckpoint> future = pending.getCompletionFuture();
assertThat(future.isDone()).isFalse();
abort(pending, CheckpointFailureReason.CHECKPOINT_DECLINED);
assertThat(future.isDone()).isTrue();
// Abort expired
pending = createPendingCheckpoint(props);
future = pending.getCompletionFuture();
assertThat(future.isDone()).isFalse();
        abort(pending, CheckpointFailureReason.CHECKPOINT_EXPIRED);
assertThat(future.isDone()).isTrue();
// Abort subsumed
pending = createPendingCheckpoint(props);
future = pending.getCompletionFuture();
assertThat(future.isDone()).isFalse();
        abort(pending, CheckpointFailureReason.CHECKPOINT_SUBSUMED);
assertThat(future.isDone()).isTrue();
// Finalize (all ACK'd)
pending = createPendingCheckpoint(props);
future = pending.getCompletionFuture();
assertThat(future.isDone()).isFalse();
        pending.acknowledgeTask(ATTEMPT_ID, null, new CheckpointMetrics());
        assertThat(pending.areTasksFullyAcknowledged()).isTrue();
        pending.finalizeCheckpoint(new CheckpointsCleaner(), () -> {}, Executors.directExecutor());
        assertThat(future.isDone()).isTrue();
        // Finalize (missing ACKs)
        PendingCheckpoint pendingCheckpoint = createPendingCheckpoint(props);
        future = pendingCheckpoint.getCompletionFuture();
        assertThat(future.isDone()).isFalse();
assertThatThrownBy(
() ->
pendingCheckpoint.finalizeCheckpoint(
new CheckpointsCleaner(),
() -> {},
Executors.directExecutor()),
"Did not throw expected Exception")
.isInstanceOf(IllegalStateException.class);
} | Tests that the completion future succeeds on finalize and fails on abort and on failures
during finalize. | testCompletionFuture | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PendingCheckpointTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PendingCheckpointTest.java | Apache-2.0 |
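The contract under test — a completion future that succeeds on finalize and fails on abort — follows the standard CompletableFuture pattern. A minimal stand-alone sketch with hypothetical names, not Flink's PendingCheckpoint API:

```java
import java.util.concurrent.CompletableFuture;

public class PendingOperation {

    private final CompletableFuture<String> completionFuture = new CompletableFuture<>();

    public CompletableFuture<String> getCompletionFuture() {
        return completionFuture;
    }

    // Succeeds the future; analogous to finalizing once all tasks have acknowledged.
    public void finalizeOperation(String result) {
        completionFuture.complete(result);
    }

    // Fails the future with the abort cause; analogous to aborting a pending checkpoint.
    public void abort(String reason) {
        completionFuture.completeExceptionally(new IllegalStateException(reason));
    }
}
```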
@Test
void testNullSubtaskStateLeadsToStatelessTask() throws Exception {
PendingCheckpoint pending =
createPendingCheckpoint(
CheckpointProperties.forCheckpoint(
CheckpointRetentionPolicy.NEVER_RETAIN_AFTER_TERMINATION));
pending.acknowledgeTask(ATTEMPT_ID, null, mock(CheckpointMetrics.class));
final OperatorState expectedState =
new OperatorState(null, null, OPERATOR_ID, PARALLELISM, MAX_PARALLELISM);
        assertThat(pending.getOperatorStates())
                .isEqualTo(Collections.singletonMap(OPERATOR_ID, expectedState));
} | FLINK-5985.
<p>Ensures that subtasks that acknowledge their state as 'null' are considered stateless.
This means that they should not appear in the task states map of the checkpoint. | testNullSubtaskStateLeadsToStatelessTask | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PendingCheckpointTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PendingCheckpointTest.java | Apache-2.0 |
@Test
void testPrioritization() {
for (int i = 0; i < 81; ++i) { // 3^4 possible configurations.
OperatorSubtaskState primaryAndFallback = generateForConfiguration(i);
for (int j = 0; j < 9; ++j) { // we test 3^2 configurations.
CreateAltSubtaskStateMode modeFirst = CreateAltSubtaskStateMode.byCode(j % 3);
OperatorSubtaskState bestAlternative =
modeFirst.createAlternativeSubtaskState(primaryAndFallback);
CreateAltSubtaskStateMode modeSecond =
CreateAltSubtaskStateMode.byCode((j / 3) % 3);
OperatorSubtaskState secondBestAlternative =
modeSecond.createAlternativeSubtaskState(primaryAndFallback);
List<OperatorSubtaskState> orderedAlternativesList =
Arrays.asList(bestAlternative, secondBestAlternative);
List<OperatorSubtaskState> validAlternativesList = new ArrayList<>(3);
if (modeFirst == CreateAltSubtaskStateMode.ONE_VALID_STATE_HANDLE) {
validAlternativesList.add(bestAlternative);
}
if (modeSecond == CreateAltSubtaskStateMode.ONE_VALID_STATE_HANDLE) {
validAlternativesList.add(secondBestAlternative);
}
validAlternativesList.add(primaryAndFallback);
PrioritizedOperatorSubtaskState.Builder builder =
new PrioritizedOperatorSubtaskState.Builder(
primaryAndFallback, orderedAlternativesList);
PrioritizedOperatorSubtaskState prioritizedOperatorSubtaskState = builder.build();
OperatorSubtaskState[] validAlternatives =
validAlternativesList.toArray(new OperatorSubtaskState[0]);
OperatorSubtaskState[] onlyPrimary =
new OperatorSubtaskState[] {primaryAndFallback};
assertThat(
checkResultAsExpected(
OperatorSubtaskState::getManagedOperatorState,
PrioritizedOperatorSubtaskState
::getPrioritizedManagedOperatorState,
prioritizedOperatorSubtaskState,
primaryAndFallback.getManagedOperatorState().size() == 1
? validAlternatives
: onlyPrimary))
.isTrue();
StateObjectCollection<KeyedStateHandle> expManagedKeyed =
computeExpectedMixedState(
orderedAlternativesList,
primaryAndFallback,
OperatorSubtaskState::getManagedKeyedState,
KeyedStateHandle::getKeyGroupRange);
assertResultAsExpected(
expManagedKeyed,
primaryAndFallback.getManagedKeyedState(),
prioritizedOperatorSubtaskState.getPrioritizedManagedKeyedState());
assertThat(
checkResultAsExpected(
OperatorSubtaskState::getRawOperatorState,
PrioritizedOperatorSubtaskState
::getPrioritizedRawOperatorState,
prioritizedOperatorSubtaskState,
primaryAndFallback.getRawOperatorState().size() == 1
? validAlternatives
: onlyPrimary))
.isTrue();
StateObjectCollection<KeyedStateHandle> expRawKeyed =
computeExpectedMixedState(
orderedAlternativesList,
primaryAndFallback,
OperatorSubtaskState::getRawKeyedState,
KeyedStateHandle::getKeyGroupRange);
assertResultAsExpected(
expRawKeyed,
primaryAndFallback.getRawKeyedState(),
prioritizedOperatorSubtaskState.getPrioritizedRawKeyedState());
}
}
} | This test attempts to cover (almost) the full space of significantly different options for
verifying and prioritizing {@link OperatorSubtaskState} options for local recovery over
primary/remote state handles. | testPrioritization | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | Apache-2.0 |
private OperatorSubtaskState generateForConfiguration(int conf) {
Preconditions.checkState(conf >= 0 && conf <= 80); // 3^4
final int numModes = 3;
KeyGroupRange keyGroupRange = new KeyGroupRange(0, 4);
KeyGroupRange keyGroupRange1 = new KeyGroupRange(0, 2);
KeyGroupRange keyGroupRange2 = new KeyGroupRange(3, 4);
int div = 1;
int mode = (conf / div) % numModes;
StateObjectCollection<OperatorStateHandle> s1 =
mode == 0
? StateObjectCollection.empty()
: mode == 1
? new StateObjectCollection<>(
Collections.singletonList(
createNewOperatorStateHandle(2, RANDOM)))
: new StateObjectCollection<>(
Arrays.asList(
createNewOperatorStateHandle(2, RANDOM),
createNewOperatorStateHandle(2, RANDOM)));
div *= numModes;
mode = (conf / div) % numModes;
StateObjectCollection<OperatorStateHandle> s2 =
mode == 0
? StateObjectCollection.empty()
: mode == 1
? new StateObjectCollection<>(
Collections.singletonList(
createNewOperatorStateHandle(2, RANDOM)))
: new StateObjectCollection<>(
Arrays.asList(
createNewOperatorStateHandle(2, RANDOM),
createNewOperatorStateHandle(2, RANDOM)));
div *= numModes;
mode = (conf / div) % numModes;
StateObjectCollection<KeyedStateHandle> s3 =
mode == 0
? StateObjectCollection.empty()
: mode == 1
? new StateObjectCollection<>(
Collections.singletonList(
createNewKeyedStateHandle(keyGroupRange)))
: new StateObjectCollection<>(
Arrays.asList(
createNewKeyedStateHandle(keyGroupRange1),
createNewKeyedStateHandle(keyGroupRange2)));
div *= numModes;
mode = (conf / div) % numModes;
StateObjectCollection<KeyedStateHandle> s4 =
mode == 0
? StateObjectCollection.empty()
: mode == 1
? new StateObjectCollection<>(
Collections.singletonList(
createNewKeyedStateHandle(keyGroupRange)))
: new StateObjectCollection<>(
Arrays.asList(
createNewKeyedStateHandle(keyGroupRange1),
createNewKeyedStateHandle(keyGroupRange2)));
return OperatorSubtaskState.builder()
.setManagedOperatorState(s1)
.setRawOperatorState(s2)
.setManagedKeyedState(s3)
.setRawKeyedState(s4)
.build();
} | Generator for all 3^4 = 81 possible configurations of an OperatorSubtaskState: - 4 different
sub-states: managed/raw + operator/keyed. - 3 different options per sub-state: empty
(simulate no state), single handle (simulate recovery), 2 handles (simulate e.g. rescaling) | generateForConfiguration | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | Apache-2.0 |
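The repeated `(conf / div) % numModes` pattern above decodes a single integer index into independent base-3 digits, one per sub-state, so that iterating the index enumerates every combination once. The enumeration trick in isolation, as a hypothetical helper:

```java
public class MixedRadix {

    // Decodes index into `digits` base-`radix` digits, least significant first;
    // iterating index over [0, radix^digits) visits every digit combination exactly once.
    public static int[] decode(int index, int radix, int digits) {
        int[] out = new int[digits];
        for (int i = 0; i < digits; i++) {
            out[i] = index % radix;
            index /= radix;
        }
        return out;
    }
}
```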
@Override
public OperatorSubtaskState createAlternativeSubtaskState(
OperatorSubtaskState primaryOriginal) {
return OperatorSubtaskState.builder()
.setManagedOperatorState(
deepCopyFirstElement(primaryOriginal.getManagedOperatorState()))
.setRawOperatorState(
deepCopyFirstElement(primaryOriginal.getRawOperatorState()))
.setManagedKeyedState(
deepCopyFirstElement(primaryOriginal.getManagedKeyedState()))
.setRawKeyedState(deepCopyFirstElement(primaryOriginal.getRawKeyedState()))
.setInputChannelState(deepCopy(primaryOriginal.getInputChannelState()))
.setResultSubpartitionState(
deepCopy(primaryOriginal.getResultSubpartitionState()))
.build();
} | mode 0: one valid state handle (deep copy of original). | createAlternativeSubtaskState | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | Apache-2.0 |
public boolean checkContainedObjectsReferentialEquality(
StateObjectCollection<?> a, StateObjectCollection<?> b) {
if (a == b) {
return true;
}
if (a == null || b == null) {
return false;
}
if (a.size() != b.size()) {
return false;
}
Iterator<?> bIter = b.iterator();
for (StateObject stateObject : a) {
if (!bIter.hasNext() || bIter.next() != stateObject) {
return false;
}
}
return true;
} | Returns true iff, in iteration order, all objects in the first collection are equal by
reference to their corresponding object (by order) in the second collection and the size of
the collections is equal. | checkContainedObjectsReferentialEquality | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/PrioritizedOperatorSubtaskStateTest.java | Apache-2.0 |
@Test
void testSimpleAccess() {
long checkpointId = Integer.MAX_VALUE + 1L;
CheckpointProperties props =
new CheckpointProperties(
true,
SavepointType.savepoint(SavepointFormatType.CANONICAL),
false,
false,
true,
false,
true,
false);
long restoreTimestamp = Integer.MAX_VALUE + 1L;
String externalPath = "external-path";
RestoredCheckpointStats restored =
new RestoredCheckpointStats(
checkpointId, props, restoreTimestamp, externalPath, 42);
assertThat(restored.getCheckpointId()).isEqualTo(checkpointId);
assertThat(restored.getProperties()).isEqualTo(props);
assertThat(restored.getRestoreTimestamp()).isEqualTo(restoreTimestamp);
assertThat(restored.getExternalPath()).isEqualTo(externalPath);
assertThat(restored.getStateSize()).isEqualTo(42);
} | Tests simple access to restore properties. | testSimpleAccess | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/RestoredCheckpointStatsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/RestoredCheckpointStatsTest.java | Apache-2.0 |
@Override
protected CheckpointIDCounter createCheckpointIdCounter() throws Exception {
return new StandaloneCheckpointIDCounter();
} | Unit tests for the {@link StandaloneCheckpointIDCounter}. The tests are inherited from the test
base class {@link CheckpointIDCounterTestBase}. | createCheckpointIdCounter | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounterTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCheckpointIDCounterTest.java | Apache-2.0 |
@Override
protected CompletedCheckpointStore createRecoveredCompletedCheckpointStore(
int maxNumberOfCheckpointsToRetain, Executor executor) throws Exception {
return new StandaloneCompletedCheckpointStore(maxNumberOfCheckpointsToRetain);
} | Tests for basic {@link CompletedCheckpointStore} contract. | createRecoveredCompletedCheckpointStore | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testSuspendDiscardsCheckpoints() throws Exception {
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore store = createRecoveredCompletedCheckpointStore(1);
TestCompletedCheckpoint checkpoint = createCheckpoint(0, sharedStateRegistry);
Collection<OperatorState> taskStates = checkpoint.getOperatorStates().values();
store.addCheckpointAndSubsumeOldestOne(checkpoint, new CheckpointsCleaner(), () -> {});
assertThat(store.getNumberOfRetainedCheckpoints()).isOne();
verifyCheckpointRegistered(taskStates);
store.shutdown(JobStatus.SUSPENDED, new CheckpointsCleaner());
assertThat(store.getNumberOfRetainedCheckpoints()).isZero();
assertThat(checkpoint.isDiscarded()).isTrue();
verifyCheckpointDiscarded(taskStates);
Tests that suspending discards all checkpoints (as they cannot be recovered later in standalone
recovery mode). | testSuspendDiscardsCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StandaloneCompletedCheckpointStoreTest.java | Apache-2.0 |
@Test
void testReDistributeCombinedPartitionableStates() {
OperatorID operatorID = new OperatorID();
OperatorState operatorState = new OperatorState(null, null, operatorID, 2, 4);
Map<String, OperatorStateHandle.StateMetaInfo> metaInfoMap1 = new HashMap<>(6);
metaInfoMap1.put(
"t-1",
new OperatorStateHandle.StateMetaInfo(
new long[] {0}, OperatorStateHandle.Mode.UNION));
metaInfoMap1.put(
"t-2",
new OperatorStateHandle.StateMetaInfo(
new long[] {22, 44}, OperatorStateHandle.Mode.UNION));
metaInfoMap1.put(
"t-3",
new OperatorStateHandle.StateMetaInfo(
new long[] {52, 63}, OperatorStateHandle.Mode.SPLIT_DISTRIBUTE));
metaInfoMap1.put(
"t-4",
new OperatorStateHandle.StateMetaInfo(
new long[] {67, 74, 75}, OperatorStateHandle.Mode.BROADCAST));
metaInfoMap1.put(
"t-5",
new OperatorStateHandle.StateMetaInfo(
new long[] {77, 88, 92}, OperatorStateHandle.Mode.BROADCAST));
metaInfoMap1.put(
"t-6",
new OperatorStateHandle.StateMetaInfo(
new long[] {101, 123, 127}, OperatorStateHandle.Mode.BROADCAST));
OperatorStateHandle osh1 =
new OperatorStreamStateHandle(
metaInfoMap1, new ByteStreamStateHandle("test1", new byte[130]));
operatorState.putState(
0, OperatorSubtaskState.builder().setManagedOperatorState(osh1).build());
Map<String, OperatorStateHandle.StateMetaInfo> metaInfoMap2 = new HashMap<>(3);
metaInfoMap2.put(
"t-1",
new OperatorStateHandle.StateMetaInfo(
new long[] {0}, OperatorStateHandle.Mode.UNION));
metaInfoMap2.put(
"t-4",
new OperatorStateHandle.StateMetaInfo(
new long[] {20, 27, 28}, OperatorStateHandle.Mode.BROADCAST));
metaInfoMap2.put(
"t-5",
new OperatorStateHandle.StateMetaInfo(
new long[] {30, 44, 48}, OperatorStateHandle.Mode.BROADCAST));
metaInfoMap2.put(
"t-6",
new OperatorStateHandle.StateMetaInfo(
new long[] {57, 79, 83}, OperatorStateHandle.Mode.BROADCAST));
OperatorStateHandle osh2 =
new OperatorStreamStateHandle(
metaInfoMap2, new ByteStreamStateHandle("test2", new byte[86]));
operatorState.putState(
1, OperatorSubtaskState.builder().setManagedOperatorState(osh2).build());
// rescale up case, parallelism 2 --> 3
verifyCombinedPartitionableStateRescale(operatorState, operatorID, 2, 3);
// rescale down case, parallelism 2 --> 1
verifyCombinedPartitionableStateRescale(operatorState, operatorID, 2, 1);
// not rescale
verifyCombinedPartitionableStateRescale(operatorState, operatorID, 2, 2);
} | Verify repartition logic on partitionable states with all modes. | testReDistributeCombinedPartitionableStates | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperationTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperationTest.java | Apache-2.0 |
@Test
void testOnlyUpstreamChannelRescaleStateAssignment()
throws JobException, JobExecutionException {
Random random = new Random();
OperatorSubtaskState upstreamOpState =
OperatorSubtaskState.builder()
.setResultSubpartitionState(
new StateObjectCollection<>(
asList(
createNewResultSubpartitionStateHandle(10, random),
createNewResultSubpartitionStateHandle(
10, random))))
.build();
testOnlyUpstreamOrDownstreamRescalingInternal(upstreamOpState, null, 5, 7);
} | FLINK-31963: Tests rescaling for stateless operators and upstream result partition state. | testOnlyUpstreamChannelRescaleStateAssignment | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperationTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperationTest.java | Apache-2.0 |
public static KeyedStateHandle createNewKeyedStateHandle(KeyGroupRange keyGroupRange) {
return new DummyKeyedStateHandle(keyGroupRange);
} | Creates a new test {@link KeyedStateHandle} for the given key-group. | createNewKeyedStateHandle | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateHandleDummyUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateHandleDummyUtil.java | Apache-2.0 |
public static OperatorStateHandle deepDummyCopy(OperatorStateHandle original) {
if (original == null) {
return null;
}
ByteStreamStateHandle stateHandleCopy =
cloneByteStreamStateHandle(
(ByteStreamStateHandle) original.getDelegateStateHandle());
Map<String, OperatorStateHandle.StateMetaInfo> offsets =
original.getStateNameToPartitionOffsets();
Map<String, OperatorStateHandle.StateMetaInfo> offsetsCopy = new HashMap<>(offsets.size());
for (Map.Entry<String, OperatorStateHandle.StateMetaInfo> entry : offsets.entrySet()) {
OperatorStateHandle.StateMetaInfo metaInfo = entry.getValue();
OperatorStateHandle.StateMetaInfo metaInfoCopy =
new OperatorStateHandle.StateMetaInfo(
metaInfo.getOffsets(), metaInfo.getDistributionMode());
offsetsCopy.put(String.valueOf(entry.getKey()), metaInfoCopy);
}
return new OperatorStreamStateHandle(offsetsCopy, stateHandleCopy);
} | Creates a deep copy of the given {@link OperatorStreamStateHandle}. | deepDummyCopy | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateHandleDummyUtil.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StateHandleDummyUtil.java | Apache-2.0 |
@Test
void testAddNonPositiveStats() {
StatsSummary mma = new StatsSummary();
mma.add(-1);
assertThat(mma.getMinimum()).isZero();
assertThat(mma.getMaximum()).isZero();
assertThat(mma.getSum()).isZero();
assertThat(mma.getCount()).isZero();
assertThat(mma.getAverage()).isZero();
mma.add(0);
assertThat(mma.getMinimum()).isZero();
assertThat(mma.getMaximum()).isZero();
assertThat(mma.getSum()).isZero();
assertThat(mma.getCount()).isOne();
assertThat(mma.getAverage()).isZero();
} | Test that non-positive numbers are not counted. | testAddNonPositiveStats | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StatsSummaryTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/StatsSummaryTest.java | Apache-2.0 |
@Test
void testSimpleAccess() throws Exception {
test(false);
} | Tests simple access via the getters. | testSimpleAccess | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/SubtaskStateStatsTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/SubtaskStateStatsTest.java | Apache-2.0 |
private CuratorFramework getZooKeeperClient() {
return zooKeeperExtension.getZooKeeperClient(
testingFatalErrorHandlerResource.getTestingFatalErrorHandler());
} | Unit tests for the {@link ZooKeeperCheckpointIDCounter}. The tests are inherited from the test
base class {@link CheckpointIDCounterTestBase}. | getZooKeeperClient | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounterITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounterITCase.java | Apache-2.0 |
@Test
void testRecover() throws Exception {
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore checkpoints = createRecoveredCompletedCheckpointStore(3);
TestCompletedCheckpoint[] expected =
new TestCompletedCheckpoint[] {
createCheckpoint(0, sharedStateRegistry),
createCheckpoint(1, sharedStateRegistry),
createCheckpoint(2, sharedStateRegistry)
};
// Add multiple checkpoints
checkpoints.addCheckpointAndSubsumeOldestOne(
expected[0], new CheckpointsCleaner(), () -> {});
checkpoints.addCheckpointAndSubsumeOldestOne(
expected[1], new CheckpointsCleaner(), () -> {});
checkpoints.addCheckpointAndSubsumeOldestOne(
expected[2], new CheckpointsCleaner(), () -> {});
verifyCheckpointRegistered(expected[0].getOperatorStates().values());
verifyCheckpointRegistered(expected[1].getOperatorStates().values());
verifyCheckpointRegistered(expected[2].getOperatorStates().values());
// All three should be in ZK
assertThat(getZooKeeperClient().getChildren().forPath(CHECKPOINT_PATH)).hasSize(3);
assertThat(checkpoints.getNumberOfRetainedCheckpoints()).isEqualTo(3);
// Recover
sharedStateRegistry.close();
sharedStateRegistry = new SharedStateRegistryImpl();
assertThat(getZooKeeperClient().getChildren().forPath(CHECKPOINT_PATH)).hasSize(3);
assertThat(checkpoints.getNumberOfRetainedCheckpoints()).isEqualTo(3);
assertThat(checkpoints.getLatestCheckpoint()).isEqualTo(expected[2]);
List<CompletedCheckpoint> expectedCheckpoints = new ArrayList<>(3);
expectedCheckpoints.add(expected[1]);
expectedCheckpoints.add(expected[2]);
expectedCheckpoints.add(createCheckpoint(3, sharedStateRegistry));
checkpoints.addCheckpointAndSubsumeOldestOne(
expectedCheckpoints.get(2), new CheckpointsCleaner(), () -> {});
List<CompletedCheckpoint> actualCheckpoints = checkpoints.getAllCheckpoints();
assertThat(actualCheckpoints).isEqualTo(expectedCheckpoints);
for (CompletedCheckpoint actualCheckpoint : actualCheckpoints) {
verifyCheckpointRegistered(actualCheckpoint.getOperatorStates().values());
}
} | Tests that older checkpoints are not cleaned up right away when recovering. Only after
another checkpoint has been completed will the old checkpoints exceeding the number of checkpoints
to retain be removed. | testRecover | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | Apache-2.0
@Test
void testSuspendKeepsCheckpoints() throws Exception {
CuratorFramework client = getZooKeeperClient();
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore store = createRecoveredCompletedCheckpointStore(1);
TestCompletedCheckpoint checkpoint = createCheckpoint(0, sharedStateRegistry);
store.addCheckpointAndSubsumeOldestOne(checkpoint, new CheckpointsCleaner(), () -> {});
assertThat(store.getNumberOfRetainedCheckpoints()).isOne();
assertThat(
client.checkExists()
.forPath(
CHECKPOINT_PATH
+ checkpointStoreUtil.checkpointIDToName(
checkpoint.getCheckpointID())))
.isNotNull();
store.shutdown(JobStatus.SUSPENDED, new CheckpointsCleaner());
assertThat(store.getNumberOfRetainedCheckpoints()).isZero();
final String checkpointPath =
CHECKPOINT_PATH
+ checkpointStoreUtil.checkpointIDToName(checkpoint.getCheckpointID());
final List<String> checkpointPathChildren = client.getChildren().forPath(checkpointPath);
assertThat(checkpointPathChildren)
.as("The checkpoint node should not be marked for deletion.")
.hasSize(1);
final String locksNodeName = Iterables.getOnlyElement(checkpointPathChildren);
final String locksNodePath =
ZooKeeperUtils.generateZookeeperPath(checkpointPath, locksNodeName);
final Stat locksStat = client.checkExists().forPath(locksNodePath);
assertThat(locksStat.getNumChildren())
.as("There shouldn't be any lock node available for the checkpoint")
.isZero();
// Recover again
sharedStateRegistry.close();
store = createRecoveredCompletedCheckpointStore(1);
CompletedCheckpoint recovered = store.getLatestCheckpoint();
assertThat(recovered).isEqualTo(checkpoint);
} | Tests that suspends keeps all checkpoints (so that they can be recovered later by the
ZooKeeper store). Furthermore, suspending a job should release all locks. | testSuspendKeepsCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | Apache-2.0 |
@Test
void testLatestCheckpointRecovery() throws Exception {
final int numCheckpoints = 3;
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
CompletedCheckpointStore checkpointStore =
createRecoveredCompletedCheckpointStore(numCheckpoints);
List<CompletedCheckpoint> checkpoints = new ArrayList<>(numCheckpoints);
checkpoints.add(createCheckpoint(9, sharedStateRegistry));
checkpoints.add(createCheckpoint(10, sharedStateRegistry));
checkpoints.add(createCheckpoint(11, sharedStateRegistry));
for (CompletedCheckpoint checkpoint : checkpoints) {
checkpointStore.addCheckpointAndSubsumeOldestOne(
checkpoint, new CheckpointsCleaner(), () -> {});
}
sharedStateRegistry.close();
final CompletedCheckpoint latestCheckpoint =
createRecoveredCompletedCheckpointStore(numCheckpoints).getLatestCheckpoint();
assertThat(latestCheckpoint).isEqualTo(checkpoints.get(checkpoints.size() - 1));
} | FLINK-6284.
<p>Tests that the latest recovered checkpoint is the one with the highest checkpoint id | testLatestCheckpointRecovery | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | Apache-2.0 |
@Test
void testConcurrentCheckpointOperations() throws Exception {
final int numberOfCheckpoints = 1;
final long waitingTimeout = 50L;
final CompletedCheckpointStore zkCheckpointStore1 =
createRecoveredCompletedCheckpointStore(numberOfCheckpoints);
SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
TestCompletedCheckpoint completedCheckpoint = createCheckpoint(1, sharedStateRegistry);
// complete the first checkpoint
zkCheckpointStore1.addCheckpointAndSubsumeOldestOne(
completedCheckpoint, new CheckpointsCleaner(), () -> {});
// recover the checkpoint by a different checkpoint store
sharedStateRegistry.close();
sharedStateRegistry = new SharedStateRegistryImpl();
final CompletedCheckpointStore zkCheckpointStore2 =
createRecoveredCompletedCheckpointStore(numberOfCheckpoints);
CompletedCheckpoint recoveredCheckpoint = zkCheckpointStore2.getLatestCheckpoint();
assertThat(recoveredCheckpoint).isInstanceOf(TestCompletedCheckpoint.class);
TestCompletedCheckpoint recoveredTestCheckpoint =
(TestCompletedCheckpoint) recoveredCheckpoint;
// Check that the recovered checkpoint is not yet discarded
assertThat(recoveredTestCheckpoint.isDiscarded()).isFalse();
// complete another checkpoint --> this should remove the first checkpoint from the store
// because the number of retained checkpoints == 1
TestCompletedCheckpoint completedCheckpoint2 = createCheckpoint(2, sharedStateRegistry);
zkCheckpointStore1.addCheckpointAndSubsumeOldestOne(
completedCheckpoint2, new CheckpointsCleaner(), () -> {});
List<CompletedCheckpoint> allCheckpoints = zkCheckpointStore1.getAllCheckpoints();
// check that we have removed the first checkpoint from zkCompletedStore1
assertThat(allCheckpoints).isEqualTo(Collections.singletonList(completedCheckpoint2));
// lets wait a little bit to see that no discard operation will be executed
assertThat(recoveredTestCheckpoint.awaitDiscard(waitingTimeout))
.as("The checkpoint should not have been discarded.")
.isFalse();
// check that we have not discarded the first completed checkpoint
assertThat(recoveredTestCheckpoint.isDiscarded()).isFalse();
TestCompletedCheckpoint completedCheckpoint3 = createCheckpoint(3, sharedStateRegistry);
// this should release the last lock on completedCheckpoint and thus discard it
zkCheckpointStore2.addCheckpointAndSubsumeOldestOne(
completedCheckpoint3, new CheckpointsCleaner(), () -> {});
// the checkpoint should be discarded eventually because there is no lock on it anymore
recoveredTestCheckpoint.awaitDiscard();
} | FLINK-6612
<p>Checks that a concurrent checkpoint completion won't discard a checkpoint which has been
recovered by a different completed checkpoint store. | testConcurrentCheckpointOperations | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | Apache-2.0 |
@Test
void testChekpointingPausesAndResumeWhenTooManyCheckpoints() throws Exception {
ManualClock clock = new ManualClock();
clock.advanceTime(1, TimeUnit.DAYS);
int maxCleaningCheckpoints = 1;
CheckpointsCleaner checkpointsCleaner = new CheckpointsCleaner();
CheckpointRequestDecider checkpointRequestDecider =
new CheckpointRequestDecider(
maxCleaningCheckpoints,
(currentTimeMillis, tillNextMillis) -> {},
clock,
1,
new AtomicInteger(0)::get,
checkpointsCleaner::getNumberOfCheckpointsToClean);
final int maxCheckpointsToRetain = 1;
ManuallyTriggeredScheduledExecutor executor = new ManuallyTriggeredScheduledExecutor();
CompletedCheckpointStore checkpointStore =
createRecoveredCompletedCheckpointStore(maxCheckpointsToRetain, executor);
int nbCheckpointsToInject = 3;
for (int i = 1; i <= nbCheckpointsToInject; i++) {
// add checkpoints to clean, the ManuallyTriggeredScheduledExecutor.execute() just
// queues the runnables but does not execute them.
TestCompletedCheckpoint completedCheckpoint =
new TestCompletedCheckpoint(
new JobID(),
i,
i,
Collections.emptyMap(),
CheckpointProperties.forCheckpoint(
CheckpointRetentionPolicy.RETAIN_ON_FAILURE));
checkpointStore.addCheckpointAndSubsumeOldestOne(
completedCheckpoint, checkpointsCleaner, () -> {});
}
int nbCheckpointsSubmittedForCleaning = nbCheckpointsToInject - maxCheckpointsToRetain;
// wait for cleaning request submission by checkpointsStore
CommonTestUtils.waitUntilCondition(
() ->
checkpointsCleaner.getNumberOfCheckpointsToClean()
== nbCheckpointsSubmittedForCleaning);
assertThat(checkpointsCleaner.getNumberOfCheckpointsToClean())
.isEqualTo(nbCheckpointsSubmittedForCleaning);
// checkpointing is on hold because checkpointsCleaner.getNumberOfCheckpointsToClean() >
// maxCleaningCheckpoints
assertThat(checkpointRequestDecider.chooseRequestToExecute(regularCheckpoint(), false, 0))
.isNotPresent();
// make the executor execute checkpoint requests.
executor.triggerAll();
// wait for a checkpoint to be cleaned
CommonTestUtils.waitUntilCondition(
() ->
checkpointsCleaner.getNumberOfCheckpointsToClean()
< nbCheckpointsSubmittedForCleaning);
// some checkpoints were cleaned
assertThat(checkpointsCleaner.getNumberOfCheckpointsToClean())
.isLessThan(nbCheckpointsSubmittedForCleaning);
// checkpointing is resumed because checkpointsCleaner.getNumberOfCheckpointsToClean() <=
// maxCleaningCheckpoints
assertThat(checkpointRequestDecider.chooseRequestToExecute(regularCheckpoint(), false, 0))
.isPresent();
checkpointStore.shutdown(JobStatus.FINISHED, checkpointsCleaner);
} | FLINK-17073 tests that there is no request triggered when there are too many checkpoints
waiting to clean and that it resumes when the number of waiting checkpoints has gone below the
threshold. | testChekpointingPausesAndResumeWhenTooManyCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreITCase.java | Apache-2.0 |
@Test
void testDiscardingSubsumedCheckpoints() throws Exception {
final SharedStateRegistry sharedStateRegistry = new SharedStateRegistryImpl();
final Configuration configuration = new Configuration();
configuration.set(
HighAvailabilityOptions.HA_ZOOKEEPER_QUORUM,
zooKeeperExtensionWrapper.getCustomExtension().getConnectString());
final CuratorFrameworkWithUnhandledErrorListener curatorFrameworkWrapper =
ZooKeeperUtils.startCuratorFramework(configuration, NoOpFatalErrorHandler.INSTANCE);
final CompletedCheckpointStore checkpointStore =
createZooKeeperCheckpointStore(curatorFrameworkWrapper.asCuratorFramework());
try {
final CompletedCheckpointStoreTest.TestCompletedCheckpoint checkpoint1 =
CompletedCheckpointStoreTest.createCheckpoint(0, sharedStateRegistry);
checkpointStore.addCheckpointAndSubsumeOldestOne(
checkpoint1, new CheckpointsCleaner(), () -> {});
assertThat(checkpointStore.getAllCheckpoints()).containsExactly(checkpoint1);
final CompletedCheckpointStoreTest.TestCompletedCheckpoint checkpoint2 =
CompletedCheckpointStoreTest.createCheckpoint(1, sharedStateRegistry);
checkpointStore.addCheckpointAndSubsumeOldestOne(
checkpoint2, new CheckpointsCleaner(), () -> {});
final List<CompletedCheckpoint> allCheckpoints = checkpointStore.getAllCheckpoints();
assertThat(allCheckpoints).containsExactly(checkpoint2);
// verify that the subsumed checkpoint is discarded
CompletedCheckpointStoreTest.verifyCheckpointDiscarded(checkpoint1);
} finally {
curatorFrameworkWrapper.close();
}
} | Tests that subsumed checkpoints are discarded. | testDiscardingSubsumedCheckpoints | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStoreTest.java | Apache-2.0 |
@Nullable
@Override
public StreamStateHandle closeAndGetHandle() throws IOException {
throw new IOException("Test closeAndGetHandle exception.");
} | The output stream that throws an exception when close or closeAndGetHandle. | closeAndGetHandle | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateCheckpointWriterTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateCheckpointWriterTest.java | Apache-2.0 |
@Test
void testConstructAndDispose() throws Exception {
final Random rnd = new Random();
final long checkpointId = rnd.nextInt(Integer.MAX_VALUE) + 1;
final int numTaskStates = 4;
final int numSubtaskStates = 16;
final int numMasterStates = 7;
Collection<OperatorState> taskStates =
CheckpointTestUtils.createOperatorStates(
rnd, null, numTaskStates, 0, 0, numSubtaskStates);
Collection<MasterState> masterStates =
CheckpointTestUtils.createRandomMasterStates(rnd, numMasterStates);
CheckpointMetadata checkpoint =
new CheckpointMetadata(checkpointId, taskStates, masterStates);
assertThat(checkpoint.getCheckpointId()).isEqualTo(checkpointId);
assertThat(checkpoint.getOperatorStates()).isEqualTo(taskStates);
assertThat(checkpoint.getMasterStates()).isEqualTo(masterStates);
assertThat(checkpoint.getOperatorStates()).isNotEmpty();
assertThat(checkpoint.getMasterStates()).isNotEmpty();
checkpoint.dispose();
assertThat(checkpoint.getOperatorStates()).isEmpty();
assertThat(checkpoint.getMasterStates()).isEmpty();
} | Simple tests for the {@link CheckpointMetadata} data holder class. | testConstructAndDispose | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointMetadataTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointMetadataTest.java | Apache-2.0 |
public static Collection<MasterState> createRandomMasterStates(Random random, int num) {
final ArrayList<MasterState> states = new ArrayList<>(num);
for (int i = 0; i < num; i++) {
int version = random.nextInt(10);
String name = StringUtils.getRandomString(random, 5, 500);
byte[] bytes = new byte[random.nextInt(5000) + 1];
random.nextBytes(bytes);
states.add(new MasterState(name, bytes, version));
}
return states;
} | Creates a bunch of random master states. | createRandomMasterStates | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointTestUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointTestUtils.java | Apache-2.0 |
public static void assertMasterStateEquality(MasterState a, MasterState b) {
assertThat(b.version()).isEqualTo(a.version());
assertThat(b.name()).isEqualTo(a.name());
assertThat(b.bytes()).isEqualTo(a.bytes());
} | Asserts that two MasterStates are equal.
<p>The MasterState avoids overriding {@code equals()} on purpose, because equality is not
well defined in the raw contents. | assertMasterStateEquality | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointTestUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/metadata/CheckpointTestUtils.java | Apache-2.0 |
@VisibleForTesting
public static SlotProfile noRequirements() {
return noLocality(ResourceProfile.UNKNOWN);
} | Returns a slot profile that has no requirements. | noRequirements | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | Apache-2.0 |
@VisibleForTesting
public static SlotProfile noLocality(ResourceProfile resourceProfile) {
return preferredLocality(resourceProfile, Collections.emptyList());
} | Returns a slot profile for the given resource profile, without any locality requirements. | noLocality | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | Apache-2.0 |
@VisibleForTesting
public static SlotProfile preferredLocality(
final ResourceProfile resourceProfile,
final Collection<TaskManagerLocation> preferredLocations) {
return SlotProfile.priorAllocation(
resourceProfile,
resourceProfile,
preferredLocations,
Collections.emptyList(),
Collections.emptySet());
} | Returns a slot profile for the given resource profile and the preferred locations.
@param resourceProfile specifying the slot requirements
@param preferredLocations specifying the preferred locations
@return Slot profile with the given resource profile and preferred locations | preferredLocality | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/types/SlotProfileTestingUtils.java | Apache-2.0 |
@Test
void testSerializationOfUnknownShuffleDescriptor() throws IOException {
ShuffleDescriptor shuffleDescriptor = new UnknownShuffleDescriptor(resultPartitionID);
ShuffleDescriptor shuffleDescriptorCopy =
CommonTestUtils.createCopySerializable(shuffleDescriptor);
assertThat(shuffleDescriptorCopy).isInstanceOf(UnknownShuffleDescriptor.class);
assertThat(resultPartitionID).isEqualTo(shuffleDescriptorCopy.getResultPartitionID());
assertThat(shuffleDescriptorCopy.isUnknown()).isTrue();
} | Tests simple de/serialization with {@link UnknownShuffleDescriptor}. | testSerializationOfUnknownShuffleDescriptor | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/deployment/ResultPartitionDeploymentDescriptorTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/deployment/ResultPartitionDeploymentDescriptorTest.java | Apache-2.0 |
public static ShuffleDescriptor[] deserializeShuffleDescriptors(
List<MaybeOffloaded<ShuffleDescriptorGroup>> maybeOffloaded,
JobID jobId,
TestingBlobWriter blobWriter)
throws IOException, ClassNotFoundException {
Map<Integer, ShuffleDescriptor> shuffleDescriptorsMap = new HashMap<>();
int maxIndex = 0;
for (MaybeOffloaded<ShuffleDescriptorGroup> sd : maybeOffloaded) {
ShuffleDescriptorGroup shuffleDescriptorGroup;
if (sd instanceof NonOffloaded) {
shuffleDescriptorGroup =
((NonOffloaded<ShuffleDescriptorGroup>) sd)
.serializedValue.deserializeValue(
ClassLoader.getSystemClassLoader());
} else {
final CompressedSerializedValue<ShuffleDescriptorGroup> compressedSerializedValue =
CompressedSerializedValue.fromBytes(
blobWriter.getBlob(
jobId,
((Offloaded<ShuffleDescriptorGroup>) sd)
.serializedValueKey));
shuffleDescriptorGroup =
compressedSerializedValue.deserializeValue(
ClassLoader.getSystemClassLoader());
}
for (ShuffleDescriptorAndIndex shuffleDescriptorAndIndex :
shuffleDescriptorGroup.getShuffleDescriptors()) {
int index = shuffleDescriptorAndIndex.getIndex();
maxIndex = Math.max(maxIndex, shuffleDescriptorAndIndex.getIndex());
shuffleDescriptorsMap.put(index, shuffleDescriptorAndIndex.getShuffleDescriptor());
}
}
ShuffleDescriptor[] shuffleDescriptors = new ShuffleDescriptor[maxIndex + 1];
shuffleDescriptorsMap.forEach((key, value) -> shuffleDescriptors[key] = value);
return shuffleDescriptors;
} | A collection of utility methods for testing the TaskDeploymentDescriptor and its related classes. | deserializeShuffleDescriptors | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/deployment/TaskDeploymentDescriptorTestUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/deployment/TaskDeploymentDescriptorTestUtils.java | Apache-2.0 |
@Test
public void testJobSubmissionUnderSameJobId() throws Exception {
final TestingJobManagerRunnerFactory jobManagerRunnerFactory =
startDispatcherAndSubmitJob(1);
final TestingJobManagerRunner testingJobManagerRunner =
jobManagerRunnerFactory.takeCreatedJobManagerRunner();
suspendJob(testingJobManagerRunner);
// wait until termination JobManagerRunner closeAsync has been called.
// this is necessary to avoid race conditions with completion of the 1st job and the
// submission of the 2nd job (DuplicateJobSubmissionException).
testingJobManagerRunner.getCloseAsyncCalledLatch().await();
final CompletableFuture<Acknowledge> submissionFuture =
dispatcherGateway.submitJob(jobGraph, timeout);
try {
submissionFuture.get(10L, TimeUnit.MILLISECONDS);
fail(
"The job submission future should not complete until the previous JobManager "
+ "termination future has been completed.");
} catch (TimeoutException ignored) {
// expected
} finally {
testingJobManagerRunner.completeTerminationFuture();
}
assertThat(submissionFuture.get(), equalTo(Acknowledge.get()));
} | Tests that the previous JobManager needs to be completely terminated before a new job with
the same {@link JobID} is started. | testJobSubmissionUnderSameJobId | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/DispatcherResourceCleanupTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/DispatcherResourceCleanupTest.java | Apache-2.0 |
static Collection<ExecutionGraphInfo> generateTerminalExecutionGraphInfos(int number) {
final Collection<ExecutionGraphInfo> executionGraphInfos = new ArrayList<>(number);
for (int i = 0; i < number; i++) {
final JobStatus state =
GLOBALLY_TERMINAL_JOB_STATUS.get(
ThreadLocalRandom.current()
.nextInt(GLOBALLY_TERMINAL_JOB_STATUS.size()));
executionGraphInfos.add(
new ExecutionGraphInfo(
new ArchivedExecutionGraphBuilder().setState(state).build()));
}
return executionGraphInfos;
} | Generate a specified number of ExecutionGraphInfos.
@param number the number of ExecutionGraphInfos to generate
@return the result ExecutionGraphInfo collection | generateTerminalExecutionGraphInfos | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/ExecutionGraphInfoStoreTestUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/ExecutionGraphInfoStoreTestUtils.java | Apache-2.0 |
@Override
public void invoke() throws Exception {
LATCH.trigger();
Thread.sleep(Long.MAX_VALUE);
} | Latch used to signal an initial invocation. | invoke | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/ExecutionGraphInfoStoreTestUtils.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/ExecutionGraphInfoStoreTestUtils.java | Apache-2.0 |
@Test
public void testPut() throws IOException {
assertPutJobGraphWithStatus(JobStatus.FINISHED);
} | Tests that we can put {@link ExecutionGraphInfo} into the {@link FileExecutionGraphInfoStore}
and that the graph is persisted. | testPut | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | Apache-2.0 |
@Test
public void testCloseCleansUp() throws IOException {
final File rootDir = temporaryFolder.newFolder();
assertThat(rootDir.listFiles().length, Matchers.equalTo(0));
try (final FileExecutionGraphInfoStore executionGraphInfoStore =
createDefaultExecutionGraphInfoStore(
rootDir,
new ScheduledExecutorServiceAdapter(EXECUTOR_RESOURCE.getExecutor()))) {
assertThat(rootDir.listFiles().length, Matchers.equalTo(1));
final File storageDirectory = executionGraphInfoStore.getStorageDir();
assertThat(storageDirectory.listFiles().length, Matchers.equalTo(0));
executionGraphInfoStore.put(
new ExecutionGraphInfo(
new ArchivedExecutionGraphBuilder()
.setState(JobStatus.FINISHED)
.build()));
assertThat(storageDirectory.listFiles().length, Matchers.equalTo(1));
}
assertThat(rootDir.listFiles().length, Matchers.equalTo(0));
} | Tests that all persisted files are cleaned up after closing the store. | testCloseCleansUp | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | Apache-2.0 |
@Test
public void testCacheLoading() throws IOException {
final File rootDir = temporaryFolder.newFolder();
try (final FileExecutionGraphInfoStore executionGraphInfoStore =
new FileExecutionGraphInfoStore(
rootDir,
Duration.ofHours(1L),
Integer.MAX_VALUE,
100L << 10,
new ScheduledExecutorServiceAdapter(EXECUTOR_RESOURCE.getExecutor()),
Ticker.systemTicker())) {
final LoadingCache<JobID, ExecutionGraphInfo> executionGraphInfoCache =
executionGraphInfoStore.getExecutionGraphInfoCache();
Collection<ExecutionGraphInfo> executionGraphInfos = new ArrayList<>(64);
boolean continueInserting = true;
// insert execution graphs until the first one gets evicted
while (continueInserting) {
// has roughly a size of 1.4 KB
final ExecutionGraphInfo executionGraphInfo =
new ExecutionGraphInfo(
new ArchivedExecutionGraphBuilder()
.setState(JobStatus.FINISHED)
.build());
executionGraphInfoStore.put(executionGraphInfo);
executionGraphInfos.add(executionGraphInfo);
continueInserting = executionGraphInfoCache.size() == executionGraphInfos.size();
}
final File storageDirectory = executionGraphInfoStore.getStorageDir();
assertThat(
storageDirectory.listFiles().length,
Matchers.equalTo(executionGraphInfos.size()));
for (ExecutionGraphInfo executionGraphInfo : executionGraphInfos) {
assertThat(
executionGraphInfoStore.get(executionGraphInfo.getJobId()),
matchesPartiallyWith(executionGraphInfo));
}
}
} | Tests that evicted {@link ExecutionGraphInfo} are loaded from disk again. | testCacheLoading | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | Apache-2.0 |
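The behavior exercised by this test — entries evicted from the in-memory cache being transparently reloaded from persistent storage — boils down to a two-tier lookup. A hedged, simplified sketch in which plain maps stand in for the Guava cache and the on-disk files (not the actual Flink implementation):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative two-tier store: memory misses fall back to the persistent tier and re-cache. */
public class TwoTierStore {
    private final Map<String, String> memory = new HashMap<>();
    private final Map<String, String> disk = new HashMap<>(); // stand-in for files on disk

    public void put(String key, String value) {
        memory.put(key, value);
        disk.put(key, value); // always persisted, so evicting from memory loses nothing
    }

    /** Simulates cache pressure removing an entry from the memory tier only. */
    public void evict(String key) {
        memory.remove(key);
    }

    public String get(String key) {
        // on a memory miss, reload from the persistent tier and cache it again
        return memory.computeIfAbsent(key, disk::get);
    }
}
```

After `evict`, the next `get` is served from the persistent tier, which is what the assertions against `matchesPartiallyWith` verify above.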
@Test
public void testMaximumCapacity() throws IOException {
final File rootDir = temporaryFolder.newFolder();
final int maxCapacity = 10;
final int numberExecutionGraphs = 10;
final Collection<ExecutionGraphInfo> oldExecutionGraphInfos =
generateTerminalExecutionGraphInfos(numberExecutionGraphs);
final Collection<ExecutionGraphInfo> newExecutionGraphInfos =
generateTerminalExecutionGraphInfos(numberExecutionGraphs);
final Collection<JobDetails> jobDetails = generateJobDetails(newExecutionGraphInfos);
try (final FileExecutionGraphInfoStore executionGraphInfoStore =
new FileExecutionGraphInfoStore(
rootDir,
Duration.ofHours(1L),
maxCapacity,
10000L,
new ScheduledExecutorServiceAdapter(EXECUTOR_RESOURCE.getExecutor()),
Ticker.systemTicker())) {
for (ExecutionGraphInfo executionGraphInfo : oldExecutionGraphInfos) {
executionGraphInfoStore.put(executionGraphInfo);
// no more than the configured maximum capacity
assertTrue(executionGraphInfoStore.size() <= maxCapacity);
}
for (ExecutionGraphInfo executionGraphInfo : newExecutionGraphInfos) {
executionGraphInfoStore.put(executionGraphInfo);
// equal to the configured maximum capacity
assertEquals(maxCapacity, executionGraphInfoStore.size());
}
// the older execution graphs are purged
assertThat(
executionGraphInfoStore.getAvailableJobDetails(),
Matchers.containsInAnyOrder(jobDetails.toArray()));
}
} | Tests that the size of {@link FileExecutionGraphInfoStore} is no more than the configured max
capacity and the old execution graphs will be purged if the total added number exceeds the
max capacity. | testMaximumCapacity | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/FileExecutionGraphInfoStoreTest.java | Apache-2.0 |
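The capacity bound verified here can be illustrated with a generic size-bounded store. A minimal sketch using `LinkedHashMap#removeEldestEntry` — a simplification, since `FileExecutionGraphInfoStore` uses a Guava cache; the class below is purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal capacity-bounded store: inserting beyond maxCapacity purges the oldest entry. */
public class BoundedStore<K, V> {
    private final Map<K, V> entries;

    public BoundedStore(int maxCapacity) {
        // insertion order (accessOrder = false) makes the eldest inserted entry the eviction victim
        this.entries = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxCapacity;
            }
        };
    }

    public void put(K key, V value) { entries.put(key, value); }
    public int size() { return entries.size(); }
    public boolean contains(K key) { return entries.containsKey(key); }
}
```

As in the test, the size never exceeds the configured capacity while older entries are silently purged.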
@Test
public void testCloseCleansUp() throws IOException {
try (final MemoryExecutionGraphInfoStore executionGraphInfoStore =
createMemoryExecutionGraphInfoStore()) {
assertThat(executionGraphInfoStore.size(), Matchers.equalTo(0));
executionGraphInfoStore.put(
new ExecutionGraphInfo(
new ArchivedExecutionGraphBuilder()
.setState(JobStatus.FINISHED)
.build()));
assertThat(executionGraphInfoStore.size(), Matchers.equalTo(1));
executionGraphInfoStore.close();
assertThat(executionGraphInfoStore.size(), Matchers.equalTo(0));
}
} | Tests that all job graphs are cleaned up after closing the store. | testCloseCleansUp | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/MemoryExecutionGraphInfoStoreTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/MemoryExecutionGraphInfoStoreTest.java | Apache-2.0 |
@Override
public TestingJobManagerRunner create(
JobResult jobResult,
CheckpointRecoveryFactory checkpointRecoveryFactory,
Configuration configuration,
Executor cleanupExecutor) {
try {
return offerTestingJobManagerRunner(jobResult.getJobId());
} catch (Exception e) {
throw new RuntimeException(e);
}
} | {@code TestingCleanupRunnerFactory} implements {@link CleanupRunnerFactory}, providing a factory
method usually used to create {@link CheckpointResourcesCleanupRunner} instances. | create | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/cleanup/TestingCleanupRunnerFactory.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/cleanup/TestingCleanupRunnerFactory.java | Apache-2.0 |
public static RetryStrategy createWithNumberOfRetries(int retryCount) {
return new FixedRetryStrategy(retryCount, TESTING_DEFAULT_RETRY_DELAY);
} | {@code TestingRetryStrategies} collects common {@link RetryStrategy} variants. | createWithNumberOfRetries | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/cleanup/TestingRetryStrategies.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/dispatcher/cleanup/TestingRetryStrategies.java | Apache-2.0 |
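The fixed-retry idea behind such a factory can be sketched without the Flink `RetryStrategy` interface. A hypothetical helper that retries a `Callable` a fixed number of times with a constant delay (names and shape are illustrative only):

```java
import java.time.Duration;
import java.util.concurrent.Callable;

/** Hypothetical fixed-delay retry helper; not the Flink RetryStrategy contract. */
public class FixedRetry {
    private final int maxRetries;
    private final Duration delay;

    public FixedRetry(int maxRetries, Duration delay) {
        this.maxRetries = maxRetries;
        this.delay = delay;
    }

    /** Runs the action, retrying up to maxRetries times, sleeping a constant delay in between. */
    public <T> T run(Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(delay.toMillis());
                }
            }
        }
        throw last; // all attempts exhausted; rethrow the final failure
    }
}
```

A strategy like this makes `retryCount` the number of additional attempts after the first, which matches the usual reading of a fixed retry count.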
@Test
public void testIntegerTaskEvent() {
try {
final IntegerTaskEvent orig = new IntegerTaskEvent(11);
final IntegerTaskEvent copy = InstantiationUtil.createCopyWritable(orig);
assertEquals(orig.getInteger(), copy.getInteger());
assertEquals(orig.hashCode(), copy.hashCode());
assertTrue(orig.equals(copy));
} catch (IOException ioe) {
fail(ioe.getMessage());
}
} | This test checks the serialization/deserialization of {@link IntegerTaskEvent} objects. | testIntegerTaskEvent | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/event/task/TaskEventTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/event/task/TaskEventTest.java | Apache-2.0 |
@Test
public void testStringTaskEvent() {
try {
final StringTaskEvent orig = new StringTaskEvent("Test");
final StringTaskEvent copy = InstantiationUtil.createCopyWritable(orig);
assertEquals(orig.getString(), copy.getString());
assertEquals(orig.hashCode(), copy.hashCode());
assertTrue(orig.equals(copy));
} catch (IOException ioe) {
fail(ioe.getMessage());
}
} | This test checks the serialization/deserialization of {@link StringTaskEvent} objects. | testStringTaskEvent | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/event/task/TaskEventTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/event/task/TaskEventTest.java | Apache-2.0 |
@Test
public void testLibraryCacheManagerDifferentJobsCleanup() throws Exception {
JobID jobId1 = new JobID();
JobID jobId2 = new JobID();
List<PermanentBlobKey> keys1 = new ArrayList<>();
List<PermanentBlobKey> keys2 = new ArrayList<>();
BlobServer server = null;
PermanentBlobCache cache = null;
BlobLibraryCacheManager libCache = null;
final byte[] buf = new byte[128];
try {
Configuration config = new Configuration();
config.set(BlobServerOptions.CLEANUP_INTERVAL, 1L);
server = new BlobServer(config, temporaryFolder.newFolder(), new VoidBlobStore());
server.start();
InetSocketAddress serverAddress = new InetSocketAddress("localhost", server.getPort());
cache =
new PermanentBlobCache(
config,
temporaryFolder.newFolder(),
new VoidBlobStore(),
serverAddress);
keys1.add(server.putPermanent(jobId1, buf));
buf[0] += 1;
keys1.add(server.putPermanent(jobId1, buf));
keys2.add(server.putPermanent(jobId2, buf));
libCache = createBlobLibraryCacheManager(cache);
cache.registerJob(jobId1);
cache.registerJob(jobId2);
assertEquals(0, libCache.getNumberOfManagedJobs());
assertEquals(0, libCache.getNumberOfReferenceHolders(jobId1));
checkFileCountForJob(2, jobId1, server);
checkFileCountForJob(0, jobId1, cache);
checkFileCountForJob(1, jobId2, server);
checkFileCountForJob(0, jobId2, cache);
final LibraryCacheManager.ClassLoaderLease classLoaderLeaseJob1 =
libCache.registerClassLoaderLease(jobId1);
final UserCodeClassLoader classLoader1 =
classLoaderLeaseJob1.getOrResolveClassLoader(keys1, Collections.emptyList());
assertEquals(1, libCache.getNumberOfManagedJobs());
assertEquals(1, libCache.getNumberOfReferenceHolders(jobId1));
assertEquals(0, libCache.getNumberOfReferenceHolders(jobId2));
assertEquals(2, checkFilesExist(jobId1, keys1, cache, false));
checkFileCountForJob(2, jobId1, server);
checkFileCountForJob(2, jobId1, cache);
assertEquals(0, checkFilesExist(jobId2, keys2, cache, false));
checkFileCountForJob(1, jobId2, server);
checkFileCountForJob(0, jobId2, cache);
final LibraryCacheManager.ClassLoaderLease classLoaderLeaseJob2 =
libCache.registerClassLoaderLease(jobId2);
final UserCodeClassLoader classLoader2 =
classLoaderLeaseJob2.getOrResolveClassLoader(keys2, Collections.emptyList());
assertThat(classLoader1, not(sameInstance(classLoader2)));
try {
classLoaderLeaseJob2.getOrResolveClassLoader(keys1, Collections.<URL>emptyList());
fail("Should fail with an IllegalStateException");
} catch (IllegalStateException e) {
// that's what we want
}
try {
classLoaderLeaseJob2.getOrResolveClassLoader(
keys2, Collections.singletonList(new URL("file:///tmp/does-not-exist")));
fail("Should fail with an IllegalStateException");
} catch (IllegalStateException e) {
// that's what we want
}
assertEquals(2, libCache.getNumberOfManagedJobs());
assertEquals(1, libCache.getNumberOfReferenceHolders(jobId1));
assertEquals(1, libCache.getNumberOfReferenceHolders(jobId2));
assertEquals(2, checkFilesExist(jobId1, keys1, cache, false));
checkFileCountForJob(2, jobId1, server);
checkFileCountForJob(2, jobId1, cache);
assertEquals(1, checkFilesExist(jobId2, keys2, cache, false));
checkFileCountForJob(1, jobId2, server);
checkFileCountForJob(1, jobId2, cache);
classLoaderLeaseJob1.release();
assertEquals(1, libCache.getNumberOfManagedJobs());
assertEquals(0, libCache.getNumberOfReferenceHolders(jobId1));
assertEquals(1, libCache.getNumberOfReferenceHolders(jobId2));
assertEquals(2, checkFilesExist(jobId1, keys1, cache, false));
checkFileCountForJob(2, jobId1, server);
checkFileCountForJob(2, jobId1, cache);
assertEquals(1, checkFilesExist(jobId2, keys2, cache, false));
checkFileCountForJob(1, jobId2, server);
checkFileCountForJob(1, jobId2, cache);
classLoaderLeaseJob2.release();
assertEquals(0, libCache.getNumberOfManagedJobs());
assertEquals(0, libCache.getNumberOfReferenceHolders(jobId1));
assertEquals(0, libCache.getNumberOfReferenceHolders(jobId2));
assertEquals(2, checkFilesExist(jobId1, keys1, cache, false));
checkFileCountForJob(2, jobId1, server);
checkFileCountForJob(2, jobId1, cache);
assertEquals(1, checkFilesExist(jobId2, keys2, cache, false));
checkFileCountForJob(1, jobId2, server);
checkFileCountForJob(1, jobId2, cache);
// only PermanentBlobCache#releaseJob() calls clean up files (tested in
// BlobCacheCleanupTest etc.)
} finally {
if (libCache != null) {
libCache.shutdown();
}
// should have been closed by the libraryCacheManager, but just in case
if (cache != null) {
cache.close();
}
if (server != null) {
server.close();
}
}
} | Tests that the {@link BlobLibraryCacheManager} cleans up after the class loader leases for
different jobs are closed. | testLibraryCacheManagerDifferentJobsCleanup | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheManagerTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheManagerTest.java | Apache-2.0 |
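The lease bookkeeping asserted throughout this test — per-job reference counts that drop the job once the last holder releases — can be reduced to a small registry. A hypothetical sketch, not the Flink `LibraryCacheManager` API:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative reference-counting registry: a per-job resource stays managed while
 * at least one lease is held and becomes unmanaged when the last lease is released.
 */
public class LeaseRegistry {
    private final Map<String, Integer> referenceHolders = new HashMap<>();

    /** Registers one more lease for the given job, creating the entry on first use. */
    public void register(String jobId) {
        referenceHolders.merge(jobId, 1, Integer::sum);
    }

    /** Releases one lease; the job is dropped entirely when no holders remain. */
    public void release(String jobId) {
        referenceHolders.computeIfPresent(jobId, (id, count) -> count > 1 ? count - 1 : null);
    }

    public int numberOfReferenceHolders(String jobId) {
        return referenceHolders.getOrDefault(jobId, 0);
    }

    public int numberOfManagedJobs() {
        return referenceHolders.size();
    }
}
```

Returning `null` from `computeIfPresent` removes the mapping, which is what makes "managed jobs" and "reference holders" drop to zero in the same step, mirroring the assertion pairs in the test above.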
@Test
public void testRecoveryRegisterAndDownload() throws Exception {
Random rand = new Random();
BlobServer[] server = new BlobServer[2];
InetSocketAddress[] serverAddress = new InetSocketAddress[2];
BlobLibraryCacheManager[] libServer = new BlobLibraryCacheManager[2];
PermanentBlobCache cache = null;
BlobStoreService blobStoreService = null;
Configuration config = new Configuration();
config.set(HighAvailabilityOptions.HA_MODE, "ZOOKEEPER");
config.set(
HighAvailabilityOptions.HA_STORAGE_PATH,
temporaryFolder.newFolder().getAbsolutePath());
config.set(BlobServerOptions.CLEANUP_INTERVAL, 3_600L);
final ExecutorService executorService = Executors.newSingleThreadExecutor();
try {
blobStoreService = BlobUtils.createBlobStoreFromConfig(config);
final BlobLibraryCacheManager.ClassLoaderFactory classLoaderFactory =
BlobLibraryCacheManager.defaultClassLoaderFactory(
FlinkUserCodeClassLoaders.ResolveOrder.CHILD_FIRST,
new String[0],
null,
true);
for (int i = 0; i < server.length; i++) {
server[i] = new BlobServer(config, temporaryFolder.newFolder(), blobStoreService);
server[i].start();
serverAddress[i] = new InetSocketAddress("localhost", server[i].getPort());
libServer[i] =
new BlobLibraryCacheManager(
server[i], classLoaderFactory, wrapsSystemClassLoader);
}
// Random data
byte[] expected = new byte[1024];
rand.nextBytes(expected);
ArrayList<PermanentBlobKey> keys = new ArrayList<>(2);
JobID jobId = new JobID();
// Upload some data (libraries)
keys.add(server[0].putPermanent(jobId, expected)); // Request 1
byte[] expected2 = Arrays.copyOfRange(expected, 32, 288);
keys.add(server[0].putPermanent(jobId, expected2)); // Request 2
// The cache
cache =
new PermanentBlobCache(
config,
temporaryFolder.newFolder(),
blobStoreService,
serverAddress[0]);
// Register uploaded libraries
final LibraryCacheManager.ClassLoaderLease classLoaderLease =
libServer[0].registerClassLoaderLease(jobId);
classLoaderLease.getOrResolveClassLoader(keys, Collections.emptyList());
// Verify key 1
File f = cache.getFile(jobId, keys.get(0));
assertEquals(expected.length, f.length());
try (FileInputStream fis = new FileInputStream(f)) {
for (int i = 0; i < expected.length && fis.available() > 0; i++) {
assertEquals(expected[i], (byte) fis.read());
}
assertEquals(0, fis.available());
}
// Shutdown cache and start with other server
cache.close();
cache =
new PermanentBlobCache(
config,
temporaryFolder.newFolder(),
blobStoreService,
serverAddress[1]);
// Verify key 1
f = cache.getFile(jobId, keys.get(0));
assertEquals(expected.length, f.length());
try (FileInputStream fis = new FileInputStream(f)) {
for (int i = 0; i < expected.length && fis.available() > 0; i++) {
assertEquals(expected[i], (byte) fis.read());
}
assertEquals(0, fis.available());
}
// Verify key 2
f = cache.getFile(jobId, keys.get(1));
assertEquals(expected2.length, f.length());
try (FileInputStream fis = new FileInputStream(f)) {
for (int i = 0; i < 256 && fis.available() > 0; i++) {
assertEquals(expected2[i], (byte) fis.read());
}
assertEquals(0, fis.available());
}
// Remove blobs again
server[1].globalCleanupAsync(jobId, executorService).join();
// Verify everything is clean below recoveryDir/<cluster_id>
final String clusterId = config.get(HighAvailabilityOptions.HA_CLUSTER_ID);
String haBlobStorePath = config.get(HighAvailabilityOptions.HA_STORAGE_PATH);
File haBlobStoreDir = new File(haBlobStorePath, clusterId);
File[] recoveryFiles = haBlobStoreDir.listFiles();
assertNotNull("HA storage directory does not exist", recoveryFiles);
assertEquals(
"Unclean state backend: " + Arrays.toString(recoveryFiles),
0,
recoveryFiles.length);
} finally {
assertThat(executorService.shutdownNow(), IsEmptyCollection.empty());
for (BlobLibraryCacheManager s : libServer) {
if (s != null) {
s.shutdown();
}
}
for (BlobServer s : server) {
if (s != null) {
s.close();
}
}
if (cache != null) {
cache.close();
}
if (blobStoreService != null) {
blobStoreService.cleanupAllData();
blobStoreService.close();
}
}
} | Tests that with {@link HighAvailabilityMode#ZOOKEEPER} distributed JARs are recoverable from
any participating BlobLibraryCacheManager. | testRecoveryRegisterAndDownload | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheRecoveryITCase.java | Apache-2.0 |
protected void checkJobOffloaded(DefaultExecutionGraph eg) throws Exception {
assertThat(eg.getTaskDeploymentDescriptorFactory().getSerializedJobInformation())
.isInstanceOf(TaskDeploymentDescriptor.NonOffloaded.class);
} | Checks that the job information for the given ID has been offloaded successfully (if
offloading is used).
@param eg the execution graph that was created | checkJobOffloaded | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphDeploymentTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphDeploymentTest.java | Apache-2.0 |
@Test
void testAccumulatorsAndMetricsForwarding() throws Exception {
final JobVertexID jid1 = new JobVertexID();
final JobVertexID jid2 = new JobVertexID();
JobVertex v1 = new JobVertex("v1", jid1);
JobVertex v2 = new JobVertex("v2", jid2);
SchedulerBase scheduler = setupScheduler(v1, 1, v2, 1);
ExecutionGraph graph = scheduler.getExecutionGraph();
Map<ExecutionAttemptID, Execution> executions = graph.getRegisteredExecutions();
// verify behavior for canceled executions
Execution execution1 = executions.values().iterator().next();
IOMetrics ioMetrics = new IOMetrics(0, 0, 0, 0, 0, 0, 0);
Map<String, Accumulator<?, ?>> accumulators = new HashMap<>();
accumulators.put("acc", new IntCounter(4));
AccumulatorSnapshot accumulatorSnapshot =
new AccumulatorSnapshot(graph.getJobID(), execution1.getAttemptId(), accumulators);
TaskExecutionState state =
new TaskExecutionState(
execution1.getAttemptId(),
ExecutionState.CANCELED,
null,
accumulatorSnapshot,
ioMetrics);
scheduler.updateTaskExecutionState(state);
assertIOMetricsEqual(execution1.getIOMetrics(), ioMetrics);
assertThat(execution1.getUserAccumulators()).isNotNull();
assertThat(execution1.getUserAccumulators().get("acc").getLocalValue()).isEqualTo(4);
// verify behavior for failed executions
Execution execution2 = executions.values().iterator().next();
IOMetrics ioMetrics2 = new IOMetrics(0, 0, 0, 0, 0, 0, 0);
Map<String, Accumulator<?, ?>> accumulators2 = new HashMap<>();
accumulators2.put("acc", new IntCounter(8));
AccumulatorSnapshot accumulatorSnapshot2 =
new AccumulatorSnapshot(graph.getJobID(), execution2.getAttemptId(), accumulators2);
TaskExecutionState state2 =
new TaskExecutionState(
execution2.getAttemptId(),
ExecutionState.FAILED,
null,
accumulatorSnapshot2,
ioMetrics2);
scheduler.updateTaskExecutionState(state2);
assertIOMetricsEqual(execution2.getIOMetrics(), ioMetrics2);
assertThat(execution2.getUserAccumulators()).isNotNull();
assertThat(execution2.getUserAccumulators().get("acc").getLocalValue()).isEqualTo(8);
} | Verifies that {@link SchedulerNG#updateTaskExecutionState(TaskExecutionState)} updates the
accumulators and metrics for an execution that failed or was canceled. | testAccumulatorsAndMetricsForwarding | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphDeploymentTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphDeploymentTest.java | Apache-2.0 |
@Test
void testExecutionGraphArbitraryDopConstructionTest() throws Exception {
final int initialParallelism = 5;
final int maxParallelism = 10;
final JobVertex[] jobVertices =
createVerticesForSimpleBipartiteJobGraph(initialParallelism, maxParallelism);
final JobGraph jobGraph = JobGraphTestUtils.streamingJobGraph(jobVertices);
ExecutionGraph eg =
TestingDefaultExecutionGraphBuilder.newBuilder()
.setJobGraph(jobGraph)
.build(EXECUTOR_RESOURCE.getExecutor());
for (JobVertex jv : jobVertices) {
assertThat(jv.getParallelism()).isEqualTo(initialParallelism);
}
verifyGeneratedExecutionGraphOfSimpleBitartiteJobGraph(eg, jobVertices);
// --- verify scaling down works correctly ---
final int scaleDownParallelism = 1;
for (JobVertex jv : jobVertices) {
jv.setParallelism(scaleDownParallelism);
}
eg =
TestingDefaultExecutionGraphBuilder.newBuilder()
.setJobGraph(jobGraph)
.build(EXECUTOR_RESOURCE.getExecutor());
for (JobVertex jv : jobVertices) {
assertThat(jv.getParallelism()).isOne();
}
verifyGeneratedExecutionGraphOfSimpleBitartiteJobGraph(eg, jobVertices);
// --- verify scaling up works correctly ---
final int scaleUpParallelism = 10;
for (JobVertex jv : jobVertices) {
jv.setParallelism(scaleUpParallelism);
}
eg =
TestingDefaultExecutionGraphBuilder.newBuilder()
.setJobGraph(jobGraph)
.build(EXECUTOR_RESOURCE.getExecutor());
for (JobVertex jv : jobVertices) {
assertThat(jv.getParallelism()).isEqualTo(scaleUpParallelism);
}
verifyGeneratedExecutionGraphOfSimpleBitartiteJobGraph(eg, jobVertices);
} | This class contains tests verifying that, when a {@link JobGraph} is rescaled, the constructed {@link
ExecutionGraph}s are correct. | testExecutionGraphArbitraryDopConstructionTest | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphRescalingTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/DefaultExecutionGraphRescalingTest.java | Apache-2.0 |
@Test
void testSerializationWithExceptionOutsideClassLoader() throws Exception {
final ErrorInfo error =
new ErrorInfo(new ExceptionWithCustomClassLoader(), System.currentTimeMillis());
final ErrorInfo copy = CommonTestUtils.createCopySerializable(error);
assertThat(copy.getTimestamp()).isEqualTo(error.getTimestamp());
assertThat(copy.getExceptionAsString()).isEqualTo(error.getExceptionAsString());
assertThat(copy.getException().getMessage()).isEqualTo(error.getException().getMessage());
} | Simple test for the {@link ErrorInfo}. | testSerializationWithExceptionOutsideClassLoader | java | apache/flink | flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ErrorInfoTest.java | https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ErrorInfoTest.java | Apache-2.0 |