| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
private static FileSystemFactory loadHadoopFsFactory() {
    final ClassLoader cl = FileSystem.class.getClassLoader();
    // first, see if the Flink runtime classes are available
    final Class<? extends FileSystemFactory> factoryClass;
    try {
        factoryClass =
                Class.forName("org.apache.flink.runtime.fs.hdfs.HadoopFsFactory", false, cl)
                        .asSubclass(FileSystemFactory.class);
    } catch (ClassNotFoundException e) {
        LOG.info(
                "No Flink runtime dependency present. "
                        + "The extended set of supported File Systems via Hadoop is not available.");
        return new UnsupportedSchemeFactory(
                "Flink runtime classes missing in classpath/dependencies.");
    } catch (Exception | LinkageError e) {
        LOG.warn("Flink's Hadoop file system factory could not be loaded", e);
        return new UnsupportedSchemeFactory(
                "Flink's Hadoop file system factory could not be loaded", e);
    }
    // check (for eager and better exception messages) if the Hadoop classes are available here
    try {
        Class.forName("org.apache.hadoop.conf.Configuration", false, cl);
        Class.forName("org.apache.hadoop.fs.FileSystem", false, cl);
    } catch (ClassNotFoundException e) {
        LOG.info(
                "Hadoop is not in the classpath/dependencies. "
                        + "The extended set of supported File Systems via Hadoop is not available.");
        return new UnsupportedSchemeFactory("Hadoop is not in the classpath/dependencies.");
    }
    // Create the factory.
    try {
        return factoryClass.newInstance();
    } catch (Exception | LinkageError e) {
        LOG.warn("Flink's Hadoop file system factory could not be created", e);
        return new UnsupportedSchemeFactory(
                "Flink's Hadoop file system factory could not be created", e);
    }
}
|
Utility loader for the Hadoop file system factory. We treat the Hadoop FS factory in a
special way, because we use it as a catch-all for file system schemes not directly supported
by Flink.
<p>This method does a set of eager checks for availability of certain classes, to be able to
give better error messages.
|
loadHadoopFsFactory
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
|
Apache-2.0
|
@Internal
public static void initializeSafetyNetForThread() {
    SafetyNetCloseableRegistry oldRegistry = REGISTRIES.get();
    checkState(
            null == oldRegistry,
            "Found an existing FileSystem safety net for this thread: %s "
                    + "This may indicate an accidental repeated initialization, or a leak of the "
                    + "(Inheritable)ThreadLocal through a ThreadPool.",
            oldRegistry);
    SafetyNetCloseableRegistry newRegistry = new SafetyNetCloseableRegistry();
    REGISTRIES.set(newRegistry);
}
|
Activates the safety net for a thread. {@link FileSystem} instances obtained by the thread
that called this method will be guarded, meaning that their created streams are tracked and
can be closed via the safety net closing hook.
<p>This method should be called at the beginning of a thread that should be guarded.
@throws IllegalStateException Thrown, if a safety net was already registered for the thread.
|
initializeSafetyNetForThread
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/FileSystemSafetyNet.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/FileSystemSafetyNet.java
|
Apache-2.0
|
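The thread-local guard pattern used by the safety net above can be sketched with a minimal stand-in. `ThreadGuard` and its set-backed registry are illustrative, not Flink classes, and a plain `ThreadLocal` is used where Flink may use an inheritable one:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

// Minimal sketch of a per-thread "safety net": resources opened by a guarded
// thread are tracked and released by a closing hook.
final class ThreadGuard {
    private static final ThreadLocal<Set<Closeable>> REGISTRIES = new ThreadLocal<>();

    /** Activates the guard for the current thread; fails on repeated initialization. */
    static void initialize() {
        if (REGISTRIES.get() != null) {
            throw new IllegalStateException("Safety net already registered for this thread");
        }
        REGISTRIES.set(new LinkedHashSet<>());
    }

    /** Tracks a resource so the closing hook can release it later. */
    static void track(Closeable c) {
        Set<Closeable> registry = REGISTRIES.get();
        if (registry != null) {
            registry.add(c);
        }
    }

    /** Closing hook: releases every tracked resource and removes the registry. */
    static void closeAllAndUninstall() throws IOException {
        Set<Closeable> registry = REGISTRIES.get();
        REGISTRIES.remove();
        if (registry != null) {
            for (Closeable c : registry) {
                c.close();
            }
        }
    }
}
```

A guarded thread would call `initialize()` at its start and `closeAllAndUninstall()` in a `finally` block at its end.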
default Closeable registerCloseableTemporarily(Closeable closeable) throws IOException {
    registerCloseable(closeable);
    return () -> unregisterCloseable(closeable);
}
|
Same as {@link #registerCloseable(Closeable)} but allows to {@link
#unregisterCloseable(Closeable) unregister} the passed closeable by closing the returned
closeable.
@param closeable Closeable to register.
@return another Closeable that unregisters the passed closeable.
@throws IOException exception when the registry was closed before.
|
registerCloseableTemporarily
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/ICloseableRegistry.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/ICloseableRegistry.java
|
Apache-2.0
|
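Because the default method above returns a `Closeable` whose `close()` unregisters the original resource, callers can scope a registration with try-with-resources. A self-contained sketch of that pattern (the interface and class names are stand-ins, not the actual Flink types):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative re-creation of the default-method pattern from ICloseableRegistry.
interface MiniCloseableRegistry {
    void registerCloseable(Closeable c) throws IOException;

    void unregisterCloseable(Closeable c);

    // Returning a Closeable lets callers scope the registration with try-with-resources.
    default Closeable registerCloseableTemporarily(Closeable c) throws IOException {
        registerCloseable(c);
        return () -> unregisterCloseable(c);
    }
}

final class SetBackedRegistry implements MiniCloseableRegistry {
    private final Set<Closeable> closeables =
            Collections.synchronizedSet(new LinkedHashSet<>());

    @Override
    public void registerCloseable(Closeable c) {
        closeables.add(c);
    }

    @Override
    public void unregisterCloseable(Closeable c) {
        closeables.remove(c);
    }

    int size() {
        return closeables.size();
    }
}
```

Usage: `try (Closeable ignored = registry.registerCloseableTemporarily(stream)) { ... }` keeps `stream` registered only for the duration of the block.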
default RecoverableWriter createRecoverableWriter() throws IOException {
    throw new UnsupportedOperationException(
            "This file system does not support recoverable writers.");
}
|
Creates a new {@link RecoverableWriter}. A recoverable writer creates streams that can
persist and recover their intermediate state. Persisting and recovering intermediate state is
a core building block for writing to files that span multiple checkpoints.
<p>The returned object can act as a shared factory to open and recover multiple streams.
<p>This method is optional on file systems and various file system implementations may not
support this method, throwing an {@code UnsupportedOperationException}.
@return A RecoverableWriter for this file system.
@throws IOException Thrown, if the recoverable writer cannot be instantiated.
|
createRecoverableWriter
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
Apache-2.0
|
default RecoverableWriter createRecoverableWriter(Map<String, String> conf) throws IOException {
    return createRecoverableWriter();
}
|
Creates a new {@link RecoverableWriter}. A recoverable writer creates streams that can
persist and recover their intermediate state. Persisting and recovering intermediate state is
a core building block for writing to files that span multiple checkpoints.
<p>The returned object can act as a shared factory to open and recover multiple streams.
<p>This method is optional on file systems and various file system implementations may not
support this method, throwing an {@code UnsupportedOperationException}.
@param conf Map that contains a flag indicating whether the writer should not write to local
storage, and that can provide more information to instantiate the writer.
@return A RecoverableWriter for this file system.
@throws IOException Thrown, if the recoverable writer cannot be instantiated.
|
createRecoverableWriter
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
Apache-2.0
|
default boolean exists(final Path f) throws IOException {
    try {
        return (getFileStatus(f) != null);
    } catch (FileNotFoundException e) {
        return false;
    }
}
|
Checks if the given file or directory exists.
@param f source file
@return {@code true} if the file exists, {@code false} otherwise
|
exists
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
Apache-2.0
|
default boolean canCopyPaths(Path source, Path destination) throws IOException {
    return false;
}
|
Tells if this {@link FileSystem} supports an optimised way to directly copy between given
paths. In other words if it implements {@link PathsCopyingFileSystem}.
<p>At least one of source or destination belongs to this {@link IFileSystem}; the other may
point to the local file system. In other words, this request can correspond to downloading a
file from the remote file system, uploading a file to the remote file system, or duplicating
a file within the remote file system.
@param source The path of the source file to duplicate
@param destination The path where to duplicate the source file
@return true, if this {@link IFileSystem} can perform this operation more quickly compared to
the generic code path of using streams.
|
canCopyPaths
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/IFileSystem.java
|
Apache-2.0
|
public int getMaxNumOpenOutputStreams() {
    return maxNumOpenOutputStreams;
}
|
Gets the maximum number of concurrently open output streams.
|
getMaxNumOpenOutputStreams
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public int getMaxNumOpenInputStreams() {
    return maxNumOpenInputStreams;
}
|
Gets the maximum number of concurrently open input streams.
|
getMaxNumOpenInputStreams
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public int getMaxNumOpenStreamsTotal() {
    return maxNumOpenStreamsTotal;
}
|
Gets the maximum number of concurrently open streams (input + output).
|
getMaxNumOpenStreamsTotal
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public long getStreamOpenTimeout() {
    return streamOpenTimeoutNanos / 1_000_000;
}
|
Gets the number of milliseconds that opening a stream may wait for availability in the
connection pool.
|
getStreamOpenTimeout
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public long getStreamInactivityTimeout() {
    return streamInactivityTimeoutNanos / 1_000_000;
}
|
Gets the number of milliseconds that a stream may spend without writing any bytes before it
is closed as inactive.
|
getStreamInactivityTimeout
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public int getTotalNumberOfOpenStreams() {
    lock.lock();
    try {
        return numReservedOutputStreams + numReservedInputStreams;
    } finally {
        lock.unlock();
    }
}
|
Gets the total number of open streams (input plus output).
|
getTotalNumberOfOpenStreams
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public int getNumberOfOpenOutputStreams() {
    lock.lock();
    try {
        return numReservedOutputStreams;
    } finally {
        lock.unlock();
    }
}
|
Gets the number of currently open output streams.
|
getNumberOfOpenOutputStreams
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
void unregisterOutputStream(OutStream stream) {
    lock.lock();
    try {
        // only decrement if we actually remove the stream
        if (openOutputStreams.remove(stream)) {
            numReservedOutputStreams--;
            available.signalAll();
        }
    } finally {
        lock.unlock();
    }
}
|
Atomically removes the given output stream from the set of currently open output streams, and
signals that a new stream can now be opened.
|
unregisterOutputStream
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
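The unregister method above is one half of a lock-and-condition accounting scheme: registration blocks while the limit is reached, and unregistration signals waiters. A self-contained sketch of that scheme (class and field names are illustrative, not Flink's):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A bounded pool of stream "slots": register() waits for a free slot,
// unregister() frees one and wakes waiting threads.
final class StreamSlotPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition available = lock.newCondition();
    private final Set<Object> openStreams = new LinkedHashSet<>();
    private final int maxOpen;
    private int numReserved;

    StreamSlotPool(int maxOpen) {
        this.maxOpen = maxOpen;
    }

    /** Blocks until a slot is free, then registers the stream. */
    void register(Object stream) throws InterruptedException {
        lock.lock();
        try {
            while (numReserved >= maxOpen) {
                available.await();
            }
            openStreams.add(stream);
            numReserved++;
        } finally {
            lock.unlock();
        }
    }

    /** Only decrements if the stream was actually tracked, then wakes waiters. */
    void unregister(Object stream) {
        lock.lock();
        try {
            if (openStreams.remove(stream)) {
                numReserved--;
                available.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    int open() {
        lock.lock();
        try {
            return numReserved;
        } finally {
            lock.unlock();
        }
    }
}
```

Guarding the decrement with `Set.remove` makes a double unregister a harmless no-op, which keeps the reservation count from drifting below the number of truly open streams.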
void unregisterInputStream(InStream stream) {
    lock.lock();
    try {
        // only decrement if we actually remove the stream
        if (openInputStreams.remove(stream)) {
            numReservedInputStreams--;
            available.signalAll();
        }
    } finally {
        lock.unlock();
    }
}
|
Atomically removes the given input stream from the set of currently open input streams, and
signals that a new stream can now be opened.
|
unregisterInputStream
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public long getLastCheckTimestampNanos() {
    return lastCheckTimestampNanos;
}
|
Gets the timestamp when the last inactivity evaluation was made.
|
getLastCheckTimestampNanos
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
public boolean checkNewBytesAndMark(long timestamp) throws IOException {
    // remember the time when checked
    lastCheckTimestampNanos = timestamp;
    final long bytesNow = stream.getPos();
    if (bytesNow > lastCheckBytes) {
        lastCheckBytes = bytesNow;
        return true;
    } else {
        return false;
    }
}
|
Checks whether there were new bytes since the last time this method was invoked. This
also sets the given timestamp, to be read via {@link #getLastCheckTimestampNanos()}.
@return True, if there were new bytes, false if not.
|
checkNewBytesAndMark
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/LimitedConnectionsFileSystem.java
|
Apache-2.0
|
private String checkPathArg(String path) {
    // disallow construction of a Path from an empty string
    if (path == null) {
        throw new IllegalArgumentException("Can not create a Path from a null string");
    }
    if (path.length() == 0) {
        throw new IllegalArgumentException("Can not create a Path from an empty string");
    }
    return path;
}
|
Checks if the provided path string is either null or has zero length and throws an {@link
IllegalArgumentException} if either condition applies.
@param path the path string to be checked
@return The checked path.
|
checkPathArg
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
private void initialize(String scheme, String authority, String path) {
    try {
        this.uri = new URI(scheme, authority, normalizePath(path), null, null).normalize();
    } catch (URISyntaxException e) {
        throw new IllegalArgumentException(e);
    }
}
|
Initializes a path object given the scheme, authority and path string.
@param scheme the scheme string.
@param authority the authority string.
@param path the path string.
|
initialize
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
private String normalizePath(String path) {
    // remove consecutive slashes & backslashes
    path = path.replace("\\", "/");
    if (path.contains("//")) {
        path = DUPLICATE_CONSECUTIVE_SLASHES.matcher(path).replaceAll("/");
    }
    // remove trailing separator, unless the path is a UNIX or Windows root path
    if (path.endsWith(SEPARATOR)
            && !path.equals(SEPARATOR) // UNIX root path
            && !WINDOWS_ROOT_DIR_REGEX.matcher(path).matches()) { // Windows root path
        // remove trailing slash
        path = path.substring(0, path.length() - SEPARATOR.length());
    }
    return path;
}
|
Normalizes a path string.
@param path the path string to normalize
@return the normalized path string
|
normalizePath
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
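The normalization rules in `normalizePath` above can be exercised with a self-contained re-creation. The two regex constants are assumptions mirroring the ones the method refers to (`DUPLICATE_CONSECUTIVE_SLASHES` collapsing slash runs, `WINDOWS_ROOT_DIR_REGEX` matching roots like `/C:/`):

```java
import java.util.regex.Pattern;

// Standalone sketch of Path's slash normalization, for experimentation.
final class PathNormalizer {
    private static final String SEPARATOR = "/";
    private static final Pattern DUPLICATE_CONSECUTIVE_SLASHES = Pattern.compile("/{2,}");
    // assumed form of the Windows root pattern, matching e.g. "/C:/"
    private static final Pattern WINDOWS_ROOT_DIR_REGEX = Pattern.compile("/\\p{Alpha}+:/");

    static String normalize(String path) {
        // unify separators and collapse runs of slashes
        path = path.replace("\\", "/");
        if (path.contains("//")) {
            path = DUPLICATE_CONSECUTIVE_SLASHES.matcher(path).replaceAll("/");
        }
        // drop a trailing separator unless the path is a UNIX or Windows root
        if (path.endsWith(SEPARATOR)
                && !path.equals(SEPARATOR)
                && !WINDOWS_ROOT_DIR_REGEX.matcher(path).matches()) {
            path = path.substring(0, path.length() - SEPARATOR.length());
        }
        return path;
    }
}
```

For example, `normalize("a\\b//c/")` yields `"a/b/c"`, while the roots `"/"` and `"/C:/"` are left untouched.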
public URI toUri() {
    return uri;
}
|
Converts the path object to a {@link URI}.
@return the {@link URI} object converted from the path object
|
toUri
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public FileSystem getFileSystem() throws IOException {
    return FileSystem.get(this.toUri());
}
|
Returns the FileSystem that owns this Path.
@return the FileSystem that owns this Path
@throws IOException thrown if the file system could not be retrieved
|
getFileSystem
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public boolean isAbsolute() {
    final int start = hasWindowsDrive(uri.getPath(), true) ? 3 : 0;
    return uri.getPath().startsWith(SEPARATOR, start);
}
|
Checks if this path is absolute.
@return <code>true</code> if this path is absolute, <code>false</code> otherwise
|
isAbsolute
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public String getName() {
    final String path = uri.getPath();
    final int slash = path.lastIndexOf(SEPARATOR);
    return path.substring(slash + 1);
}
|
Returns the final component of this path, i.e., everything that follows the last separator.
@return the final component of the path
|
getName
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public String getPath() {
    return uri.getPath();
}
|
Returns the full path.
@return the full path
|
getPath
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public Path getParent() {
    final String path = uri.getPath();
    final int lastSlash = path.lastIndexOf('/');
    final int start = hasWindowsDrive(path, true) ? 3 : 0;
    if ((path.length() == start) // empty path
            || (lastSlash == start && path.length() == start + 1)) { // at root
        return null;
    }
    String parent;
    if (lastSlash == -1) {
        parent = CUR_DIR;
    } else {
        final int end = hasWindowsDrive(path, true) ? 3 : 0;
        parent = path.substring(0, lastSlash == end ? end + 1 : lastSlash);
    }
    return new Path(uri.getScheme(), uri.getAuthority(), parent);
}
|
Returns the parent of a path, i.e., everything that precedes the last separator or <code>null
</code> if at root.
@return the parent of a path or <code>null</code> if at root.
|
getParent
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public int depth() {
    String path = uri.getPath();
    int depth = 0;
    int slash = path.length() == 1 && path.charAt(0) == '/' ? -1 : 0;
    while (slash != -1) {
        depth++;
        slash = path.indexOf(SEPARATOR, slash + 1);
    }
    return depth;
}
|
Returns the number of elements in this path.
@return the number of elements in this path
|
depth
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public Path makeQualified(FileSystem fs) {
    Path path = this;
    if (!isAbsolute()) {
        path = new Path(fs.getWorkingDirectory(), this);
    }
    final URI pathUri = path.toUri();
    final URI fsUri = fs.getUri();
    String scheme = pathUri.getScheme();
    String authority = pathUri.getAuthority();
    if (scheme != null && (authority != null || fsUri.getAuthority() == null)) {
        return path;
    }
    if (scheme == null) {
        scheme = fsUri.getScheme();
    }
    if (authority == null) {
        authority = fsUri.getAuthority();
        if (authority == null) {
            authority = "";
        }
    }
    return new Path(scheme + "://" + authority + pathUri.getPath());
}
|
Returns a qualified path object.
@param fs the FileSystem that should be used to obtain the current working directory
@return the qualified path object
|
makeQualified
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public boolean hasWindowsDrive() {
    return hasWindowsDrive(uri.getPath(), true);
}
|
Checks if this path contains a Windows drive letter.
@return <code>true</code> if the path contains a Windows drive letter, <code>false</code> otherwise.
|
hasWindowsDrive
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
private boolean hasWindowsDrive(String path, boolean slashed) {
    final int start = slashed ? 1 : 0;
    return path.length() >= start + 2
            && (!slashed || path.charAt(0) == '/')
            && path.charAt(start + 1) == ':'
            && ((path.charAt(start) >= 'A' && path.charAt(start) <= 'Z')
                    || (path.charAt(start) >= 'a' && path.charAt(start) <= 'z'));
}
|
Checks if the provided path string contains a Windows drive letter.
@param path the path to check
@param slashed true to indicate the first character of the string is a slash, false otherwise
@return <code>true</code> if the path string contains a Windows drive letter, <code>false</code> otherwise
|
hasWindowsDrive
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public static Path fromLocalFile(File file) {
    return new Path(file.toURI());
}
|
Creates a path for the given local file.
<p>This method is useful to make sure the path creation for local files works seamlessly
across different operating systems. Especially Windows has slightly different rules for
slashes between the scheme and a local file path, making it sometimes tricky to produce
cross-platform URIs for local files.
@param file The file that the path should represent.
@return A path representing the local file URI of the given file.
|
fromLocalFile
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
@Nullable
public static Path deserializeFromDataInputView(DataInputView in) throws IOException {
    final boolean isNotNull = in.readBoolean();
    Path result = null;
    if (isNotNull) {
        final String scheme = StringUtils.readNullableString(in);
        final String userInfo = StringUtils.readNullableString(in);
        final String host = StringUtils.readNullableString(in);
        final int port = in.readInt();
        final String path = StringUtils.readNullableString(in);
        final String query = StringUtils.readNullableString(in);
        final String fragment = StringUtils.readNullableString(in);
        try {
            result = new Path(new URI(scheme, userInfo, host, port, path, query, fragment));
        } catch (URISyntaxException e) {
            throw new IOException("Error reconstructing URI", e);
        }
    }
    return result;
}
|
Deserializes a Path from the given {@link DataInputView}.
@param in the data input view.
@return the path, or {@code null} if none was serialized
@throws IOException if an error happened.
|
deserializeFromDataInputView
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
public static void serializeToDataOutputView(Path path, DataOutputView out) throws IOException {
    URI uri = path.toUri();
    if (uri == null) {
        out.writeBoolean(false);
    } else {
        out.writeBoolean(true);
        StringUtils.writeNullableString(uri.getScheme(), out);
        StringUtils.writeNullableString(uri.getUserInfo(), out);
        StringUtils.writeNullableString(uri.getHost(), out);
        out.writeInt(uri.getPort());
        StringUtils.writeNullableString(uri.getPath(), out);
        StringUtils.writeNullableString(uri.getQuery(), out);
        StringUtils.writeNullableString(uri.getFragment(), out);
    }
}
|
Serializes the path to the given {@link DataOutputView}.
@param path the file path.
@param out the data output view.
@throws IOException if an error happened.
|
serializeToDataOutputView
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/Path.java
|
Apache-2.0
|
@Override
default boolean canCopyPaths(Path source, Path destination) throws IOException {
    return true;
}
|
Declares that this {@link PathsCopyingFileSystem} supports copying between the given paths
directly; always returns {@code true}.
|
canCopyPaths
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/PathsCopyingFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/PathsCopyingFileSystem.java
|
Apache-2.0
|
private boolean delete(final File f) throws IOException {
    if (f.isDirectory()) {
        final File[] files = f.listFiles();
        if (files != null) {
            for (File file : files) {
                final boolean del = delete(file);
                if (!del) {
                    return false;
                }
            }
        }
    } else {
        return f.delete();
    }
    // Now directory is empty
    return f.delete();
}
|
Deletes the given file or directory.
@param f the file to be deleted
@return <code>true</code> if all files were deleted successfully, <code>false</code>
otherwise
@throws IOException thrown if an error occurred while deleting the files/directories
|
delete
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
Apache-2.0
|
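The depth-first delete above works on plain `java.io.File` trees, so it can be reproduced outside Flink. A minimal sketch of the same shape (the class name is illustrative):

```java
import java.io.File;

// Depth-first delete: children first, then the (now empty) directory itself.
final class RecursiveDelete {
    static boolean delete(File f) {
        if (f.isDirectory()) {
            File[] files = f.listFiles(); // null on I/O error or non-directory
            if (files != null) {
                for (File file : files) {
                    if (!delete(file)) {
                        return false;
                    }
                }
            }
            // directory is empty now
            return f.delete();
        }
        return f.delete();
    }
}
```

Note the null check on `listFiles()`: it returns `null` rather than throwing when the directory cannot be listed, so skipping it would risk a `NullPointerException`.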
public static URI getLocalFsURI() {
    return LOCAL_URI;
}
|
Gets the URI that represents the local file system. That URI is {@code "file:/"} on Windows
platforms and {@code "file:///"} on other UNIX family platforms.
@return The URI that represents the local file system.
|
getLocalFsURI
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
Apache-2.0
|
public static LocalFileSystem getSharedInstance() {
    return INSTANCE;
}
|
Gets the shared instance of this file system.
@return The shared instance of this file system.
|
getSharedInstance
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/fs/local/LocalFileSystem.java
|
Apache-2.0
|
@Override
public int getSplitNumber() {
    return this.splitNumber;
}
|
Gets the number of this input split.
@return The number of this input split.
|
getSplitNumber
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/LocatableInputSplit.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/LocatableInputSplit.java
|
Apache-2.0
|
@Override
public void write(DataOutputView out) throws IOException {
    out.write(VERSIONED_IDENTIFIER);
    super.write(out);
}
|
Writes this object to the provided {@link DataOutputView out}: first the {@code
VERSIONED_IDENTIFIER}, then the versioned data as written by {@link
VersionedIOReadableWritable#write(DataOutputView)}.
|
write
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
Apache-2.0
|
public final void read(InputStream inputStream) throws IOException {
    byte[] tmp = new byte[VERSIONED_IDENTIFIER.length];
    int totalRead = IOUtils.tryReadFully(inputStream, tmp);
    if (Arrays.equals(tmp, VERSIONED_IDENTIFIER)) {
        DataInputView inputView = new DataInputViewStreamWrapper(inputStream);
        super.read(inputView);
        read(inputView, true);
    } else {
        InputStream streamToRead = inputStream;
        if (totalRead > 0) {
            PushbackInputStream resetStream = new PushbackInputStream(inputStream, totalRead);
            resetStream.unread(tmp, 0, totalRead);
            streamToRead = resetStream;
        }
        read(new DataInputViewStreamWrapper(streamToRead), false);
    }
}
|
This read attempts to first identify if the input view contains the special {@link
#VERSIONED_IDENTIFIER} by reading and buffering the first few bytes. If identified to be
versioned, the usual version resolution read path in {@link
VersionedIOReadableWritable#read(DataInputView)} is invoked. Otherwise, we "reset" the input
stream by pushing back the read buffered bytes into the stream.
|
read
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
Apache-2.0
|
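The "peek at a magic header, push back if absent" technique above can be shown with standard library streams alone. `MAGIC` is an illustrative placeholder, not Flink's actual `VERSIONED_IDENTIFIER` bytes:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.Arrays;

// Sketch: detect a versioned header without losing bytes on a non-seekable stream.
final class VersionedHeaderReader {
    static final byte[] MAGIC = {-15, 96, 51, -26};

    /**
     * Tries to read the identifier. On success, out[0] is the original stream
     * positioned right after it; otherwise out[0] has the peeked bytes pushed back.
     */
    static boolean wasVersioned(InputStream in, InputStream[] out) throws IOException {
        byte[] tmp = new byte[MAGIC.length];
        int totalRead = 0;
        // read up to MAGIC.length bytes, tolerating short reads and EOF
        while (totalRead < tmp.length) {
            int r = in.read(tmp, totalRead, tmp.length - totalRead);
            if (r < 0) {
                break;
            }
            totalRead += r;
        }
        if (totalRead == MAGIC.length && Arrays.equals(tmp, MAGIC)) {
            out[0] = in;
            return true;
        }
        // "reset" the stream by pushing the buffered bytes back
        PushbackInputStream pushback = new PushbackInputStream(in, MAGIC.length);
        if (totalRead > 0) {
            pushback.unread(tmp, 0, totalRead);
        }
        out[0] = pushback;
        return false;
    }
}
```

This mirrors why the class rejects plain `DataInputView` input: the pushback trick needs an `InputStream` it can wrap.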
@Override
public final void read(DataInputView in) throws IOException {
    throw new UnsupportedOperationException(
            "PostVersionedIOReadableWritable cannot read from a DataInputView.");
}
|
We do not support reading from a {@link DataInputView}, because it does not support pushing
back already read bytes.
|
read
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/PostVersionedIOReadableWritable.java
|
Apache-2.0
|
public static <T> void writeVersionAndSerialize(
        SimpleVersionedSerializer<T> serializer, T datum, DataOutputView out)
        throws IOException {
    checkNotNull(serializer, "serializer");
    checkNotNull(datum, "datum");
    checkNotNull(out, "out");
    final byte[] data = serializer.serialize(datum);
    out.writeInt(serializer.getVersion());
    out.writeInt(data.length);
    out.write(data);
}
|
Serializes the version and datum into a stream.
<p>Data serialized via this method can be deserialized via {@link
#readVersionAndDeSerialize(SimpleVersionedSerializer, DataInputView)}.
<p>The first four bytes will be occupied by the version, as returned by {@link
SimpleVersionedSerializer#getVersion()}. The next four bytes hold the length of the
serialized datum, followed by the datum itself, as produced by {@link
SimpleVersionedSerializer#serialize(Object)}. The data written to the stream is hence eight
bytes larger than the serialized datum.
@param serializer The serializer to serialize the datum with.
@param datum The datum to serialize.
@param out The stream to serialize to.
|
writeVersionAndSerialize
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
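The wire format described above (4-byte version, 4-byte length, then the serialized bytes) can be round-tripped with plain `DataOutputStream`/`DataInputStream`. The string serializer here is a toy stand-in for `SimpleVersionedSerializer`, not the Flink interface:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Round-trip sketch of the version-prefixed wire format.
final class VersionedWireFormat {
    static final int VERSION = 1;

    static byte[] writeVersionAndSerialize(String datum) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        byte[] data = datum.getBytes(StandardCharsets.UTF_8);
        out.writeInt(VERSION);     // first four bytes: the version
        out.writeInt(data.length); // next four bytes: the datum length
        out.write(data);           // then the datum itself
        return bytes.toByteArray();
    }

    static String readVersionAndDeserialize(byte[] serialized) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(serialized));
        int version = in.readInt();
        if (version != VERSION) {
            throw new IOException("Unknown version: " + version);
        }
        byte[] data = new byte[in.readInt()];
        in.readFully(data); // read exactly length-many bytes
        return new String(data, StandardCharsets.UTF_8);
    }
}
```

The output is eight bytes larger than the serialized datum, matching the format description; reading the version first is what lets a deserializer dispatch to older formats.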
public static <T> void writeVersionAndSerializeList(
        SimpleVersionedSerializer<T> serializer, List<T> data, DataOutputView out)
        throws IOException {
    checkNotNull(serializer);
    checkNotNull(data);
    checkNotNull(out);
    out.writeInt(serializer.getVersion());
    out.writeInt(data.size());
    for (final T datum : data) {
        // serialize each datum once and reuse the bytes for both length and payload
        final byte[] serializedDatum = serializer.serialize(datum);
        out.writeInt(serializedDatum.length);
        out.write(serializedDatum);
    }
}
|
Serializes the version and data into a stream.
<p>Data serialized via this method can be deserialized via {@link
#readVersionAndDeserializeList(SimpleVersionedSerializer, DataInputView)}.
<p>The first eight bytes will be occupied by the version, as returned by {@link
SimpleVersionedSerializer#getVersion()}, and by the length of the list. The remaining bytes
will be the serialized data, as produced by {@link SimpleVersionedSerializer#serialize(Object)},
each element prefixed by its length.
@param serializer The serializer to serialize the datum with.
@param data The list of data to serialize.
@param out The stream to serialize to.
|
writeVersionAndSerializeList
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
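The list variant adds an element count after the version, and a length prefix before each element. A minimal sketch of that framing, assuming UTF-8 strings as a stand-in for a real `SimpleVersionedSerializer` and `ByteBuffer` in place of `DataOutputView`:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ListFraming {
    // Mirrors writeVersionAndSerializeList: version, element count,
    // then a (length, bytes) pair per element. Big-endian throughout.
    static byte[] frameList(int version, List<String> data) {
        int total = 8; // version + count
        for (String datum : data) {
            total += 4 + datum.getBytes(StandardCharsets.UTF_8).length;
        }
        ByteBuffer out = ByteBuffer.allocate(total);
        out.putInt(version);
        out.putInt(data.size());
        for (String datum : data) {
            byte[] bytes = datum.getBytes(StandardCharsets.UTF_8);
            out.putInt(bytes.length); // per-element length prefix
            out.put(bytes);           // each element serialized exactly once
        }
        return out.array();
    }

    public static void main(String[] args) {
        // 8-byte header + (4 + 2) + (4 + 3) = 21 bytes.
        System.out.println(frameList(2, Arrays.asList("ab", "cde")).length); // 21
    }
}
```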
public static <T> T readVersionAndDeSerialize(
SimpleVersionedSerializer<T> serializer, DataInputView in) throws IOException {
checkNotNull(serializer, "serializer");
checkNotNull(in, "in");
final int version = in.readInt();
final int length = in.readInt();
final byte[] data = new byte[length];
in.readFully(data);
return serializer.deserialize(version, data);
}
|
Deserializes the version and datum from a stream.
<p>This method deserializes data serialized via {@link
#writeVersionAndSerialize(SimpleVersionedSerializer, Object, DataOutputView)}.
<p>The first four bytes will be interpreted as the version. The next four bytes will be
interpreted as the length of the datum bytes, then length-many bytes will be read. Finally,
the datum is deserialized via the {@link SimpleVersionedSerializer#deserialize(int, byte[])}
method.
@param serializer The serializer to serialize the datum with.
@param in The stream to deserialize from.
|
readVersionAndDeSerialize
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
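The reading side consumes the same framing: version, length, then exactly that many payload bytes. A sketch using `ByteBuffer` instead of `DataInputView` (the names here are illustrative); note that `readFully` in the original, like `ByteBuffer.get(byte[])` here, demands the full length rather than tolerating a short read:

```java
import java.nio.ByteBuffer;

public class VersionedReading {
    // Mirrors readVersionAndDeSerialize: read the version, then the length,
    // then exactly 'length' payload bytes.
    static byte[] readFramed(ByteBuffer in, int[] versionOut) {
        versionOut[0] = in.getInt();
        byte[] data = new byte[in.getInt()];
        in.get(data); // like readFully: fails rather than returning fewer bytes
        return data;
    }

    public static void main(String[] args) {
        ByteBuffer framed = ByteBuffer.allocate(10)
                .putInt(3)               // version
                .putInt(2)               // length
                .put(new byte[] {7, 8}); // payload
        framed.flip();
        int[] version = new int[1];
        byte[] payload = readFramed(framed, version);
        System.out.println(version[0] + " " + payload.length); // 3 2
    }
}
```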
public static <T> List<T> readVersionAndDeserializeList(
SimpleVersionedSerializer<T> serializer, DataInputView in) throws IOException {
checkNotNull(serializer);
checkNotNull(in);
final int serializerVersion = in.readInt();
final int dataSize = in.readInt();
final List<T> data = new ArrayList<>();
for (int ignored = 0; ignored < dataSize; ignored++) {
final int datumSize = in.readInt();
final byte[] datum = new byte[datumSize];
in.readFully(datum);
data.add(serializer.deserialize(serializerVersion, datum));
}
return data;
}
|
Deserializes the version and data from a stream.
<p>This method deserializes data serialized via {@link
#writeVersionAndSerializeList(SimpleVersionedSerializer, List, DataOutputView)} .
<p>The first four bytes will be interpreted as the version. The next four bytes will be
interpreted as the length of the list; then that many elements will be read, each prefixed
by its own length, and deserialized via the {@link SimpleVersionedSerializer#deserialize(int,
byte[])} method.
@param serializer The serializer to serialize the datum with.
@param in The stream to deserialize from.
|
readVersionAndDeserializeList
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
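Deserializing the list reverses the framing: one shared serializer version for all elements, then `(length, bytes)` pairs, count times. A sketch under the same assumptions as before (UTF-8 strings as the datum type, `ByteBuffer` in place of `DataInputView`):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ListReading {
    // Mirrors readVersionAndDeserializeList: one shared version,
    // then (length, bytes) pairs, count times.
    static List<String> readList(ByteBuffer in) {
        int version = in.getInt(); // shared version; a real serializer would use it
        int count = in.getInt();
        List<String> result = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            byte[] datum = new byte[in.getInt()]; // per-element length prefix
            in.get(datum);
            result.add(new String(datum, StandardCharsets.UTF_8));
        }
        return result;
    }

    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(19);
        buffer.putInt(1).putInt(2); // version, two elements
        buffer.putInt(2).put("ab".getBytes(StandardCharsets.UTF_8));
        buffer.putInt(1).put("c".getBytes(StandardCharsets.UTF_8));
        buffer.flip();
        System.out.println(readList(buffer)); // [ab, c]
    }
}
```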
public static <T> byte[] writeVersionAndSerialize(
SimpleVersionedSerializer<T> serializer, T datum) throws IOException {
checkNotNull(serializer, "serializer");
checkNotNull(datum, "datum");
final byte[] data = serializer.serialize(datum);
final byte[] versionAndData = new byte[data.length + 8];
final int version = serializer.getVersion();
versionAndData[0] = (byte) (version >> 24);
versionAndData[1] = (byte) (version >> 16);
versionAndData[2] = (byte) (version >> 8);
versionAndData[3] = (byte) version;
final int length = data.length;
versionAndData[4] = (byte) (length >> 24);
versionAndData[5] = (byte) (length >> 16);
versionAndData[6] = (byte) (length >> 8);
versionAndData[7] = (byte) length;
// move the data to the array
System.arraycopy(data, 0, versionAndData, 8, data.length);
return versionAndData;
}
|
Serializes the version and datum into a byte array. The first four bytes will be occupied by
the version (as returned by {@link SimpleVersionedSerializer#getVersion()}), and the next
four bytes by the length of the serialized datum, both written in <i>big-endian</i>
encoding. The remaining bytes will be the serialized datum, as produced by
{@link SimpleVersionedSerializer#serialize(Object)}. The resulting array will hence be eight
bytes larger than the serialized datum.
<p>Data serialized via this method can be deserialized via {@link
#readVersionAndDeSerialize(SimpleVersionedSerializer, byte[])}.
@param serializer The serializer to serialize the datum with.
@param datum The datum to serialize.
@return A byte array containing the serialized version and serialized datum.
@throws IOException Exceptions from the {@link SimpleVersionedSerializer#serialize(Object)}
method are forwarded.
|
writeVersionAndSerialize
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
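The shift pattern in the method above is simply a manual big-endian encoding of a 32-bit integer: the most significant byte goes to the lowest index. A small self-contained sketch of the same idiom (names here are illustrative):

```java
public class BigEndianPacking {
    // The same shift pattern the method uses to write an int in big-endian order.
    static void putIntBigEndian(byte[] target, int offset, int value) {
        target[offset]     = (byte) (value >> 24);
        target[offset + 1] = (byte) (value >> 16);
        target[offset + 2] = (byte) (value >> 8);
        target[offset + 3] = (byte) value;
    }

    public static void main(String[] args) {
        byte[] header = new byte[8];
        putIntBigEndian(header, 0, 1);   // version
        putIntBigEndian(header, 4, 260); // length 260 = 0x00000104
        System.out.println(header[3] + " " + header[6] + " " + header[7]); // 1 1 4
    }
}
```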
public static <T> T readVersionAndDeSerialize(
SimpleVersionedSerializer<T> serializer, byte[] bytes) throws IOException {
checkNotNull(serializer, "serializer");
checkNotNull(bytes, "bytes");
checkArgument(bytes.length >= 8, "byte array below minimum length (8 bytes)");
final byte[] dataOnly = Arrays.copyOfRange(bytes, 8, bytes.length);
final int version =
((bytes[0] & 0xff) << 24)
| ((bytes[1] & 0xff) << 16)
| ((bytes[2] & 0xff) << 8)
| (bytes[3] & 0xff);
final int length =
((bytes[4] & 0xff) << 24)
| ((bytes[5] & 0xff) << 16)
| ((bytes[6] & 0xff) << 8)
| (bytes[7] & 0xff);
if (length == dataOnly.length) {
return serializer.deserialize(version, dataOnly);
} else {
throw new IOException(
"Corrupt data, conflicting lengths. Length fields: "
+ length
+ ", data: "
+ dataOnly.length);
}
}
|
Deserializes the version and datum from a byte array. The first four bytes will be read as
the version and the next four bytes as the length of the datum, both in <i>big-endian</i>
encoding. The remaining bytes will be passed to the serializer for deserialization, via
{@link SimpleVersionedSerializer#deserialize(int, byte[])}, after checking that their count
matches the encoded length.
@param serializer The serializer to deserialize the datum with.
@param bytes The bytes to deserialize from.
@return The deserialized datum.
@throws IOException Exceptions from the {@link SimpleVersionedSerializer#deserialize(int,
byte[])} method are forwarded.
|
readVersionAndDeSerialize
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/SimpleVersionedSerialization.java
|
Apache-2.0
|
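Decoding reverses the shifts, masking each byte with `0xff` so that sign extension of negative bytes does not corrupt the result, and the decoded length is cross-checked against the bytes actually present, as in the method above. A sketch (the class name is illustrative; the original throws `IOException` on a mismatch, kept here):

```java
import java.io.IOException;
import java.util.Arrays;

public class BigEndianUnpacking {
    // The masking-and-shifting idiom the method uses to read a big-endian int.
    static int getIntBigEndian(byte[] bytes, int offset) {
        return ((bytes[offset] & 0xff) << 24)
                | ((bytes[offset + 1] & 0xff) << 16)
                | ((bytes[offset + 2] & 0xff) << 8)
                | (bytes[offset + 3] & 0xff);
    }

    // Mirrors readVersionAndDeSerialize(byte[]): the stored length must match
    // the number of bytes actually present, otherwise the data is corrupt.
    static byte[] unwrap(byte[] framed) throws IOException {
        int length = getIntBigEndian(framed, 4);
        byte[] data = Arrays.copyOfRange(framed, 8, framed.length);
        if (length != data.length) {
            throw new IOException("Corrupt data, conflicting lengths");
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        byte[] framed = {0, 0, 0, 1, 0, 0, 0, 2, 9, 10};
        System.out.println(unwrap(framed).length); // 2
    }
}
```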
public int getReadVersion() {
return (readVersion == Integer.MIN_VALUE) ? getVersion() : readVersion;
}
|
Returns the found serialization version. If this instance was not read from serialized bytes
but simply instantiated, then the current version is returned.
@return the read serialization version, or the current version if the instance was not read
from bytes.
|
getReadVersion
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/io/VersionedIOReadableWritable.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/io/VersionedIOReadableWritable.java
|
Apache-2.0
|
public void setBuffer(byte[] buffer, int off, int len) {
setSegment(MemorySegmentFactory.wrap(buffer), off, len);
}
|
Un-synchronized stream similar to Java's ByteArrayInputStream that also exposes the current
position.
|
setBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/ByteArrayInputStreamWithPos.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/ByteArrayInputStreamWithPos.java
|
Apache-2.0
|
@Override
public void skipBytesToRead(int numBytes) throws IOException {
if (skipBytes(numBytes) != numBytes) {
throw new EOFException("Could not skip " + numBytes + " bytes.");
}
}
|
Utility class that turns an {@link InputStream} into a {@link DataInputView}.
|
skipBytesToRead
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/DataInputViewStreamWrapper.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/DataInputViewStreamWrapper.java
|
Apache-2.0
|
public ByteBuffer wrapAsByteBuffer() {
this.wrapper.position(0);
this.wrapper.limit(this.position);
return this.wrapper;
}
|
A simple and efficient serializer for the {@link java.io.DataOutput} interface.
|
wrapAsByteBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/DataOutputSerializer.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/DataOutputSerializer.java
|
Apache-2.0
|
public byte[] getSharedBuffer() {
return buffer;
}
|
Gets a reference to the internal byte buffer. This buffer may be larger than the actual
serialized data. Only the bytes from zero to {@link #length()} are valid. The buffer will
also be overwritten with the next write calls.
<p>This method is useful when trying to avoid byte copies, but should be used carefully.
@return A reference to the internal shared and reused buffer.
|
getSharedBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/DataOutputSerializer.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/DataOutputSerializer.java
|
Apache-2.0
|
public int size() {
return size;
}
|
Gets the size of the memory segment, in bytes.
@return The size of the memory segment.
|
size
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
@VisibleForTesting
public boolean isFreed() {
return address > addressLimit;
}
|
Checks whether the memory segment was freed.
@return <tt>true</tt>, if the memory segment has been freed, <tt>false</tt> otherwise.
|
isFreed
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void free() {
if (isFreedAtomic.getAndSet(true)) {
// the segment has already been freed
if (checkMultipleFree) {
throw new IllegalStateException("MemorySegment can be freed only once!");
}
} else {
// this ensures we can place no more data and trigger
// the checks for the freed segment
address = addressLimit + 1;
offHeapBuffer = null; // to enable GC of unsafe memory
if (cleaner != null) {
cleaner.run();
cleaner = null;
}
}
}
|
Frees this memory segment.
<p>After this operation has been called, no further operations are possible on the memory
segment; they will fail. The actual memory (heap or off-heap) will only be released after
this memory segment object has become garbage collected.
|
free
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public boolean isOffHeap() {
return heapMemory == null;
}
|
Checks whether this memory segment is backed by off-heap memory.
@return <tt>true</tt>, if the memory segment is backed by off-heap memory, <tt>false</tt> if
it is backed by heap memory.
|
isOffHeap
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public byte[] getArray() {
if (heapMemory != null) {
return heapMemory;
} else {
throw new IllegalStateException("Memory segment does not represent heap memory");
}
}
|
Returns the byte array of on-heap memory segments.
@return underlying byte array
@throws IllegalStateException if the memory segment does not represent on-heap memory
|
getArray
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public ByteBuffer getOffHeapBuffer() {
if (offHeapBuffer != null) {
return offHeapBuffer;
} else {
throw new IllegalStateException("Memory segment does not represent off-heap buffer");
}
}
|
Returns the off-heap buffer of memory segments.
@return underlying off-heap buffer
@throws IllegalStateException if the memory segment does not represent off-heap buffer
|
getOffHeapBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public long getAddress() {
if (heapMemory == null) {
return address;
} else {
throw new IllegalStateException("Memory segment does not represent off heap memory");
}
}
|
Returns the memory address of off-heap memory segments.
@return absolute memory address outside the heap
@throws IllegalStateException if the memory segment does not represent off-heap memory
|
getAddress
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
@Nullable
public Object getOwner() {
return owner;
}
|
Gets the owner of this memory segment. Returns null, if the owner was not set.
@return The owner of the memory segment, or null, if it does not have an owner.
|
getOwner
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public byte get(int index) {
final long pos = address + index;
if (index >= 0 && pos < addressLimit) {
return UNSAFE.getByte(heapMemory, pos);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Reads the byte at the given position.
@param index The position from which the byte will be read
@return The byte at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger or equal to the
size of the memory segment.
|
get
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
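The two-branch check in `get(int)` is worth unpacking: `free()` sets `address = addressLimit + 1`, so after freeing, every access fails the first test and then the `address > addressLimit` branch distinguishes "segment freed" from a plain bad index. A simplified sketch of this address arithmetic, using an array index in place of an `Unsafe` address (field names mirror the originals, but this is a model, not the real segment):

```java
public class BoundsCheckSketch {
    // Simplified model: 'address' is the base, 'addressLimit' is base + size.
    // free() sets address = addressLimit + 1, so every subsequent index makes
    // pos exceed the limit and the "freed" branch fires.
    long address = 0;
    long addressLimit = 16; // a 16-byte segment
    byte[] heap = new byte[16];

    byte get(int index) {
        long pos = address + index;
        if (index >= 0 && pos < addressLimit) {
            return heap[(int) pos];
        } else if (address > addressLimit) {
            throw new IllegalStateException("segment has been freed");
        } else {
            // index is in fact invalid
            throw new IndexOutOfBoundsException();
        }
    }

    void free() {
        address = addressLimit + 1; // poisons all subsequent accesses
    }

    public static void main(String[] args) {
        BoundsCheckSketch seg = new BoundsCheckSketch();
        seg.heap[3] = 7;
        System.out.println(seg.get(3)); // 7
        seg.free();
        try {
            seg.get(3);
        } catch (IllegalStateException e) {
            System.out.println("freed");
        }
    }
}
```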
public void put(int index, byte b) {
final long pos = address + index;
if (index >= 0 && pos < addressLimit) {
UNSAFE.putByte(heapMemory, pos, b);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Writes the given byte into this buffer at the given position.
@param index The index at which the byte will be written.
@param b The byte value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger or equal to the
size of the memory segment.
|
put
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void get(int index, byte[] dst) {
get(index, dst, 0, dst.length);
}
|
Bulk get method. Copies dst.length bytes of memory from the specified position to the
destination array.
@param index The position at which the first byte will be read.
@param dst The array into which the memory will be copied.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or too large such that
    the data between the index and the end of the memory segment is not enough to fill the
    destination array.
|
get
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void put(int index, byte[] src) {
put(index, src, 0, src.length);
}
|
Bulk put method. Copies src.length bytes of memory from the source array into the memory
segment beginning at the specified position.
@param index The index in the memory segment, where the data is put.
@param src The source array to copy the data from.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or too large such that
    the array size exceeds the amount of memory between the index and the memory segment's
    end.
|
put
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void get(int index, byte[] dst, int offset, int length) {
// check the byte array offset and length and the status
if ((offset | length | (offset + length) | (dst.length - (offset + length))) < 0) {
throw new IndexOutOfBoundsException();
}
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - length) {
final long arrayAddress = BYTE_ARRAY_BASE_OFFSET + offset;
UNSAFE.copyMemory(heapMemory, pos, dst, arrayAddress, length);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
throw new IndexOutOfBoundsException(
String.format(
"pos: %d, length: %d, index: %d, offset: %d",
pos, length, index, offset));
}
}
|
Bulk get method. Copies length bytes of memory from the specified position to the
destination array, beginning at the given offset.
@param index The position at which the first byte will be read.
@param dst The array into which the memory will be copied.
@param offset The copying offset in the destination array.
@param length The number of bytes to be copied.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or too large such that
    the requested number of bytes exceeds the amount of memory between the index and the
    memory segment's end.
|
get
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
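The one-expression guard `(offset | length | (offset + length) | (dst.length - (offset + length))) < 0` used above validates the whole range in a single branch: a negative offset, a negative length, an overflowing sum, or a range running past the end of the array each force the sign bit of one OR'd term, so the OR of all four is negative exactly when any condition fails. A sketch of the idiom (names are illustrative):

```java
public class RangeCheckSketch {
    // True when [offset, offset + length) is a valid range within an array
    // of the given size. Any negative input, overflow in offset + length, or
    // a range past the end sets the sign bit of one of the OR'd terms.
    static boolean validRange(int arrayLength, int offset, int length) {
        return (offset | length | (offset + length)
                | (arrayLength - (offset + length))) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(validRange(10, 2, 8));  // true: exactly fits
        System.out.println(validRange(10, 2, 9));  // false: runs past the end
        System.out.println(validRange(10, -1, 5)); // false: negative offset
        System.out.println(validRange(10, 5, Integer.MAX_VALUE)); // false: sum overflows
    }
}
```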
public void put(int index, byte[] src, int offset, int length) {
// check the byte array offset and length
if ((offset | length | (offset + length) | (src.length - (offset + length))) < 0) {
throw new IndexOutOfBoundsException();
}
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - length) {
final long arrayAddress = BYTE_ARRAY_BASE_OFFSET + offset;
UNSAFE.copyMemory(src, arrayAddress, heapMemory, pos, length);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Bulk put method. Copies length bytes starting at position offset from the source array into
the memory segment starting at the specified index.
@param index The position in the memory segment, where the data is put.
@param src The source array to copy the data from.
@param offset The offset in the source array where the copying is started.
@param length The number of bytes to copy.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or too large such that
    the array portion to copy exceeds the amount of memory between the index and the memory
    segment's end.
|
put
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public boolean getBoolean(int index) {
return get(index) != 0;
}
|
Reads one byte at the given position and returns its boolean representation.
@param index The position from which the memory will be read.
@return The boolean value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 1.
|
getBoolean
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putBoolean(int index, boolean value) {
put(index, (byte) (value ? 1 : 0));
}
|
Writes one byte representing the boolean value into this buffer at the given position.
@param index The position at which the memory will be written.
@param value The boolean value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 1.
|
putBoolean
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
@SuppressWarnings("restriction")
public char getChar(int index) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 2) {
return UNSAFE.getChar(heapMemory, pos);
} else if (address > addressLimit) {
throw new IllegalStateException("This segment has been freed.");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Reads a char value from the given position, in the system's native byte order.
@param index The position from which the memory will be read.
@return The char value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getChar
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public char getCharLittleEndian(int index) {
if (LITTLE_ENDIAN) {
return getChar(index);
} else {
return Character.reverseBytes(getChar(index));
}
}
|
Reads a character value (16 bit, 2 bytes) from the given position, in little-endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #getChar(int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #getChar(int)} is the
preferable choice.
@param index The position from which the value will be read.
@return The character value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getCharLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
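The endianness-specific accessors all reduce to the same pattern: read in the system's native order, then swap bytes only when the native order differs from the requested one, which is why they can be slower than the plain native-order call. A sketch of the pattern using `java.nio.ByteOrder` to detect the native order and `Character.reverseBytes` for the swap (the segment itself is replaced by a plain char here):

```java
import java.nio.ByteOrder;

public class EndianSwapSketch {
    static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;

    // Interpret a char read in native order as a little-endian value:
    // a no-op on little-endian hardware, a byte swap otherwise.
    static char toLittleEndian(char nativeValue) {
        return LITTLE_ENDIAN ? nativeValue : Character.reverseBytes(nativeValue);
    }

    public static void main(String[] args) {
        // reverseBytes swaps the two bytes of a char: 0x0102 <-> 0x0201.
        System.out.println(Integer.toHexString(Character.reverseBytes((char) 0x0102))); // 201
    }
}
```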
public char getCharBigEndian(int index) {
if (LITTLE_ENDIAN) {
return Character.reverseBytes(getChar(index));
} else {
return getChar(index);
}
}
|
Reads a character value (16 bit, 2 bytes) from the given position, in big-endian byte order.
This method's speed depends on the system's native byte order, and it is possibly slower than
{@link #getChar(int)}. For most cases (such as transient storage in memory or serialization
for I/O and network), it suffices to know that the byte order in which the value is written
is the same as the one in which it is read, and {@link #getChar(int)} is the preferable
choice.
@param index The position from which the value will be read.
@return The character value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getCharBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
@SuppressWarnings("restriction")
public void putChar(int index, char value) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 2) {
UNSAFE.putChar(heapMemory, pos, value);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Writes a char value to the given position, in the system's native byte order.
@param index The position at which the memory will be written.
@param value The char value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putChar
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putCharLittleEndian(int index, char value) {
if (LITTLE_ENDIAN) {
putChar(index, value);
} else {
putChar(index, Character.reverseBytes(value));
}
}
|
Writes the given character (16 bit, 2 bytes) to the given position in little-endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #putChar(int, char)}. For most cases (such as transient storage in memory
or serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putChar(int, char)}
is the preferable choice.
@param index The position at which the value will be written.
@param value The char value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putCharLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putCharBigEndian(int index, char value) {
if (LITTLE_ENDIAN) {
putChar(index, Character.reverseBytes(value));
} else {
putChar(index, value);
}
}
|
Writes the given character (16 bit, 2 bytes) to the given position in big-endian byte order.
This method's speed depends on the system's native byte order, and it is possibly slower than
{@link #putChar(int, char)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putChar(int, char)}
is the preferable choice.
@param index The position at which the value will be written.
@param value The char value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putCharBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public short getShort(int index) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 2) {
return UNSAFE.getShort(heapMemory, pos);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Reads a short integer value (16 bit, 2 bytes) from the given position, composing them into a
short value according to the current byte order.
@param index The position from which the memory will be read.
@return The short value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getShort
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public short getShortLittleEndian(int index) {
if (LITTLE_ENDIAN) {
return getShort(index);
} else {
return Short.reverseBytes(getShort(index));
}
}
|
Reads a short integer value (16 bit, 2 bytes) from the given position, in little-endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #getShort(int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #getShort(int)} is
the preferable choice.
@param index The position from which the value will be read.
@return The short value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getShortLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public short getShortBigEndian(int index) {
if (LITTLE_ENDIAN) {
return Short.reverseBytes(getShort(index));
} else {
return getShort(index);
}
}
|
Reads a short integer value (16 bit, 2 bytes) from the given position, in big-endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #getShort(int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #getShort(int)} is
the preferable choice.
@param index The position from which the value will be read.
@return The short value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
getShortBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putShort(int index, short value) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 2) {
UNSAFE.putShort(heapMemory, pos, value);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Writes the given short value into this buffer at the given position, using the native byte
order of the system.
@param index The position at which the value will be written.
@param value The short value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putShort
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putShortLittleEndian(int index, short value) {
if (LITTLE_ENDIAN) {
putShort(index, value);
} else {
putShort(index, Short.reverseBytes(value));
}
}
|
Writes the given short integer value (16 bit, 2 bytes) to the given position in little-endian
byte order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #putShort(int, short)}. For most cases (such as transient storage in
memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#putShort(int, short)} is the preferable choice.
@param index The position at which the value will be written.
@param value The short value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putShortLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putShortBigEndian(int index, short value) {
if (LITTLE_ENDIAN) {
putShort(index, Short.reverseBytes(value));
} else {
putShort(index, value);
}
}
|
Writes the given short integer value (16 bit, 2 bytes) to the given position in big-endian
byte order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #putShort(int, short)}. For most cases (such as transient storage in
memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#putShort(int, short)} is the preferable choice.
@param index The position at which the value will be written.
@param value The short value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 2.
|
putShortBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
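The little/big-endian put variants above reduce to a conditional `Short.reverseBytes` before a native-order write. A standalone sketch of that pattern (not the MemorySegment implementation itself — it uses a plain `byte[]` and `ByteBuffer` in place of the segment's Unsafe-backed store) shows why the resulting byte layout is fixed on every platform:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ShortEndianDemo {
    static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;

    // Same shape as putShortLittleEndian: swap bytes if the native order
    // is big-endian, then write in native order.
    static void putShortLittleEndian(byte[] seg, int index, short value) {
        short v = LITTLE_ENDIAN ? value : Short.reverseBytes(value);
        ByteBuffer.wrap(seg).order(ByteOrder.nativeOrder()).putShort(index, v);
    }

    // Same shape as getShortLittleEndian: native-order read, then
    // conditional swap.
    static short getShortLittleEndian(byte[] seg, int index) {
        short v = ByteBuffer.wrap(seg).order(ByteOrder.nativeOrder()).getShort(index);
        return LITTLE_ENDIAN ? v : Short.reverseBytes(v);
    }

    public static void main(String[] args) {
        byte[] seg = new byte[8];
        putShortLittleEndian(seg, 0, (short) 0x1234);
        // On any platform the low byte lands first:
        System.out.println(seg[0] == 0x34 && seg[1] == 0x12);         // true
        System.out.println(getShortLittleEndian(seg, 0) == 0x1234);   // true
    }
}
```

Either branch applies the swap at most once on write and once on read, so the write/read pair always round-trips.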
public int getInt(int index) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 4) {
return UNSAFE.getInt(heapMemory, pos);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Reads an int value (32bit, 4 bytes) from the given position, in the system's native byte
order. This method offers the best speed for integer reading and should be used unless a
specific byte order is required. In most cases, it suffices to know that the byte order in
which the value is written is the same as the one in which it is read (such as transient
storage in memory, or serialization for I/O and network), making this method the preferable
choice.
@param index The position from which the value will be read.
@return The int value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getInt
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public int getIntLittleEndian(int index) {
if (LITTLE_ENDIAN) {
return getInt(index);
} else {
return Integer.reverseBytes(getInt(index));
}
}
|
Reads an int value (32bit, 4 bytes) from the given position, in little-endian byte order.
This method's speed depends on the system's native byte order, and it is possibly slower than
{@link #getInt(int)}. For most cases (such as transient storage in memory or serialization
for I/O and network), it suffices to know that the byte order in which the value is written
is the same as the one in which it is read, and {@link #getInt(int)} is the preferable
choice.
@param index The position from which the value will be read.
@return The int value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getIntLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public int getIntBigEndian(int index) {
if (LITTLE_ENDIAN) {
return Integer.reverseBytes(getInt(index));
} else {
return getInt(index);
}
}
|
Reads an int value (32bit, 4 bytes) from the given position, in big-endian byte order. This
method's speed depends on the system's native byte order, and it is possibly slower than
{@link #getInt(int)}. For most cases (such as transient storage in memory or serialization
for I/O and network), it suffices to know that the byte order in which the value is written
is the same as the one in which it is read, and {@link #getInt(int)} is the preferable
choice.
@param index The position from which the value will be read.
@return The int value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getIntBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
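The int accessors follow the same conditional-swap scheme. As a sanity check (again on a plain `byte[]`, not the real segment), a `getIntBigEndian`-style read agrees with what `ByteBuffer` produces under an explicit `BIG_ENDIAN` order:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class IntEndianDemo {
    static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;

    // Same shape as getIntBigEndian: native-order read, swap on
    // little-endian hosts.
    static int getIntBigEndian(byte[] seg, int index) {
        int v = ByteBuffer.wrap(seg).order(ByteOrder.nativeOrder()).getInt(index);
        return LITTLE_ENDIAN ? Integer.reverseBytes(v) : v;
    }

    public static void main(String[] args) {
        byte[] seg = {(byte) 0xDE, (byte) 0xAD, (byte) 0xBE, (byte) 0xEF};
        // A big-endian read sees the most significant byte first,
        // independent of the host's native order.
        int expected = ByteBuffer.wrap(seg).order(ByteOrder.BIG_ENDIAN).getInt(0);
        System.out.println(getIntBigEndian(seg, 0) == expected);    // true
        System.out.println(getIntBigEndian(seg, 0) == 0xDEADBEEF);  // true
    }
}
```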
public void putInt(int index, int value) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 4) {
UNSAFE.putInt(heapMemory, pos, value);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Writes the given int value (32bit, 4 bytes) to the given position in the system's native byte
order. This method offers the best speed for integer writing and should be used unless a
specific byte order is required. In most cases, it suffices to know that the byte order in
which the value is written is the same as the one in which it is read (such as transient
storage in memory, or serialization for I/O and network), making this method the preferable
choice.
@param index The position at which the value will be written.
@param value The int value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putInt
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putIntLittleEndian(int index, int value) {
if (LITTLE_ENDIAN) {
putInt(index, value);
} else {
putInt(index, Integer.reverseBytes(value));
}
}
|
Writes the given int value (32bit, 4 bytes) to the given position in little endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #putInt(int, int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putInt(int, int)} is
the preferable choice.
@param index The position at which the value will be written.
@param value The int value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putIntLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putIntBigEndian(int index, int value) {
if (LITTLE_ENDIAN) {
putInt(index, Integer.reverseBytes(value));
} else {
putInt(index, value);
}
}
|
Writes the given int value (32bit, 4 bytes) to the given position in big endian byte order.
This method's speed depends on the system's native byte order, and it is possibly slower than
{@link #putInt(int, int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putInt(int, int)} is
the preferable choice.
@param index The position at which the value will be written.
@param value The int value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putIntBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
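All of the fixed-width accessors above share one guard: `index >= 0 && address + index <= addressLimit - n` for an `n`-byte access, with a freed segment detected separately via `address > addressLimit`. A simplified model of just the range check, using made-up addresses for illustration:

```java
public class BoundsCheckDemo {
    // Simplified model of MemorySegment's range check: 'address' is the
    // start of the segment, 'addressLimit' is address + segment size.
    // An n-byte access at 'index' is valid iff the last byte touched,
    // address + index + n - 1, still lies below addressLimit.
    static boolean inRange(long address, long addressLimit, int index, int n) {
        long pos = address + index;
        return index >= 0 && pos <= addressLimit - n;
    }

    public static void main(String[] args) {
        long address = 1000, limit = 1000 + 16;              // a 16-byte segment
        System.out.println(inRange(address, limit, 0, 4));   // true: bytes 0..3
        System.out.println(inRange(address, limit, 12, 4));  // true: bytes 12..15
        System.out.println(inRange(address, limit, 13, 4));  // false: byte 16 is out
        System.out.println(inRange(address, limit, -1, 4));  // false: negative index
    }
}
```

The single comparison against `addressLimit - n` covers the entire upper bound for the whole access, which is why the real code can fall through to one `IndexOutOfBoundsException` branch.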
public long getLong(int index) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 8) {
return UNSAFE.getLong(heapMemory, pos);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Reads a long value (64bit, 8 bytes) from the given position, in the system's native byte
order. This method offers the best speed for long integer reading and should be used unless a
specific byte order is required. In most cases, it suffices to know that the byte order in
which the value is written is the same as the one in which it is read (such as transient
storage in memory, or serialization for I/O and network), making this method the preferable
choice.
@param index The position from which the value will be read.
@return The long value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getLong
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public long getLongLittleEndian(int index) {
if (LITTLE_ENDIAN) {
return getLong(index);
} else {
return Long.reverseBytes(getLong(index));
}
}
|
Reads a long integer value (64bit, 8 bytes) from the given position, in little endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #getLong(int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #getLong(int)} is the
preferable choice.
@param index The position from which the value will be read.
@return The long value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getLongLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public long getLongBigEndian(int index) {
if (LITTLE_ENDIAN) {
return Long.reverseBytes(getLong(index));
} else {
return getLong(index);
}
}
|
Reads a long integer value (64bit, 8 bytes) from the given position, in big endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #getLong(int)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #getLong(int)} is the
preferable choice.
@param index The position from which the value will be read.
@return The long value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getLongBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putLong(int index, long value) {
final long pos = address + index;
if (index >= 0 && pos <= addressLimit - 8) {
UNSAFE.putLong(heapMemory, pos, value);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
// index is in fact invalid
throw new IndexOutOfBoundsException();
}
}
|
Writes the given long value (64bit, 8 bytes) to the given position in the system's native
byte order. This method offers the best speed for long integer writing and should be used
unless a specific byte order is required. In most cases, it suffices to know that the byte
order in which the value is written is the same as the one in which it is read (such as
transient storage in memory, or serialization for I/O and network), making this method the
preferable choice.
@param index The position at which the value will be written.
@param value The long value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
putLong
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putLongLittleEndian(int index, long value) {
if (LITTLE_ENDIAN) {
putLong(index, value);
} else {
putLong(index, Long.reverseBytes(value));
}
}
|
Writes the given long value (64bit, 8 bytes) to the given position in little endian byte
order. This method's speed depends on the system's native byte order, and it is possibly
slower than {@link #putLong(int, long)}. For most cases (such as transient storage in memory
or serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putLong(int, long)}
is the preferable choice.
@param index The position at which the value will be written.
@param value The long value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
putLongLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putLongBigEndian(int index, long value) {
if (LITTLE_ENDIAN) {
putLong(index, Long.reverseBytes(value));
} else {
putLong(index, value);
}
}
|
Writes the given long value (64bit, 8 bytes) to the given position in big endian byte order.
This method's speed depends on the system's native byte order, and it is possibly slower than
{@link #putLong(int, long)}. For most cases (such as transient storage in memory or
serialization for I/O and network), it suffices to know that the byte order in which the
value is written is the same as the one in which it is read, and {@link #putLong(int, long)}
is the preferable choice.
@param index The position at which the value will be written.
@param value The long value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
putLongBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
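The long accessors scale the same idea up to 8 bytes via `Long.reverseBytes`. A sketch on a plain `byte[]` (standing in for the Unsafe-backed segment) of the `putLongBigEndian`/`getLongBigEndian` pair:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LongEndianDemo {
    static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;

    static void putLongBigEndian(byte[] seg, int index, long value) {
        long v = LITTLE_ENDIAN ? Long.reverseBytes(value) : value;
        ByteBuffer.wrap(seg).order(ByteOrder.nativeOrder()).putLong(index, v);
    }

    static long getLongBigEndian(byte[] seg, int index) {
        long v = ByteBuffer.wrap(seg).order(ByteOrder.nativeOrder()).getLong(index);
        return LITTLE_ENDIAN ? Long.reverseBytes(v) : v;
    }

    public static void main(String[] args) {
        byte[] seg = new byte[8];
        putLongBigEndian(seg, 0, 0x0102030405060708L);
        // Big-endian layout: most significant byte first, on any platform.
        System.out.println(seg[0] == 0x01 && seg[7] == 0x08);                 // true
        System.out.println(getLongBigEndian(seg, 0) == 0x0102030405060708L);  // true
        // reverseBytes is an involution, so swap-on-write cancels swap-on-read.
        System.out.println(Long.reverseBytes(Long.reverseBytes(-1L)) == -1L); // true
    }
}
```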
public float getFloat(int index) {
return Float.intBitsToFloat(getInt(index));
}
|
Reads a single-precision floating point value (32bit, 4 bytes) from the given position, in
the system's native byte order. This method offers the best speed for float reading and
should be used unless a specific byte order is required. In most cases, it suffices to know
that the byte order in which the value is written is the same as the one in which it is read
(such as transient storage in memory, or serialization for I/O and network), making this
method the preferable choice.
@param index The position from which the value will be read.
@return The float value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getFloat
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public float getFloatLittleEndian(int index) {
return Float.intBitsToFloat(getIntLittleEndian(index));
}
|
Reads a single-precision floating point value (32bit, 4 bytes) from the given position, in
little endian byte order. This method's speed depends on the system's native byte order, and
it is possibly slower than {@link #getFloat(int)}. For most cases (such as transient storage
in memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#getFloat(int)} is the preferable choice.
@param index The position from which the value will be read.
@return The float value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getFloatLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public float getFloatBigEndian(int index) {
return Float.intBitsToFloat(getIntBigEndian(index));
}
|
Reads a single-precision floating point value (32bit, 4 bytes) from the given position, in
big endian byte order. This method's speed depends on the system's native byte order, and it
is possibly slower than {@link #getFloat(int)}. For most cases (such as transient storage in
memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#getFloat(int)} is the preferable choice.
@param index The position from which the value will be read.
@return The float value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
getFloatBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putFloat(int index, float value) {
putInt(index, Float.floatToRawIntBits(value));
}
|
Writes the given single-precision float value (32bit, 4 bytes) to the given position in the
system's native byte order. This method offers the best speed for float writing and should be
used unless a specific byte order is required. In most cases, it suffices to know that the
byte order in which the value is written is the same as the one in which it is read (such as
transient storage in memory, or serialization for I/O and network), making this method the
preferable choice.
@param index The position at which the value will be written.
@param value The float value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putFloat
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putFloatLittleEndian(int index, float value) {
putIntLittleEndian(index, Float.floatToRawIntBits(value));
}
|
Writes the given single-precision float value (32bit, 4 bytes) to the given position in
little endian byte order. This method's speed depends on the system's native byte order, and
it is possibly slower than {@link #putFloat(int, float)}. For most cases (such as transient
storage in memory or serialization for I/O and network), it suffices to know that the byte
order in which the value is written is the same as the one in which it is read, and {@link
#putFloat(int, float)} is the preferable choice.
@param index The position at which the value will be written.
@param value The float value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putFloatLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putFloatBigEndian(int index, float value) {
putIntBigEndian(index, Float.floatToRawIntBits(value));
}
|
Writes the given single-precision float value (32bit, 4 bytes) to the given position in big
endian byte order. This method's speed depends on the system's native byte order, and it is
possibly slower than {@link #putFloat(int, float)}. For most cases (such as transient storage
in memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#putFloat(int, float)} is the preferable choice.
@param index The position at which the value will be written.
@param value The float value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 4.
|
putFloatBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public double getDouble(int index) {
return Double.longBitsToDouble(getLong(index));
}
|
Reads a double-precision floating point value (64bit, 8 bytes) from the given position, in
the system's native byte order. This method offers the best speed for double reading and
should be used unless a specific byte order is required. In most cases, it suffices to know
that the byte order in which the value is written is the same as the one in which it is read
(such as transient storage in memory, or serialization for I/O and network), making this
method the preferable choice.
@param index The position from which the value will be read.
@return The double value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getDouble
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
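The float and double accessors above add no byte shuffling of their own: they funnel the raw IEEE-754 bit pattern through the int and long accessors. A short plain-Java illustration (no segment involved) of why that round trip is exact:

```java
public class FloatBitsDemo {
    public static void main(String[] args) {
        // putFloat stores floatToRawIntBits(value) via putInt; getFloat
        // applies intBitsToFloat to getInt. The bit pattern is preserved
        // verbatim, so the value survives unchanged.
        float f = 1.5f;
        int bits = Float.floatToRawIntBits(f);            // what putFloat stores
        System.out.println(Integer.toHexString(bits));    // 3fc00000
        System.out.println(Float.intBitsToFloat(bits) == f);      // true

        // getDouble/putDouble do the same through the long accessors.
        double d = -0.1;
        long lbits = Double.doubleToRawLongBits(d);
        System.out.println(Double.longBitsToDouble(lbits) == d);  // true
    }
}
```

Because the conversion is bit-for-bit, the endian-specific float/double variants inherit their byte-order guarantees entirely from the underlying int/long variants.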