| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
public double getDoubleLittleEndian(int index) {
return Double.longBitsToDouble(getLongLittleEndian(index));
}
|
Reads a double-precision floating point value (64bit, 8 bytes) from the given position, in
little endian byte order. This method's speed depends on the system's native byte order, and
it is possibly slower than {@link #getDouble(int)}. For most cases (such as transient storage
in memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#getDouble(int)} is the preferable choice.
@param index The position from which the value will be read.
@return The double value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getDoubleLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public double getDoubleBigEndian(int index) {
return Double.longBitsToDouble(getLongBigEndian(index));
}
|
Reads a double-precision floating point value (64bit, 8 bytes) from the given position, in
big endian byte order. This method's speed depends on the system's native byte order, and it
is possibly slower than {@link #getDouble(int)}. For most cases (such as transient storage in
memory or serialization for I/O and network), it suffices to know that the byte order in
which the value is written is the same as the one in which it is read, and {@link
#getDouble(int)} is the preferable choice.
@param index The position from which the value will be read.
@return The double value at the given position.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
getDoubleBigEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putDouble(int index, double value) {
putLong(index, Double.doubleToRawLongBits(value));
}
|
Writes the given double-precision floating-point value (64bit, 8 bytes) to the given position
in the system's native byte order. This method offers the best speed for double writing and
should be used unless a specific byte order is required. In most cases, it suffices to know
that the byte order in which the value is written is the same as the one in which it is read
(such as transient storage in memory, or serialization for I/O and network), making this
method the preferable choice.
@param index The position at which the memory will be written.
@param value The double value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
putDouble
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void putDoubleLittleEndian(int index, double value) {
putLongLittleEndian(index, Double.doubleToRawLongBits(value));
}
|
Writes the given double-precision floating-point value (64bit, 8 bytes) to the given position
in little endian byte order. This method's speed depends on the system's native byte order,
and it is possibly slower than {@link #putDouble(int, double)}. For most cases (such as
transient storage in memory or serialization for I/O and network), it suffices to know that
the byte order in which the value is written is the same as the one in which it is read, and
{@link #putDouble(int, double)} is the preferable choice.
@param index The position at which the value will be written.
@param value The double value to be written.
@throws IndexOutOfBoundsException Thrown, if the index is negative, or larger than the
segment size minus 8.
|
putDoubleLittleEndian
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
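The four byte-order variants above are symmetric: a value written with a given order must be read back with the same order. A minimal usage sketch (not part of the Flink sources), assuming a segment obtained from MemorySegmentFactory.allocateUnpooledSegment as shown further down in this file:

MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(16);
segment.putDoubleLittleEndian(0, 3.14159);
double a = segment.getDoubleLittleEndian(0); // 3.14159 on any platform, since write and read use the same order
segment.putDouble(8, 2.71828);               // native order: fastest, pair with getDouble(int)
double b = segment.getDouble(8);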
public void put(DataInput in, int offset, int length) throws IOException {
if (address <= addressLimit) {
if (heapMemory != null) {
in.readFully(heapMemory, offset, length);
} else {
while (length >= 8) {
putLongBigEndian(offset, in.readLong());
offset += 8;
length -= 8;
}
while (length > 0) {
put(offset, in.readByte());
offset++;
length--;
}
}
} else {
throw new IllegalStateException("segment has been freed");
}
}
|
Bulk put method. Copies {@code length} bytes from the given DataInput into this memory segment, starting at position {@code offset}.
@param in The DataInput to get the data from.
@param offset The position in the memory segment to copy the chunk to.
@param length The number of bytes to get.
@throws IOException Thrown, if the DataInput encountered a problem upon reading, such as an
End-Of-File.
|
put
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
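A hedged usage sketch for the bulk put from a DataInput; the byte array and stream setup are illustrative, not from the sources:

byte[] raw = new byte[64];                       // stand-in for data arriving from disk or the network
DataInput in = new DataInputStream(new ByteArrayInputStream(raw));
MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(64);
segment.put(in, 0, 16);                          // copies 16 bytes into positions [0, 16); propagates IOException/EOF from the input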
public void get(int offset, ByteBuffer target, int numBytes) {
// check the byte array offset and length
if ((offset | numBytes | (offset + numBytes)) < 0) {
throw new IndexOutOfBoundsException();
}
if (target.isReadOnly()) {
throw new ReadOnlyBufferException();
}
final int targetOffset = target.position();
final int remaining = target.remaining();
if (remaining < numBytes) {
throw new BufferOverflowException();
}
if (target.isDirect()) {
// copy to the target memory directly
final long targetPointer = getByteBufferAddress(target) + targetOffset;
final long sourcePointer = address + offset;
if (sourcePointer <= addressLimit - numBytes) {
UNSAFE.copyMemory(heapMemory, sourcePointer, null, targetPointer, numBytes);
target.position(targetOffset + numBytes);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
throw new IndexOutOfBoundsException();
}
} else if (target.hasArray()) {
// move directly into the byte array
get(offset, target.array(), targetOffset + target.arrayOffset(), numBytes);
// this must be after the get() call to ensure that the byte buffer is not
// modified in case the call fails
target.position(targetOffset + numBytes);
} else {
// other types of byte buffers
throw new IllegalArgumentException(
"The target buffer is not direct, and has no array.");
}
}
|
Bulk get method. Copies {@code numBytes} bytes from this memory segment, starting at position
{@code offset} to the target {@code ByteBuffer}. The bytes will be put into the target buffer
starting at the buffer's current position. If this method attempts to write more bytes than
the target byte buffer has remaining (with respect to {@link ByteBuffer#remaining()}), this
method will cause a {@link java.nio.BufferOverflowException}.
@param offset The position where the bytes are started to be read from in this memory
segment.
@param target The ByteBuffer to copy the bytes to.
@param numBytes The number of bytes to copy.
@throws IndexOutOfBoundsException If the offset is invalid, or this segment does not contain
the given number of bytes (starting from offset), or the target byte buffer does not have
enough space for the bytes.
@throws ReadOnlyBufferException If the target buffer is read-only.
|
get
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void put(int offset, ByteBuffer source, int numBytes) {
// check the byte array offset and length
if ((offset | numBytes | (offset + numBytes)) < 0) {
throw new IndexOutOfBoundsException();
}
final int sourceOffset = source.position();
final int remaining = source.remaining();
if (remaining < numBytes) {
throw new BufferUnderflowException();
}
if (source.isDirect()) {
// copy to the target memory directly
final long sourcePointer = getByteBufferAddress(source) + sourceOffset;
final long targetPointer = address + offset;
if (targetPointer <= addressLimit - numBytes) {
UNSAFE.copyMemory(null, sourcePointer, heapMemory, targetPointer, numBytes);
source.position(sourceOffset + numBytes);
} else if (address > addressLimit) {
throw new IllegalStateException("segment has been freed");
} else {
throw new IndexOutOfBoundsException();
}
} else if (source.hasArray()) {
// move directly into the byte array
put(offset, source.array(), sourceOffset + source.arrayOffset(), numBytes);
// this must be after the put() call to ensure that the byte buffer is not
// modified in case the call fails
source.position(sourceOffset + numBytes);
} else {
// other types of byte buffers
for (int i = 0; i < numBytes; i++) {
put(offset++, source.get());
}
}
}
|
Bulk put method. Copies {@code numBytes} bytes from the given {@code ByteBuffer}, into this
memory segment. The bytes will be read from the target buffer starting at the buffer's
current position, and will be written to this memory segment starting at {@code offset}. If
this method attempts to read more bytes than the target byte buffer has remaining (with
respect to {@link ByteBuffer#remaining()}), this method will cause a {@link
java.nio.BufferUnderflowException}.
@param offset The position where the bytes are started to be written to in this memory
segment.
@param source The ByteBuffer to copy the bytes from.
@param numBytes The number of bytes to copy.
@throws IndexOutOfBoundsException If the offset is invalid, or the source buffer does not
contain the given number of bytes, or this segment does not have enough space for the
bytes (counting from offset).
|
put
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
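A usage sketch for the two ByteBuffer bulk methods above (assumed; the sizes are arbitrary). Both directions honor the buffer's current position and advance it by the number of bytes copied:

MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(128);
ByteBuffer direct = ByteBuffer.allocateDirect(32);
segment.get(0, direct, 32);    // segment -> buffer; buffer position advances to 32
direct.flip();                 // position 0, limit 32
segment.put(64, direct, 32);   // buffer -> segment; written to positions [64, 96)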
public void copyTo(int offset, MemorySegment target, int targetOffset, int numBytes) {
final byte[] thisHeapRef = this.heapMemory;
final byte[] otherHeapRef = target.heapMemory;
final long thisPointer = this.address + offset;
final long otherPointer = target.address + targetOffset;
if ((numBytes | offset | targetOffset) >= 0
&& thisPointer <= this.addressLimit - numBytes
&& otherPointer <= target.addressLimit - numBytes) {
UNSAFE.copyMemory(thisHeapRef, thisPointer, otherHeapRef, otherPointer, numBytes);
} else if (this.address > this.addressLimit) {
throw new IllegalStateException("this memory segment has been freed.");
} else if (target.address > target.addressLimit) {
throw new IllegalStateException("target memory segment has been freed.");
} else {
throw new IndexOutOfBoundsException(
String.format(
"offset=%d, targetOffset=%d, numBytes=%d, address=%d, targetAddress=%d",
offset, targetOffset, numBytes, this.address, target.address));
}
}
|
Bulk copy method. Copies {@code numBytes} bytes from this memory segment, starting at
position {@code offset} to the target memory segment. The bytes will be put into the target
segment starting at position {@code targetOffset}.
@param offset The position where the bytes are started to be read from in this memory
segment.
@param target The memory segment to copy the bytes to.
@param targetOffset The position in the target memory segment to copy the chunk to.
@param numBytes The number of bytes to copy.
@throws IndexOutOfBoundsException If either of the offsets is invalid, or the source segment
does not contain the given number of bytes (starting from offset), or the target segment
does not have enough space for the bytes (counting from targetOffset).
|
copyTo
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
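A brief, assumed usage example of copyTo (segment sizes and offsets are illustrative):

MemorySegment source = MemorySegmentFactory.allocateUnpooledSegment(256);
MemorySegment target = MemorySegmentFactory.allocateUnpooledSegment(256);
source.copyTo(0, target, 128, 100);   // bytes [0, 100) of source land at [128, 228) of target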
public void copyToUnsafe(int offset, Object target, int targetPointer, int numBytes) {
final long thisPointer = this.address + offset;
if (thisPointer + numBytes > addressLimit) {
throw new IndexOutOfBoundsException(
String.format(
"offset=%d, numBytes=%d, address=%d", offset, numBytes, this.address));
}
UNSAFE.copyMemory(this.heapMemory, thisPointer, target, targetPointer, numBytes);
}
|
Bulk copy method. Copies {@code numBytes} bytes to target unsafe object and pointer. NOTE:
This is an unsafe method; the target is not checked, so use with care.
@param offset The position where the bytes are started to be read from in this memory
segment.
@param target The unsafe memory to copy the bytes to.
@param targetPointer The position in the target unsafe memory to copy the chunk to.
@param numBytes The number of bytes to copy.
@throws IndexOutOfBoundsException If the source segment does not contain the given number of
bytes (starting from offset).
|
copyToUnsafe
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void copyFromUnsafe(int offset, Object source, int sourcePointer, int numBytes) {
final long thisPointer = this.address + offset;
if (thisPointer + numBytes > addressLimit) {
throw new IndexOutOfBoundsException(
String.format(
"offset=%d, numBytes=%d, address=%d", offset, numBytes, this.address));
}
UNSAFE.copyMemory(source, sourcePointer, this.heapMemory, thisPointer, numBytes);
}
|
Bulk copy method. Copies {@code numBytes} bytes from source unsafe object and pointer. NOTE:
This is an unsafe method; the source is not checked, so use with care.
@param offset The position where the bytes are started to be written in this memory segment.
@param source The unsafe memory to copy the bytes from.
@param sourcePointer The position in the source unsafe memory to copy the chunk from.
@param numBytes The number of bytes to copy.
@throws IndexOutOfBoundsException If this segment can not contain the given number of bytes
(starting from offset).
|
copyFromUnsafe
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public int compare(MemorySegment seg2, int offset1, int offset2, int len) {
while (len >= 8) {
long l1 = this.getLongBigEndian(offset1);
long l2 = seg2.getLongBigEndian(offset2);
if (l1 != l2) {
return (l1 < l2) ^ (l1 < 0) ^ (l2 < 0) ? -1 : 1;
}
offset1 += 8;
offset2 += 8;
len -= 8;
}
while (len > 0) {
int b1 = this.get(offset1) & 0xff;
int b2 = seg2.get(offset2) & 0xff;
int cmp = b1 - b2;
if (cmp != 0) {
return cmp;
}
offset1++;
offset2++;
len--;
}
return 0;
}
|
Compares two memory segment regions.
@param seg2 Segment to compare this segment with
@param offset1 Offset of this segment to start comparing
@param offset2 Offset of seg2 to start comparing
@param len Length of the compared memory region
@return 0 if equal, -1 if seg1 < seg2, 1 otherwise
|
compare
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public int compare(MemorySegment seg2, int offset1, int offset2, int len1, int len2) {
final int minLength = Math.min(len1, len2);
int c = compare(seg2, offset1, offset2, minLength);
return c == 0 ? (len1 - len2) : c;
}
|
Compares two memory segment regions with different length.
@param seg2 Segment to compare this segment with
@param offset1 Offset of this segment to start comparing
@param offset2 Offset of seg2 to start comparing
@param len1 Length of this memory region to compare
@param len2 Length of seg2 to compare
@return 0 if equal, -1 if seg1 < seg2, 1 otherwise
|
compare
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
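A hedged example of the two compare variants; seg1 and seg2 stand for two previously allocated segments (not defined here). Bytes are compared as unsigned values, and the variable-length variant falls back to len1 - len2 when the common prefix is equal:

int cmp = seg1.compare(seg2, 0, 0, 16);            // equal-length regions
int cmpVarLen = seg1.compare(seg2, 0, 0, 12, 16);  // different lengths; negative here if the first 12 bytes match
if (cmp < 0) {
    // the 16-byte region in seg1 sorts before the one in seg2
}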
public void swapBytes(
byte[] tempBuffer, MemorySegment seg2, int offset1, int offset2, int len) {
if ((offset1 | offset2 | len | (tempBuffer.length - len)) >= 0) {
final long thisPos = this.address + offset1;
final long otherPos = seg2.address + offset2;
if (thisPos <= this.addressLimit - len && otherPos <= seg2.addressLimit - len) {
// this -> temp buffer
UNSAFE.copyMemory(
this.heapMemory, thisPos, tempBuffer, BYTE_ARRAY_BASE_OFFSET, len);
// other -> this
UNSAFE.copyMemory(seg2.heapMemory, otherPos, this.heapMemory, thisPos, len);
// temp buffer -> other
UNSAFE.copyMemory(
tempBuffer, BYTE_ARRAY_BASE_OFFSET, seg2.heapMemory, otherPos, len);
return;
} else if (this.address > this.addressLimit) {
throw new IllegalStateException("this memory segment has been freed.");
} else if (seg2.address > seg2.addressLimit) {
throw new IllegalStateException("other memory segment has been freed.");
}
}
// index is in fact invalid
throw new IndexOutOfBoundsException(
String.format(
"offset1=%d, offset2=%d, len=%d, bufferSize=%d, address1=%d, address2=%d",
offset1, offset2, len, tempBuffer.length, this.address, seg2.address));
}
|
Swaps bytes between two memory segments, using the given auxiliary buffer.
@param tempBuffer The auxiliary buffer in which to put data during triangle swap.
@param seg2 Segment to swap bytes with
@param offset1 Offset of this segment to start swapping
@param offset2 Offset of seg2 to start swapping
@param len Length of the swapped memory region
|
swapBytes
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public boolean equalTo(MemorySegment seg2, int offset1, int offset2, int length) {
int i = 0;
// we assume unaligned accesses are supported.
// Compare 8 bytes at a time.
while (i <= length - 8) {
if (getLong(offset1 + i) != seg2.getLong(offset2 + i)) {
return false;
}
i += 8;
}
// cover the last (length % 8) elements.
while (i < length) {
if (get(offset1 + i) != seg2.get(offset2 + i)) {
return false;
}
i += 1;
}
return true;
}
|
Checks whether two memory segment regions are equal.
@param seg2 Segment to compare this segment with
@param offset1 Offset of this segment to start comparing
@param offset2 Offset of seg2 to start comparing
@param length Length of the compared memory region
@return true if equal, false otherwise
|
equalTo
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
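A short, assumed example of equalTo, again with seg1 and seg2 standing for two existing segments:

boolean same = seg1.equalTo(seg2, 0, 0, 20);   // compares 8 bytes at a time, then byte-by-byte for the remainder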
public byte[] getHeapMemory() {
return heapMemory;
}
|
Get the heap byte array object.
@return Return non-null if the memory is on the heap, and null if the memory is off the heap.
|
getHeapMemory
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public <T> T processAsByteBuffer(Function<ByteBuffer, T> processFunction) {
return Preconditions.checkNotNull(processFunction).apply(wrapInternal(0, size));
}
|
Applies the given process function on a {@link ByteBuffer} that represents this entire
segment.
<p>Note: The {@link ByteBuffer} passed into the process function is temporary and could
become invalid after the processing. Thus, the process function should not try to keep any
reference of the {@link ByteBuffer}.
@param processFunction to be applied to the segment as {@link ByteBuffer}.
@return the value that the process function returns.
|
processAsByteBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
public void processAsByteBuffer(Consumer<ByteBuffer> processConsumer) {
Preconditions.checkNotNull(processConsumer).accept(wrapInternal(0, size));
}
|
Supplies a {@link ByteBuffer} that represents this entire segment to the given process
consumer.
<p>Note: The {@link ByteBuffer} passed into the process consumer is temporary and could
become invalid after the processing. Thus, the process consumer should not try to keep any
reference of the {@link ByteBuffer}.
@param processConsumer to accept the segment as {@link ByteBuffer}.
|
processAsByteBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegment.java
|
Apache-2.0
|
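A sketch of how the two processAsByteBuffer overloads might be called; `segment` stands for an existing MemorySegment, and the explicit Function/Consumer variables avoid overload ambiguity. The temporary buffer must not escape the callback, as noted above:

Function<ByteBuffer, Integer> readInt = buffer -> buffer.getInt(0);
int header = segment.processAsByteBuffer(readInt);              // function variant returns a value

Consumer<ByteBuffer> dump = buffer -> System.out.println(buffer.remaining());
segment.processAsByteBuffer(dump);                              // consumer variant, no return value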
public static MemorySegment wrap(byte[] buffer) {
return new MemorySegment(buffer, null);
}
|
Creates a new memory segment that targets the given heap memory region.
<p>This method should be used to turn short lived byte arrays into memory segments.
@param buffer The heap memory region.
@return A new memory segment that targets the given heap memory region.
|
wrap
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
public static MemorySegment wrapCopy(byte[] bytes, int start, int end)
throws IllegalArgumentException {
checkArgument(end >= start);
checkArgument(end <= bytes.length);
MemorySegment copy = allocateUnpooledSegment(end - start);
copy.put(0, bytes, start, copy.size());
return copy;
}
|
Copies the given heap memory region and creates a new memory segment wrapping it.
@param bytes The heap memory region.
@param start starting position, inclusive
@param end end position, exclusive
@return A new memory segment that targets a copy of the given heap memory region.
@throws IllegalArgumentException if start > end or end > bytes.length
|
wrapCopy
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
public static MemorySegment wrapInt(int value) {
return wrap(ByteBuffer.allocate(Integer.BYTES).putInt(value).array());
}
|
Wraps the four bytes representing the given number with a {@link MemorySegment}.
@see ByteBuffer#putInt(int)
|
wrapInt
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
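An assumed example of the three factory helpers above (array contents are illustrative):

byte[] data = {1, 2, 3, 4, 5, 6, 7, 8};
MemorySegment view = MemorySegmentFactory.wrap(data);             // shares the array, no copy
MemorySegment slice = MemorySegmentFactory.wrapCopy(data, 2, 6);  // copies bytes [2, 6) into a new 4-byte segment
MemorySegment intSeg = MemorySegmentFactory.wrapInt(42);          // 4 bytes, big-endian per ByteBuffer#putInt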
public static MemorySegment allocateUnpooledSegment(int size) {
return allocateUnpooledSegment(size, null);
}
|
Allocates some unpooled memory and creates a new memory segment that represents that memory.
<p>This method is similar to {@link #allocateUnpooledSegment(int, Object)}, but the memory
segment will have null as the owner.
@param size The size of the memory segment to allocate.
@return A new memory segment, backed by unpooled heap memory.
|
allocateUnpooledSegment
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
public static MemorySegment allocateUnpooledSegment(int size, Object owner) {
return new MemorySegment(new byte[size], owner);
}
|
Allocates some unpooled memory and creates a new memory segment that represents that memory.
<p>This method is similar to {@link #allocateUnpooledSegment(int)}, but additionally sets the
owner of the memory segment.
@param size The size of the memory segment to allocate.
@param owner The owner to associate with the memory segment.
@return A new memory segment, backed by unpooled heap memory.
|
allocateUnpooledSegment
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
public static MemorySegment allocateUnpooledOffHeapMemory(int size) {
return allocateUnpooledOffHeapMemory(size, null);
}
|
Allocates some unpooled off-heap memory and creates a new memory segment that represents that
memory.
@param size The size of the off-heap memory segment to allocate.
@return A new memory segment, backed by unpooled off-heap memory.
|
allocateUnpooledOffHeapMemory
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
public static MemorySegment wrapOffHeapMemory(ByteBuffer memory) {
return new MemorySegment(memory, null);
}
|
Creates a memory segment that wraps the off-heap memory backing the given ByteBuffer. Note
that the ByteBuffer needs to be a <i>direct ByteBuffer</i>.
<p>This method is intended to be used for components which pool memory and create memory
segments around long-lived memory regions.
@param memory The byte buffer with the off-heap memory to be represented by the memory
segment.
@return A new memory segment representing the given off-heap memory.
|
wrapOffHeapMemory
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java
|
Apache-2.0
|
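A short, assumed example of wrapping off-heap memory (the buffer must be direct, as noted above):

ByteBuffer direct = ByteBuffer.allocateDirect(1024);
MemorySegment offHeap = MemorySegmentFactory.wrapOffHeapMemory(direct);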
static long allocateUnsafe(long size) {
return UNSAFE.allocateMemory(Math.max(1L, size));
}
|
Allocates unsafe native memory.
@param size size of the unsafe memory to allocate.
@return address of the allocated unsafe memory
|
allocateUnsafe
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
Apache-2.0
|
static Runnable createMemoryCleaner(long address, Runnable customCleanup) {
return () -> {
releaseUnsafe(address);
customCleanup.run();
};
}
|
Creates a cleaner to release the unsafe memory.
@param address address of the unsafe memory to release
@param customCleanup A custom action to clean up GC
@return action to run to release the unsafe memory manually
|
createMemoryCleaner
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
Apache-2.0
|
static ByteBuffer wrapUnsafeMemoryWithByteBuffer(long address, int size) {
//noinspection OverlyBroadCatchBlock
try {
ByteBuffer buffer = (ByteBuffer) UNSAFE.allocateInstance(DIRECT_BYTE_BUFFER_CLASS);
UNSAFE.putLong(buffer, BUFFER_ADDRESS_FIELD_OFFSET, address);
UNSAFE.putInt(buffer, BUFFER_CAPACITY_FIELD_OFFSET, size);
buffer.clear();
return buffer;
} catch (Throwable t) {
throw new Error("Failed to wrap unsafe off-heap memory with ByteBuffer", t);
}
}
|
Wraps the unsafe native memory with a {@link ByteBuffer}.
@param address address of the unsafe memory to wrap
@param size size of the unsafe memory to wrap
@return a {@link ByteBuffer} which is a view of the given unsafe memory
|
wrapUnsafeMemoryWithByteBuffer
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
Apache-2.0
|
public static long getByteBufferAddress(ByteBuffer buffer) {
Preconditions.checkNotNull(buffer, "buffer is null");
Preconditions.checkArgument(
buffer.isDirect(), "Can't get address of a non-direct ByteBuffer.");
long offHeapAddress;
try {
offHeapAddress = UNSAFE.getLong(buffer, BUFFER_ADDRESS_FIELD_OFFSET);
} catch (Throwable t) {
throw new Error("Could not access direct byte buffer address field.", t);
}
Preconditions.checkState(offHeapAddress > 0, "negative pointer or size");
Preconditions.checkState(
offHeapAddress < Long.MAX_VALUE - Integer.MAX_VALUE,
"Segment initialized with too large address: %s ; Max allowed address is %d",
offHeapAddress,
(Long.MAX_VALUE - Integer.MAX_VALUE - 1));
return offHeapAddress;
}
|
Get native memory address wrapped by the given {@link ByteBuffer}.
@param buffer {@link ByteBuffer} which wraps the native memory address to get
@return native memory address wrapped by the given {@link ByteBuffer}
|
getByteBufferAddress
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/memory/MemoryUtils.java
|
Apache-2.0
|
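A minimal sketch of getByteBufferAddress; per the path above it lives in MemoryUtils, and a non-direct buffer is rejected by the precondition check:

ByteBuffer direct = ByteBuffer.allocateDirect(64);
long address = MemoryUtils.getByteBufferAddress(direct);   // native address backing the direct buffer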
default ClassLoader getClassLoader() {
return Preconditions.checkNotNull(
this.getClass().getClassLoader(),
"%s plugin with null class loader",
this.getClass().getName());
}
|
Helper method to get the class loader used to load the plugin. This may be needed for some
plugins that use dynamic class loading after the plugin was loaded.
@return the class loader used to load the plugin.
|
getClassLoader
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/plugin/Plugin.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/plugin/Plugin.java
|
Apache-2.0
|
public static void forceProcessExit(int exitCode) {
// Unset ourselves to allow exiting in any case.
System.setSecurityManager(null);
if (flinkSecurityManager != null && flinkSecurityManager.haltOnSystemExit) {
Runtime.getRuntime().halt(exitCode);
} else {
System.exit(exitCode);
}
}
|
Use this method to circumvent the configured {@link FlinkSecurityManager} behavior, ensuring
that the current JVM process will always stop via System.exit() or
Runtime.getRuntime().halt().
|
forceProcessExit
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/security/FlinkSecurityManager.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/security/FlinkSecurityManager.java
|
Apache-2.0
|
public void callbackRegistered() {
// does nothing by default.
}
|
Will be triggered when a callback is registered.
|
callbackRegistered
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureImpl.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureImpl.java
|
Apache-2.0
|
public void postComplete(boolean inCallbackRunner) {
// does nothing by default.
}
|
Will be triggered when this future completes.
|
postComplete
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureImpl.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureImpl.java
|
Apache-2.0
|
public static <V> StateFuture<V> completedVoidFuture() {
return new CompletedStateFuture<>(null);
}
|
Returns a completed future that does nothing and returns null.
|
completedVoidFuture
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
Apache-2.0
|
public static <V> StateFuture<V> completedFuture(V result) {
return new CompletedStateFuture<>(result);
}
|
Returns a completed future that does nothing and returns the provided result.
|
completedFuture
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
Apache-2.0
|
@SuppressWarnings("unchecked")
public static <T> StateFuture<Collection<T>> combineAll(
Collection<? extends StateFuture<? extends T>> futures) {
int count = futures.size();
if (count == 0) {
return new CompletedStateFuture<>(Collections.emptyList());
} else if (count == 1) {
StateFuture<? extends T> firstFuture = futures.stream().findFirst().get();
return firstFuture.thenCompose(
(t) -> StateFutureUtils.completedFuture(Collections.singletonList(t)));
}
// multiple futures
final T[] results = (T[]) new Object[count];
StateFutureImpl<? extends T> pendingFuture = null;
for (StateFuture<? extends T> future : futures) {
if (future instanceof StateFutureImpl) {
pendingFuture = (StateFutureImpl<? extends T>) future;
break;
}
}
if (pendingFuture == null) {
int i = 0;
for (StateFuture<? extends T> future : futures) {
final int index = i;
((InternalStateFuture<? extends T>) future)
.thenSyncAccept(
(t) -> {
results[index] = t;
});
i++;
}
return new CompletedStateFuture<>(Arrays.asList(results));
} else {
int i = 0;
AtomicInteger countDown = new AtomicInteger(count);
StateFutureImpl<Collection<T>> ret = pendingFuture.makeNewStateFuture();
for (StateFuture<? extends T> future : futures) {
final int index = i;
((InternalStateFuture<? extends T>) future)
.thenSyncAccept(
(t) -> {
results[index] = t;
if (countDown.decrementAndGet() == 0) {
ret.complete(Arrays.asList(results));
}
});
i++;
}
return ret;
}
}
|
Creates a future that is complete once multiple other futures completed. Upon successful
completion, the future returns the collection of the futures' results.
@param futures The futures that make up the conjunction. No null entries are allowed,
otherwise an IllegalArgumentException will be thrown.
@return The StateFuture that completes once all given futures are complete.
|
combineAll
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
Apache-2.0
|
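A hedged usage sketch of combineAll with already-completed futures. thenApply is used elsewhere in this file (see toIterable below), but the full StateFuture interface is not shown here, so treat this as illustrative only:

StateFuture<Integer> a = StateFutureUtils.completedFuture(1);
StateFuture<Integer> b = StateFutureUtils.completedFuture(2);
StateFuture<Collection<Integer>> all = StateFutureUtils.combineAll(Arrays.asList(a, b));
StateFuture<Integer> sum = all.thenApply(values -> values.stream().mapToInt(Integer::intValue).sum());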
public static <T> StateFuture<Iterable<T>> toIterable(StateFuture<StateIterator<T>> future) {
return future.thenCompose(
iterator -> {
if (iterator == null) {
return StateFutureUtils.completedFuture(Collections.emptyList());
}
InternalStateIterator<T> theIterator = ((InternalStateIterator<T>) iterator);
if (!theIterator.hasNextLoading()) {
return StateFutureUtils.completedFuture(theIterator.getCurrentCache());
} else {
final ArrayList<T> result = new ArrayList<>();
return theIterator
.onNext(
next -> {
result.add(next);
})
.thenApply(ignored -> result);
}
});
}
|
Converts a future of a state iterator into a future of an iterable. Doing so is generally discouraged, since it may disable lazy loading; it is only useful when the subsequent computation depends on all of the data from the iterator.
|
toIterable
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/state/StateFutureUtils.java
|
Apache-2.0
|
public static Optional<JMXServer> getInstance() {
return Optional.ofNullable(jmxServer);
}
|
Acquire the global singleton JMXServer instance.
|
getInstance
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/management/jmx/JMXService.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/management/jmx/JMXService.java
|
Apache-2.0
|
public static synchronized void startInstance(String portsConfig) {
if (jmxServer == null) {
if (portsConfig != null) {
Iterator<Integer> ports = NetUtils.getPortRangeFromString(portsConfig);
if (ports.hasNext()) {
jmxServer = startJMXServerWithPortRanges(ports);
}
if (jmxServer == null) {
LOG.error(
"Could not start JMX server on any configured port(s) in: "
+ portsConfig);
}
}
} else {
LOG.warn("JVM-wide JMXServer already started at port: " + jmxServer.getPort());
}
}
|
Start the JVM-wide singleton JMX server.
<p>If the JMXServer static instance is already started, it will not be started again. Instead, a warning is logged indicating which port the existing JMXServer static instance is exposing.
@param portsConfig port configuration of the JMX server.
|
startInstance
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/management/jmx/JMXService.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/management/jmx/JMXService.java
|
Apache-2.0
|
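An assumed usage sketch; the port range string is illustrative, and getPort() is inferred from the log statement in startInstance above:

JMXService.startInstance("9010-9025");
Optional<JMXServer> server = JMXService.getInstance();
server.ifPresent(s -> System.out.println("JMX server port: " + s.getPort()));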
public byte getValue() {
return this.value;
}
|
Returns the value of the encapsulated byte.
@return the value of the encapsulated byte.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/ByteValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/ByteValue.java
|
Apache-2.0
|
public char getValue() {
return this.value;
}
|
Returns the value of the encapsulated char.
@return the value of the encapsulated char.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/CharValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/CharValue.java
|
Apache-2.0
|
public final boolean isLeft() {
return getClass() == Left.class;
}
|
@return true if this is a Left value, false if this is a Right value
|
isLeft
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
public final boolean isRight() {
return getClass() == Right.class;
}
|
@return true if this is a Right value, false if this is a Left value
|
isRight
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
public static <L, R> Left<L, R> of(L left) {
return new Left<L, R>(left);
}
|
Creates a left value of {@link Either}
|
of
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
public static <L, R> Right<L, R> of(R right) {
return new Right<L, R>(right);
}
|
Creates a right value of {@link Either}
|
of
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
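An assumed example of constructing and inspecting Either values; Left and Right are taken to be the nested subclasses in Either.java (per the path above):

Either<String, Integer> failure = Either.Left.of("not found");
Either<String, Integer> success = Either.Right.of(200);
if (failure.isLeft()) {
    // handle the Left (e.g. error) case
}
if (success.isRight()) {
    // handle the Right (e.g. result) case
}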
@Internal
public static <L, R> Left<L, R> obtainLeft(
Either<L, R> input, TypeSerializer<L> leftSerializer) {
if (input.isLeft()) {
return (Left<L, R>) input;
} else {
Right<L, R> right = (Right<L, R>) input;
if (right.left == null) {
right.left = Left.of(leftSerializer.createInstance());
right.left.right = right;
}
return right.left;
}
}
|
Utility function for {@link EitherSerializer} to support object reuse.
<p>To support object reuse both subclasses of Either contain a reference to an instance of
the other type. This method provides access to and initializes the cross-reference.
@param input container for Left or Right value
@param leftSerializer for creating an instance of the left type
@param <L> the type of Left
@param <R> the type of Right
@return input if Left type else input's Left reference
|
obtainLeft
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
@Internal
public static <L, R> Right<L, R> obtainRight(
Either<L, R> input, TypeSerializer<R> rightSerializer) {
if (input.isRight()) {
return (Right<L, R>) input;
} else {
Left<L, R> left = (Left<L, R>) input;
if (left.right == null) {
left.right = Right.of(rightSerializer.createInstance());
left.right.left = left;
}
return left.right;
}
}
|
Utility function for {@link EitherSerializer} to support object reuse.
<p>To support object reuse both subclasses of Either contain a reference to an instance of
the other type. This method provides access to and initializes the cross-reference.
@param input container for Left or Right value
@param rightSerializer for creating an instance of the right type
@param <L> the type of Left
@param <R> the type of Right
@return input if Right type else input's Right reference
|
obtainRight
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Either.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Either.java
|
Apache-2.0
|
public int getValue() {
return this.value;
}
|
Returns the value of the encapsulated int.
@return the value of the encapsulated int.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/IntValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/IntValue.java
|
Apache-2.0
|
public long getValue() {
return this.value;
}
|
Returns the value of the encapsulated long.
@return The value of the encapsulated long.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/LongValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/LongValue.java
|
Apache-2.0
|
public static NullValue getInstance() {
return INSTANCE;
}
|
Returns the NullValue singleton instance.
@return The NullValue singleton instance.
|
getInstance
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/NullValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/NullValue.java
|
Apache-2.0
|
public int getNumFields() {
return this.numFields;
}
|
Gets the number of fields currently in the record. This also includes null fields.
@return The number of fields in the record.
|
getNumFields
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void setNumFields(final int numFields) {
final int oldNumFields = this.numFields;
// check whether we increase or decrease the fields
if (numFields > oldNumFields) {
makeSpace(numFields);
for (int i = oldNumFields; i < numFields; i++) {
this.offsets[i] = NULL_INDICATOR_OFFSET;
}
markModified(oldNumFields);
} else {
// decrease the number of fields
// we do not remove the values from the cache, as the objects (if they are there) will
// most likely
// be reused when the record is re-filled
markModified(numFields);
}
this.numFields = numFields;
}
|
Sets the number of fields in the record. If the new number of fields is larger than the current number of fields, then null fields are appended. If the new number of fields is smaller than the current number of fields, then the last fields are truncated.
@param numFields The new number of fields.
|
setNumFields
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void makeSpace(int numFields) {
final int oldNumFields = this.numFields;
// increase the number of fields in the arrays
if (this.offsets == null) {
this.offsets = new int[numFields];
} else if (this.offsets.length < numFields) {
int[] newOffs = new int[Math.max(numFields + 1, oldNumFields << 1)];
System.arraycopy(this.offsets, 0, newOffs, 0, oldNumFields);
this.offsets = newOffs;
}
if (this.lengths == null) {
this.lengths = new int[numFields];
} else if (this.lengths.length < numFields) {
int[] newLens = new int[Math.max(numFields + 1, oldNumFields << 1)];
System.arraycopy(this.lengths, 0, newLens, 0, oldNumFields);
this.lengths = newLens;
}
if (this.readFields == null) {
this.readFields = new Value[numFields];
} else if (this.readFields.length < numFields) {
Value[] newFields = new Value[Math.max(numFields + 1, oldNumFields << 1)];
System.arraycopy(this.readFields, 0, newFields, 0, oldNumFields);
this.readFields = newFields;
}
if (this.writeFields == null) {
this.writeFields = new Value[numFields];
} else if (this.writeFields.length < numFields) {
Value[] newFields = new Value[Math.max(numFields + 1, oldNumFields << 1)];
System.arraycopy(this.writeFields, 0, newFields, 0, oldNumFields);
this.writeFields = newFields;
}
}
|
Reserves space for at least the given number of fields in the internal arrays.
@param numFields The number of fields to reserve space for.
|
makeSpace
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
@SuppressWarnings("unchecked")
public <T extends Value> T getField(final int fieldNum, final Class<T> type) {
// range check
if (fieldNum < 0 || fieldNum >= this.numFields) {
throw new IndexOutOfBoundsException(
fieldNum + " for range [0.." + (this.numFields - 1) + "]");
}
// get offset and check for null
final int offset = this.offsets[fieldNum];
if (offset == NULL_INDICATOR_OFFSET) {
return null;
} else if (offset == MODIFIED_INDICATOR_OFFSET) {
// value that has been set is new or modified
return (T) this.writeFields[fieldNum];
}
final int limit = offset + this.lengths[fieldNum];
// get an instance, either from the instance cache or create a new one
final Value oldField = this.readFields[fieldNum];
final T field;
if (oldField != null && oldField.getClass() == type) {
field = (T) oldField;
} else {
field = InstantiationUtil.instantiate(type, Value.class);
this.readFields[fieldNum] = field;
}
// deserialize
deserialize(field, offset, limit, fieldNum);
return field;
}
|
Gets the field at the given position from the record. This method checks internally, if this
instance of the record has previously returned a value for this field. If so, it reuses the
object, if not, it creates one from the supplied class.
@param <T> The type of the field.
@param fieldNum The logical position of the field.
@param type The type of the field as a class. This class is used to instantiate a value
object, if none had previously been instantiated.
@return The field at the given position, or null, if the field was null.
@throws IndexOutOfBoundsException Thrown, if the field number is negative or larger or equal
to the number of fields in this record.
|
getField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
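A minimal sketch of the class-based getField variant; `record` stands for a Record populated elsewhere, and IntValue (documented further up in this file) supplies getValue():

IntValue first = record.getField(0, IntValue.class);   // reuses a cached instance when possible
if (first != null) {                                    // null means the field itself was null
    int v = first.getValue();
}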
@SuppressWarnings("unchecked")
public <T extends Value> T getField(int fieldNum, T target) {
// range check
if (fieldNum < 0 || fieldNum >= this.numFields) {
throw new IndexOutOfBoundsException();
}
if (target == null) {
throw new NullPointerException("The target object may not be null");
}
// get offset and check for null
final int offset = this.offsets[fieldNum];
if (offset == NULL_INDICATOR_OFFSET) {
return null;
} else if (offset == MODIFIED_INDICATOR_OFFSET) {
// value that has been set is new or modified
// bring the binary in sync so that the deserialization gives the correct result
return (T) this.writeFields[fieldNum];
}
final int limit = offset + this.lengths[fieldNum];
deserialize(target, offset, limit, fieldNum);
return target;
}
|
Gets the field at the given position. The method tries to deserialize the field into the given target value. If the field has been changed since the last (de)serialization, or is null, then the target value is left unchanged and the changed value (or null) is returned.
<p>In all cases, the returned value contains the correct data (or is correctly null).
@param fieldNum The position of the field.
@param target The value to deserialize the field into.
@return The value with the contents of the requested field, or null, if the field is null.
|
getField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public boolean getFieldInto(int fieldNum, Value target) {
// range check
if (fieldNum < 0 || fieldNum >= this.numFields) {
throw new IndexOutOfBoundsException();
}
// get offset and check for null
int offset = this.offsets[fieldNum];
if (offset == NULL_INDICATOR_OFFSET) {
return false;
} else if (offset == MODIFIED_INDICATOR_OFFSET) {
// value that has been set is new or modified
// bring the binary in sync so that the deserialization gives the correct result
updateBinaryRepresenation();
offset = this.offsets[fieldNum];
}
final int limit = offset + this.lengths[fieldNum];
deserialize(target, offset, limit, fieldNum);
return true;
}
|
Gets the field at the given position. If the field at that position is null, then this method
leaves the target field unchanged and returns false.
@param fieldNum The position of the field.
@param target The value to deserialize the field into.
@return True, if the field was deserialized properly, false, if the field was null.
|
getFieldInto
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public boolean getFieldsInto(int[] positions, Value[] targets) {
for (int i = 0; i < positions.length; i++) {
if (!getFieldInto(positions[i], targets[i])) {
return false;
}
}
return true;
}
|
Gets the fields at the given positions into an array. If at any position a field is null,
then this method returns false. All fields that have been successfully read until the failing
read are correctly contained in the record. All other fields are not set.
@param positions The positions of the fields to get.
@param targets The values into which the content of the fields is put.
@return True if all fields were successfully read, false if some read failed.
|
getFieldsInto
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void getFieldsIntoCheckingNull(int[] positions, Value[] targets) {
for (int i = 0; i < positions.length; i++) {
if (!getFieldInto(positions[i], targets[i])) {
throw new NullKeyFieldException(i);
}
}
}
|
Gets the fields at the given positions into an array. If at any position a field is null,
then this method throws a {@link NullKeyFieldException}. All fields that have been successfully
read until the failing read are correctly contained in the record. All other fields are not
set.
@param positions The positions of the fields to get.
@param targets The values into which the content of the fields is put.
@throws NullKeyFieldException in case of a failing field read.
|
getFieldsIntoCheckingNull
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void setNull(int field) {
// range check
if (field < 0 || field >= this.numFields) {
throw new IndexOutOfBoundsException();
}
internallySetField(field, null);
}
|
Sets the field at the given position to <code>null</code>.
@param field The field index.
@throws IndexOutOfBoundsException Thrown, when the position is not between 0 (inclusive) and
the number of fields (exclusive).
|
setNull
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void setNull(long mask) {
for (int i = 0; i < this.numFields; i++, mask >>>= 1) {
if ((mask & 0x1) != 0) {
internallySetField(i, null);
}
}
}
|
Sets the fields to <code>null</code> using the given bit mask. The bits correspond to the
individual columns: <code>(1 == nullify, 0 == keep)</code>.
@param mask Bit mask, where the i-th least significant bit represents the i-th field in the
record.
|
setNull
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
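A minimal sketch contrasting the single-field and mask-based setNull variants; `record` stands for a previously populated Record with at least four fields:

record.setNull(3);                      // int overload: nullifies only field 3
long mask = (1L << 0) | (1L << 2);      // bit i set means "nullify field i"
record.setNull(mask);                   // long overload: nullifies fields 0 and 2, keeps field 1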
public void setNull(long[] mask) {
for (int maskPos = 0, i = 0; i < this.numFields; ) {
long currMask = mask[maskPos];
for (int k = 64; i < this.numFields && k > 0; --k, i++, currMask >>>= 1) {
if ((currMask & 0x1) != 0) {
internallySetField(i, null);
}
}
}
}
|
Sets the fields to <code>null</code> using the given bit mask. The bits correspond to the
individual columns: <code>(1 == nullify, 0 == keep)</code>.
@param mask Bit mask, where the i-th least significant bit in the n-th bit mask represents
the <code>(n*64) + i</code>-th field in the record.
|
setNull
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void unionFields(Record other) {
final int minFields = Math.min(this.numFields, other.numFields);
final int maxFields = Math.max(this.numFields, other.numFields);
final int[] offsets = this.offsets.length >= maxFields ? this.offsets : new int[maxFields];
final int[] lengths = this.lengths.length >= maxFields ? this.lengths : new int[maxFields];
if (!(this.isModified() || other.isModified())) {
// handle the special (but common) case where both records have a valid binary
// representation differently
// allocate space for the switchBuffer first
final int estimatedLength = this.binaryLen + other.binaryLen;
this.serializer.memory =
(this.switchBuffer != null && this.switchBuffer.length >= estimatedLength)
? this.switchBuffer
: new byte[estimatedLength];
this.serializer.position = 0;
try {
// common loop for both records
for (int i = 0; i < minFields; i++) {
final int thisOff = this.offsets[i];
if (thisOff == NULL_INDICATOR_OFFSET) {
final int otherOff = other.offsets[i];
if (otherOff == NULL_INDICATOR_OFFSET) {
offsets[i] = NULL_INDICATOR_OFFSET;
} else {
// take field from other record
offsets[i] = this.serializer.position;
this.serializer.write(other.binaryData, otherOff, other.lengths[i]);
lengths[i] = other.lengths[i];
}
} else {
// copy field from this one
offsets[i] = this.serializer.position;
this.serializer.write(this.binaryData, thisOff, this.lengths[i]);
lengths[i] = this.lengths[i];
}
}
// add the trailing fields from one record
if (minFields != maxFields) {
final Record sourceForRemainder = this.numFields > minFields ? this : other;
int begin = -1;
int end = -1;
int offsetDelta = 0;
// go through the offsets, find the non-null fields to account for the remaining
// data
for (int k = minFields; k < maxFields; k++) {
final int off = sourceForRemainder.offsets[k];
if (off == NULL_INDICATOR_OFFSET) {
offsets[k] = NULL_INDICATOR_OFFSET;
} else {
end = sourceForRemainder.offsets[k] + sourceForRemainder.lengths[k];
if (begin == -1) {
// first non null column in the remainder
begin = sourceForRemainder.offsets[k];
offsetDelta = this.serializer.position - begin;
}
offsets[k] = sourceForRemainder.offsets[k] + offsetDelta;
}
}
// copy the remaining fields directly as binary
if (begin != -1) {
this.serializer.write(sourceForRemainder.binaryData, begin, end - begin);
}
// the lengths can be copied directly
if (lengths != sourceForRemainder.lengths) {
System.arraycopy(
sourceForRemainder.lengths,
minFields,
lengths,
minFields,
maxFields - minFields);
}
}
} catch (Exception ioex) {
throw new RuntimeException(
"Error creating field union of record data" + ioex.getMessage() == null
? "."
: ": " + ioex.getMessage(),
ioex);
}
} else {
// the general case, where at least one of the two records has a binary representation
// that is not in sync.
final int estimatedLength =
(this.binaryLen > 0
? this.binaryLen
: this.numFields * DEFAULT_FIELD_LEN_ESTIMATE)
+ (other.binaryLen > 0
? other.binaryLen
: other.numFields * DEFAULT_FIELD_LEN_ESTIMATE);
this.serializer.memory =
(this.switchBuffer != null && this.switchBuffer.length >= estimatedLength)
? this.switchBuffer
: new byte[estimatedLength];
this.serializer.position = 0;
try {
// common loop for both records
for (int i = 0; i < minFields; i++) {
final int thisOff = this.offsets[i];
if (thisOff == NULL_INDICATOR_OFFSET) {
final int otherOff = other.offsets[i];
if (otherOff == NULL_INDICATOR_OFFSET) {
offsets[i] = NULL_INDICATOR_OFFSET;
} else if (otherOff == MODIFIED_INDICATOR_OFFSET) {
// serialize modified field from other record
offsets[i] = this.serializer.position;
other.writeFields[i].write(this.serializer);
lengths[i] = this.serializer.position - offsets[i];
} else {
// take field from other record binary
offsets[i] = this.serializer.position;
this.serializer.write(other.binaryData, otherOff, other.lengths[i]);
lengths[i] = other.lengths[i];
}
} else if (thisOff == MODIFIED_INDICATOR_OFFSET) {
// serialize modified field from this record
offsets[i] = this.serializer.position;
this.writeFields[i].write(this.serializer);
lengths[i] = this.serializer.position - offsets[i];
} else {
// copy field from this one
offsets[i] = this.serializer.position;
this.serializer.write(this.binaryData, thisOff, this.lengths[i]);
lengths[i] = this.lengths[i];
}
}
// add the trailing fields from one record
if (minFields != maxFields) {
final Record sourceForRemainder = this.numFields > minFields ? this : other;
// go through the offsets, find the non-null fields
for (int k = minFields; k < maxFields; k++) {
final int off = sourceForRemainder.offsets[k];
if (off == NULL_INDICATOR_OFFSET) {
offsets[k] = NULL_INDICATOR_OFFSET;
} else if (off == MODIFIED_INDICATOR_OFFSET) {
// serialize modified field from the source record
offsets[k] = this.serializer.position;
sourceForRemainder.writeFields[k].write(this.serializer);
lengths[k] = this.serializer.position - offsets[k];
} else {
// copy field from the source record binary
offsets[k] = this.serializer.position;
final int len = sourceForRemainder.lengths[k];
this.serializer.write(sourceForRemainder.binaryData, off, len);
lengths[k] = len;
}
}
}
} catch (Exception ioex) {
                throw new RuntimeException(
                        "Error creating field union of record data"
                                + (ioex.getMessage() == null ? "." : ": " + ioex.getMessage()),
                        ioex);
}
}
serializeHeader(this.serializer, offsets, maxFields);
// set the fields
this.switchBuffer = this.binaryData;
this.binaryData = serializer.memory;
this.binaryLen = serializer.position;
this.numFields = maxFields;
this.offsets = offsets;
this.lengths = lengths;
this.firstModifiedPos = Integer.MAX_VALUE;
// make sure that the object arrays reflect the size as well
if (this.readFields == null || this.readFields.length < maxFields) {
final Value[] na = new Value[maxFields];
System.arraycopy(this.readFields, 0, na, 0, this.readFields.length);
this.readFields = na;
}
this.writeFields =
(this.writeFields == null || this.writeFields.length < maxFields)
? new Value[maxFields]
: this.writeFields;
}
|
Unions the other record's fields with this record's fields. After the method invocation with
record <code>B</code> as the parameter, this record <code>A</code> will contain at field
<code>i</code>:
<ul>
<li>Field <code>i</code> from record <code>A</code>, if that field is within record <code>A
</code>'s number of fields and is not <i>null</i>.
<li>Field <code>i</code> from record <code>B</code>, if that field is within record <code>B
</code>'s number of fields.
</ul>
It is not necessary that both records have the same number of fields. This record will have
the number of fields of the larger of the two records. Naturally, if both <code>A</code> and
<code>B</code> have field <code>i</code> set to <i>null</i>, this record will have
<i>null</i> at that position.
@param other The record whose fields to union with this record's fields.
|
unionFields
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
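A minimal usage sketch of the union semantics described above, assuming the org.apache.flink.types.Record, IntValue, and StringValue APIs referenced elsewhere in this file; the class name and field values are illustrative, not taken from the Flink sources.
import org.apache.flink.types.IntValue;
import org.apache.flink.types.Record;
import org.apache.flink.types.StringValue;

public class UnionFieldsSketch {
    public static void main(String[] args) {
        // record A: field 0 set, field 1 explicitly null
        Record a = new Record(2);
        a.setField(0, new IntValue(1));
        a.setNull(1);

        // record B: three fields, all set
        Record b = new Record(3);
        b.setField(0, new IntValue(42));      // ignored: A already has a non-null field 0
        b.setField(1, new StringValue("x"));  // fills A's null field 1
        b.setField(2, new StringValue("y"));  // trailing field, taken over from B

        a.unionFields(b);

        System.out.println(a.getNumFields());                  // 3
        System.out.println(a.getField(0, IntValue.class));     // 1
        System.out.println(a.getField(1, StringValue.class));  // x
    }
}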
public Record createCopy() {
final Record rec = new Record();
copyTo(rec);
return rec;
}
|
Creates an exact copy of this record.
@return An exact copy of this record.
|
createCopy
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
public void copyFrom(
final Record source, final int[] sourcePositions, final int[] targetPositions) {
final int[] sourceOffsets = source.offsets;
final int[] sourceLengths = source.lengths;
final byte[] sourceBuffer = source.binaryData;
final Value[] sourceFields = source.writeFields;
boolean anyFieldIsBinary = false;
int maxFieldNum = 0;
for (int i = 0; i < sourcePositions.length; i++) {
final int sourceFieldNum = sourcePositions[i];
final int sourceOffset = sourceOffsets[sourceFieldNum];
final int targetFieldNum = targetPositions[i];
maxFieldNum = Math.max(targetFieldNum, maxFieldNum);
if (sourceOffset == NULL_INDICATOR_OFFSET) {
// set null on existing field (new fields are null by default)
if (targetFieldNum < numFields) {
internallySetField(targetFieldNum, null);
}
} else if (sourceOffset != MODIFIED_INDICATOR_OFFSET) {
anyFieldIsBinary = true;
}
}
if (numFields < maxFieldNum + 1) {
setNumFields(maxFieldNum + 1);
}
final int[] targetLengths = this.lengths;
final int[] targetOffsets = this.offsets;
// reserve space in binaryData for the binary source fields
if (anyFieldIsBinary) {
for (int i = 0; i < sourcePositions.length; i++) {
final int sourceFieldNum = sourcePositions[i];
final int sourceOffset = sourceOffsets[sourceFieldNum];
if (sourceOffset != MODIFIED_INDICATOR_OFFSET
&& sourceOffset != NULL_INDICATOR_OFFSET) {
final int targetFieldNum = targetPositions[i];
targetLengths[targetFieldNum] = sourceLengths[sourceFieldNum];
internallySetField(targetFieldNum, RESERVE_SPACE);
}
}
updateBinaryRepresenation();
}
final byte[] targetBuffer = this.binaryData;
for (int i = 0; i < sourcePositions.length; i++) {
final int sourceFieldNum = sourcePositions[i];
final int sourceOffset = sourceOffsets[sourceFieldNum];
final int targetFieldNum = targetPositions[i];
if (sourceOffset == MODIFIED_INDICATOR_OFFSET) {
internallySetField(targetFieldNum, sourceFields[sourceFieldNum]);
} else if (sourceOffset != NULL_INDICATOR_OFFSET) {
// bin-copy
final int targetOffset = targetOffsets[targetFieldNum];
final int length = targetLengths[targetFieldNum];
System.arraycopy(sourceBuffer, sourceOffset, targetBuffer, targetOffset, length);
}
}
}
|
Bin-copies fields from a source record to this record. The following caveats apply:
<p>If the source field is in a modified state, no binary representation will exist yet. In
that case, this method is equivalent to {@code setField(..., source.getField(..., <class>))}.
In particular, if setValue is called on the source field Value instance, that change will
propagate to this record.
<p>If the source field has already been serialized, then the binary representation will be
copied. Further modifications to the source field will not be observable via this record, but
attempting to read the field from this record will cause it to be deserialized.
<p>Finally, bin-copying a source field requires calling updateBinaryRepresentation on this
instance in order to reserve space in the binaryData array. If none of the source fields are
actually bin-copied, then updateBinaryRepresentation won't be called.
@param source The record to copy fields from.
@param sourcePositions The positions of the fields to copy in the source record.
@param targetPositions The positions in this record at which the copied fields are set.
|
copyFrom
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
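A short hedged sketch of copyFrom, using illustrative source and target positions; the class name and field values are hypothetical.
import org.apache.flink.types.IntValue;
import org.apache.flink.types.Record;
import org.apache.flink.types.StringValue;

public class CopyFromSketch {
    public static void main(String[] args) {
        Record source = new Record(3);
        source.setField(0, new IntValue(7));
        source.setField(1, new StringValue("a"));
        source.setField(2, new StringValue("b"));

        // copy source fields 0 and 2 into target positions 1 and 0, respectively
        Record target = new Record();
        target.copyFrom(source, new int[] {0, 2}, new int[] {1, 0});

        System.out.println(target.getField(0, StringValue.class)); // b
        System.out.println(target.getField(1, IntValue.class));    // 7
    }
}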
public final boolean equalsFields(
int[] positions, Value[] searchValues, Value[] deserializationHolders) {
for (int i = 0; i < positions.length; i++) {
final Value v = getField(positions[i], deserializationHolders[i]);
if (v == null || (!v.equals(searchValues[i]))) {
return false;
}
}
return true;
}
|
Checks the values of this record and a given list of values at specified positions for
equality. The values of this record are deserialized and compared against the corresponding
search value. The positions specify which values are compared. The method returns true if the
values on all positions are equal and false otherwise.
@param positions The positions of the values to check for equality.
@param searchValues The values against which the values of this record are compared.
@param deserializationHolders An array to hold the deserialized values of this record.
@return True if all the values on all positions are equal, false otherwise.
|
equalsFields
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Record.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Record.java
|
Apache-2.0
|
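A hedged sketch of equalsFields; the positions, search values, and deserialization holders below are illustrative.
import org.apache.flink.types.IntValue;
import org.apache.flink.types.Record;
import org.apache.flink.types.StringValue;
import org.apache.flink.types.Value;

public class EqualsFieldsSketch {
    public static void main(String[] args) {
        Record record = new Record(2);
        record.setField(0, new IntValue(3));
        record.setField(1, new StringValue("key"));

        int[] positions = {0, 1};
        Value[] searchValues = {new IntValue(3), new StringValue("key")};
        // one holder per compared position; receives the deserialized field value
        Value[] holders = {new IntValue(), new StringValue()};

        System.out.println(record.equalsFields(positions, searchValues, holders)); // true
    }
}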
public static Row withPositions(RowKind kind, int arity) {
return new Row(kind, new Object[arity], null, null);
}
|
Creates a fixed-length row in position-based field mode.
<p>Fields can be accessed by position via {@link #setField(int, Object)} and {@link
#getField(int)}.
<p>See the class documentation of {@link Row} for more information.
@param kind kind of change a row describes in a changelog
@param arity the number of fields in the row
@return a new row instance
|
withPositions
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row withPositions(int arity) {
return withPositions(RowKind.INSERT, arity);
}
|
Creates a fixed-length row in position-based field mode.
<p>Fields can be accessed by position via {@link #setField(int, Object)} and {@link
#getField(int)}.
<p>By default, a row describes an {@link RowKind#INSERT} change.
<p>See the class documentation of {@link Row} for more information.
@param arity the number of fields in the row
@return a new row instance
|
withPositions
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row withNames(RowKind kind) {
return new Row(kind, null, new HashMap<>(), null);
}
|
Creates a variable-length row in name-based field mode.
<p>Fields can be accessed by name via {@link #setField(String, Object)} and {@link
#getField(String)}.
<p>See the class documentation of {@link Row} for more information.
@param kind kind of change a row describes in a changelog
@return a new row instance
|
withNames
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row withNames() {
return withNames(RowKind.INSERT);
}
|
Creates a variable-length row in name-based field mode.
<p>Fields can be accessed by name via {@link #setField(String, Object)} and {@link
#getField(String)}.
<p>By default, a row describes an {@link RowKind#INSERT} change.
<p>See the class documentation of {@link Row} for more information.
@return a new row instance
|
withNames
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
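A brief sketch of name-based field mode, assuming the Row API documented in this file; the field names and values are hypothetical.
import org.apache.flink.types.Row;

public class NamedRowSketch {
    public static void main(String[] args) {
        // variable-length, name-based row; describes an INSERT change by default
        Row row = Row.withNames();
        row.setField("name", "Alice");
        row.setField("age", 42);

        System.out.println(row.getArity());       // 2
        System.out.println(row.getField("name")); // Alice
    }
}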
public RowKind getKind() {
return kind;
}
|
Returns the kind of change that this row describes in a changelog.
<p>By default, a row describes an {@link RowKind#INSERT} change.
@see RowKind
|
getKind
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public void setKind(RowKind kind) {
Preconditions.checkNotNull(kind, "Row kind must not be null.");
this.kind = kind;
}
|
Sets the kind of change that this row describes in a changelog.
<p>By default, a row describes an {@link RowKind#INSERT} change.
@see RowKind
|
setKind
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public int getArity() {
if (fieldByPosition != null) {
return fieldByPosition.length;
} else {
assert fieldByName != null;
return fieldByName.size();
}
}
|
Returns the number of fields in the row.
<p>Note: The row kind is kept separate from the fields and is not included in this number.
@return the number of fields in the row
|
getArity
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public @Nullable Object getField(int pos) {
if (fieldByPosition != null) {
return fieldByPosition[pos];
} else {
throw new IllegalArgumentException(
"Accessing a field by position is not supported in name-based field mode.");
}
}
|
Returns the field's content at the specified field position.
<p>Note: The row must operate in position-based field mode.
@param pos the position of the field, 0-based
@return the field's content at the specified position
|
getField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
@SuppressWarnings("unchecked")
public <T> T getFieldAs(int pos) {
return (T) getField(pos);
}
|
Returns the field's content at the specified field position.
<p>Note: The row must operate in position-based field mode.
<p>This method avoids a lot of manual casting in the user implementation.
@param pos the position of the field, 0-based
@return the field's content at the specified position
|
getFieldAs
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
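A minimal sketch of typed position-based access via getFieldAs; the values are illustrative.
import org.apache.flink.types.Row;

public class GetFieldAsSketch {
    public static void main(String[] args) {
        Row row = Row.of("hello", true, 1L);

        // getFieldAs avoids the explicit cast that getField(int) would require
        String s = row.getFieldAs(0);
        Boolean b = row.getFieldAs(1);
        Long l = row.getFieldAs(2);

        System.out.println(s + " " + b + " " + l); // hello true 1
    }
}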
public @Nullable Object getField(String name) {
if (fieldByName != null) {
return fieldByName.get(name);
} else if (positionByName != null) {
final Integer pos = positionByName.get(name);
if (pos == null) {
throw new IllegalArgumentException(
String.format("Unknown field name '%s' for mapping to a position.", name));
}
assert fieldByPosition != null;
return fieldByPosition[pos];
} else {
throw new IllegalArgumentException(
"Accessing a field by name is not supported in position-based field mode.");
}
}
|
Returns the field's content using the specified field name.
<p>Note: The row must operate in name-based field mode.
@param name the name of the field, set previously
@return the field's content, or null if not set previously
|
getField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
@SuppressWarnings("unchecked")
public <T> T getFieldAs(String name) {
return (T) getField(name);
}
|
Returns the field's content using the specified field name.
<p>Note: The row must operate in name-based field mode.
<p>This method avoids a lot of manual casting in the user implementation.
@param name the name of the field, set previously
@return the field's content
|
getFieldAs
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public void setField(int pos, @Nullable Object value) {
if (fieldByPosition != null) {
fieldByPosition[pos] = value;
} else {
throw new IllegalArgumentException(
"Accessing a field by position is not supported in name-based field mode.");
}
}
|
Sets the field's content at the specified position.
<p>Note: The row must operate in position-based field mode.
@param pos the position of the field, 0-based
@param value the value to be assigned to the field at the specified position
|
setField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public void setField(String name, @Nullable Object value) {
if (fieldByName != null) {
fieldByName.put(name, value);
} else if (positionByName != null) {
final Integer pos = positionByName.get(name);
if (pos == null) {
throw new IllegalArgumentException(
String.format(
"Unknown field name '%s' for mapping to a row position. "
+ "Available names are: %s",
name, positionByName.keySet()));
}
assert fieldByPosition != null;
fieldByPosition[pos] = value;
} else {
throw new IllegalArgumentException(
"Accessing a field by name is not supported in position-based field mode.");
}
}
|
Sets the field's content using the specified field name.
<p>Note: The row must operate in name-based field mode.
@param name the name of the field
@param value the value to be assigned to the field
|
setField
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public @Nullable Set<String> getFieldNames(boolean includeNamedPositions) {
if (fieldByName != null) {
return fieldByName.keySet();
}
if (includeNamedPositions && positionByName != null) {
return positionByName.keySet();
}
return null;
}
|
Returns the set of field names if this row operates in name-based field mode, otherwise null.
<p>This method is a helper method for serializers and converters but can also be useful for
other row transformations.
@param includeNamedPositions whether or not to include named positions when this row operates
in a hybrid field mode
|
getFieldNames
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row of(Object... values) {
final Row row = new Row(values.length);
for (int i = 0; i < values.length; i++) {
row.setField(i, values[i]);
}
return row;
}
|
Creates a fixed-length row in position-based field mode and assigns the given values to the
row's fields.
<p>This method should be more convenient than {@link Row#withPositions(int)} in many cases.
<p>For example:
<pre>
Row.of("hello", true, 1L);
</pre>
instead of
<pre>
Row row = Row.withPositions(3);
row.setField(0, "hello");
row.setField(1, true);
row.setField(2, 1L);
</pre>
<p>By default, a row describes an {@link RowKind#INSERT} change.
|
of
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row ofKind(RowKind kind, Object... values) {
final Row row = new Row(kind, values.length);
for (int i = 0; i < values.length; i++) {
row.setField(i, values[i]);
}
return row;
}
|
Creates a fixed-length row in position-based field mode with given kind and assigns the given
values to the row's fields.
<p>This method should be more convenient than {@link Row#withPositions(RowKind, int)} in many
cases.
<p>For example:
<pre>
Row.ofKind(RowKind.INSERT, "hello", true, 1L);
</pre>
instead of
<pre>
Row row = Row.withPositions(RowKind.INSERT, 3);
row.setField(0, "hello");
row.setField(1, true);
row.setField(2, 1L);
</pre>
|
ofKind
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row copy(Row row) {
final Object[] newFieldByPosition;
if (row.fieldByPosition != null) {
newFieldByPosition = new Object[row.fieldByPosition.length];
System.arraycopy(
row.fieldByPosition, 0, newFieldByPosition, 0, newFieldByPosition.length);
} else {
newFieldByPosition = null;
}
final Map<String, Object> newFieldByName;
if (row.fieldByName != null) {
newFieldByName = new HashMap<>(row.fieldByName);
} else {
newFieldByName = null;
}
return new Row(row.kind, newFieldByPosition, newFieldByName, row.positionByName);
}
|
Creates a new row which is copied from another row (including its {@link RowKind}).
<p>This method does not perform a deep copy. Use {@link RowSerializer#copy(Row)} if required.
|
copy
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row project(Row row, int[] fieldPositions) {
final Row newRow = Row.withPositions(row.kind, fieldPositions.length);
for (int i = 0; i < fieldPositions.length; i++) {
newRow.setField(i, row.getField(fieldPositions[i]));
}
return newRow;
}
|
Creates a new row with projected fields and identical {@link RowKind} from another row.
<p>This method does not perform a deep copy.
<p>Note: The row must operate in position-based field mode. Field names are not projected.
@param fieldPositions field indices to be projected
|
project
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
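A short sketch of position-based projection; the field contents and the printed form are illustrative.
import org.apache.flink.types.Row;

public class ProjectSketch {
    public static void main(String[] args) {
        Row row = Row.of("a", "b", "c", "d");

        // keep only fields 3 and 1, in that order; the RowKind is preserved
        Row projected = Row.project(row, new int[] {3, 1});

        System.out.println(projected.getArity()); // 2
        System.out.println(projected);            // e.g. +I[d, b]
    }
}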
public static Row project(Row row, String[] fieldNames) {
final Row newRow = Row.withNames(row.getKind());
for (String fieldName : fieldNames) {
newRow.setField(fieldName, row.getField(fieldName));
}
return newRow;
}
|
Creates a new row with projected fields and identical {@link RowKind} from another row.
<p>This method does not perform a deep copy.
<p>Note: The row must operate in name-based field mode.
@param fieldNames field names to be projected
|
project
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
public static Row join(Row first, Row... remainings) {
Preconditions.checkArgument(
first.fieldByPosition != null,
"All rows must operate in position-based field mode.");
int newLength = first.fieldByPosition.length;
for (Row remaining : remainings) {
Preconditions.checkArgument(
remaining.fieldByPosition != null,
"All rows must operate in position-based field mode.");
newLength += remaining.fieldByPosition.length;
}
final Row joinedRow = new Row(first.kind, newLength);
int index = 0;
// copy the first row
assert joinedRow.fieldByPosition != null;
System.arraycopy(
first.fieldByPosition,
0,
joinedRow.fieldByPosition,
index,
first.fieldByPosition.length);
index += first.fieldByPosition.length;
// copy the remaining rows
for (Row remaining : remainings) {
assert remaining.fieldByPosition != null;
System.arraycopy(
remaining.fieldByPosition,
0,
joinedRow.fieldByPosition,
index,
remaining.fieldByPosition.length);
index += remaining.fieldByPosition.length;
}
return joinedRow;
}
|
Creates a new row with fields that are copied from the other rows and appended to the
resulting row in the given order. The {@link RowKind} of the first row determines the {@link
RowKind} of the result.
<p>This method does not perform a deep copy.
<p>Note: All rows must operate in position-based field mode.
|
join
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/Row.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/Row.java
|
Apache-2.0
|
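A minimal sketch of joining position-based rows; the values are illustrative.
import org.apache.flink.types.Row;

public class JoinRowsSketch {
    public static void main(String[] args) {
        Row left = Row.of(1, "a");
        Row right = Row.of(true);

        // fields are appended in order; the kind of the first row is used
        Row joined = Row.join(left, right);

        System.out.println(joined.getArity()); // 3
        System.out.println(joined);            // e.g. +I[1, a, true]
    }
}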
public String shortString() {
return shortString;
}
|
Returns a short string representation of this {@link RowKind}.
<p>
<ul>
<li>"+I" represents {@link #INSERT}.
<li>"-U" represents {@link #UPDATE_BEFORE}.
<li>"+U" represents {@link #UPDATE_AFTER}.
<li>"-D" represents {@link #DELETE}.
</ul>
|
shortString
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
Apache-2.0
|
public byte toByteValue() {
return value;
}
|
Returns the byte value representation of this {@link RowKind}. The byte value is used for
serialization and deserialization.
<p>
<ul>
<li>"0" represents {@link #INSERT}.
<li>"1" represents {@link #UPDATE_BEFORE}.
<li>"2" represents {@link #UPDATE_AFTER}.
<li>"3" represents {@link #DELETE}.
</ul>
|
toByteValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
Apache-2.0
|
public static RowKind fromByteValue(byte value) {
switch (value) {
case 0:
return INSERT;
case 1:
return UPDATE_BEFORE;
case 2:
return UPDATE_AFTER;
case 3:
return DELETE;
default:
throw new UnsupportedOperationException(
"Unsupported byte value '" + value + "' for row kind.");
}
}
|
Creates a {@link RowKind} from the given byte value. Each {@link RowKind} has a byte value
representation.
@see #toByteValue() for mapping of byte value and {@link RowKind}.
|
fromByteValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowKind.java
|
Apache-2.0
|
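A small round-trip sketch of the RowKind byte representation, based on the mapping documented above.
import org.apache.flink.types.RowKind;

public class RowKindSketch {
    public static void main(String[] args) {
        // byte value round-trip, e.g. for manual serialization
        byte b = RowKind.UPDATE_AFTER.toByteValue();
        RowKind restored = RowKind.fromByteValue(b);

        System.out.println(b);                      // 2
        System.out.println(restored.shortString()); // +U
    }
}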
public static boolean compareRows(List<Row> l1, List<Row> l2) {
return compareRows(l1, l2, false);
}
|
Compares two {@link List}s of {@link Row} for deep equality. This method supports all
conversion classes of the table ecosystem.
|
compareRows
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
Apache-2.0
|
public static boolean compareRows(List<Row> l1, List<Row> l2, boolean ignoreOrder) {
if (l1 == l2) {
return true;
} else if (l1 == null || l2 == null) {
return false;
}
if (ignoreOrder) {
return deepEqualsListUnordered(l1, l2);
} else {
return deepEqualsListOrdered(l1, l2);
}
}
|
Compares two {@link List}s of {@link Row} for deep equality. This method supports all
conversion classes of the table ecosystem. The top-level lists can be compared with or
without order.
|
compareRows
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
Apache-2.0
|
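A hedged sketch of ordered versus unordered list comparison with compareRows; the rows are illustrative.
import java.util.Arrays;
import java.util.List;

import org.apache.flink.types.Row;
import org.apache.flink.types.RowUtils;

public class CompareRowsSketch {
    public static void main(String[] args) {
        List<Row> expected = Arrays.asList(Row.of(1, "a"), Row.of(2, "b"));
        List<Row> actual = Arrays.asList(Row.of(2, "b"), Row.of(1, "a"));

        // same rows in a different order: ordered comparison fails, unordered succeeds
        System.out.println(RowUtils.compareRows(expected, actual));       // false
        System.out.println(RowUtils.compareRows(expected, actual, true)); // true
    }
}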
@Internal
public static Row createRowWithNamedPositions(
RowKind kind, Object[] fieldByPosition, LinkedHashMap<String, Integer> positionByName) {
return new Row(kind, fieldByPosition, null, positionByName);
}
|
Internal utility for creating a row in static named-position field mode.
|
createRowWithNamedPositions
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
Apache-2.0
|
static boolean deepEqualsRow(
RowKind kind1,
@Nullable Object[] fieldByPosition1,
@Nullable Map<String, Object> fieldByName1,
@Nullable LinkedHashMap<String, Integer> positionByName1,
RowKind kind2,
@Nullable Object[] fieldByPosition2,
@Nullable Map<String, Object> fieldByName2,
@Nullable LinkedHashMap<String, Integer> positionByName2) {
if (kind1 != kind2) {
return false;
}
// positioned == positioned
else if (fieldByPosition1 != null && fieldByPosition2 != null) {
// positionByName is not included
return deepEqualsInternal(fieldByPosition1, fieldByPosition2);
}
// named == named
else if (fieldByName1 != null && fieldByName2 != null) {
return deepEqualsInternal(fieldByName1, fieldByName2);
}
// named positioned == named
else if (positionByName1 != null && fieldByName2 != null) {
return deepEqualsNamedRows(fieldByPosition1, positionByName1, fieldByName2);
}
// named == named positioned
else if (positionByName2 != null && fieldByName1 != null) {
return deepEqualsNamedRows(fieldByPosition2, positionByName2, fieldByName1);
}
return false;
}
|
Compares two objects with proper (nested) equality semantics. This method supports all
external and most internal conversion classes of the table ecosystem.
|
deepEqualsRow
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
Apache-2.0
|
static int deepHashCodeRow(
RowKind kind,
@Nullable Object[] fieldByPosition,
@Nullable Map<String, Object> fieldByName) {
int result = kind.toByteValue(); // for stable hash across JVM instances
if (fieldByPosition != null) {
// positionByName is not included
result = 31 * result + deepHashCodeInternal(fieldByPosition);
} else {
result = 31 * result + deepHashCodeInternal(fieldByName);
}
return result;
}
|
Hashes a row's fields (together with its kind) with proper (nested) hashing semantics. This method supports all external
and most internal conversion classes of the table ecosystem.
|
deepHashCodeRow
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/RowUtils.java
|
Apache-2.0
|
public short getValue() {
return this.value;
}
|
Returns the value of the encapsulated short.
@return the value of the encapsulated short.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/ShortValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/ShortValue.java
|
Apache-2.0
|
public void setLength(int len) {
if (len < 0 || len > this.len) {
throw new IllegalArgumentException("Length must be between 0 and the current length.");
}
this.len = len;
}
|
Sets a new length for the string.
@param len The new length.
|
setLength
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
public char[] getCharArray() {
return this.value;
}
|
Returns this StringValue's internal character data. The array might be larger than the string
which is currently stored in the StringValue.
@return The character data.
|
getCharArray
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
public String getValue() {
return toString();
}
|
Gets this StringValue as a String.
@return A String resembling the contents of this StringValue.
|
getValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
public void setValue(char[] chars, int offset, int len) {
checkNotNull(chars);
if (offset < 0 || len < 0 || offset > chars.length - len) {
throw new IndexOutOfBoundsException();
}
ensureSize(len);
System.arraycopy(chars, offset, this.value, 0, len);
this.len = len;
this.hashCode = 0;
}
|
Sets the value of the StringValue to a substring of the given value.
@param chars The new string value (as a character array).
@param offset The position to start the substring.
@param len The length of the substring.
|
setValue
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
public void setValueAscii(byte[] bytes, int offset, int len) {
if (bytes == null) {
throw new NullPointerException("Bytes must not be null");
}
if (len < 0 || offset < 0 || offset > bytes.length - len) {
throw new IndexOutOfBoundsException();
}
ensureSize(len);
this.len = len;
this.hashCode = 0;
final char[] chars = this.value;
for (int i = 0, limit = offset + len; offset < limit; offset++, i++) {
chars[i] = (char) (bytes[offset] & 0xff);
}
}
|
Sets the value of this <code>StringValue</code>, assuming that the binary data is ASCII
coded. The n-th character of the <code>StringValue</code> corresponds directly to the n-th
byte in the given array after the offset.
@param bytes The binary character data.
@param offset The offset in the array.
@param len The number of bytes to read from the array.
|
setValueAscii
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
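A minimal sketch of setValueAscii, assuming a StringValue no-argument constructor; the input bytes are illustrative.
import java.nio.charset.StandardCharsets;

import org.apache.flink.types.StringValue;

public class AsciiSketch {
    public static void main(String[] args) {
        byte[] ascii = "hello".getBytes(StandardCharsets.US_ASCII);

        StringValue sv = new StringValue();
        // each byte after the offset is interpreted as one ASCII character
        sv.setValueAscii(ascii, 0, ascii.length);

        System.out.println(sv.getValue()); // hello
    }
}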
public StringValue substring(int start) {
return substring(start, this.len);
}
|
Returns a new <tt>StringValue</tt> that is a substring of this string. The substring
begins at the given <code>start</code> index and ends at the end of the string.
@param start The beginning index, inclusive.
@return The substring.
@exception IndexOutOfBoundsException Thrown, if the start is negative.
|
substring
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
public StringValue substring(int start, int end) {
return new StringValue(this, start, end - start);
}
|
Returns a new <tt>StringValue</tt> that is a substring of this string. The substring
begins at the given <code>start</code> index and ends at <code>end - 1</code>.
@param start The beginning index, inclusive.
@param end The ending index, exclusive.
@return The substring.
@exception IndexOutOfBoundsException Thrown, if the start is negative, or the end is larger
than the length.
|
substring
|
java
|
apache/flink
|
flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/types/StringValue.java
|
Apache-2.0
|
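A short sketch of the two substring variants; the sample string is illustrative.
import org.apache.flink.types.StringValue;

public class SubstringSketch {
    public static void main(String[] args) {
        StringValue sv = new StringValue("hello world");

        // start index is inclusive, end index is exclusive
        StringValue hello = sv.substring(0, 5);
        StringValue world = sv.substring(6); // to the end of the string

        System.out.println(hello); // hello
        System.out.println(world); // world
    }
}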