| name (string, 12-178 chars) | code_snippet (string, 8-36.5k chars) | score (float64, 3.26-3.68) |
|---|---|---|
hbase_AggregateImplementation_getStd_rdh | /**
 * Gives a Pair whose first element is a List containing the sum and the sum of squares, and whose
 * second element is the row count. It is computed for a given combination of column qualifier and column family in
* the given row range as defined in the Scan object. In its current implementation, it takes one
* column family and on... | 3.26 |
hbase_AggregateImplementation_start_rdh | /**
* Stores a reference to the coprocessor environment provided by the
* {@link org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost} from the region where this
* coprocessor is loaded. Since this is a coprocessor endpoint, it always expects to be loaded on
* a table region, so always expects this to be an i... | 3.26 |
hbase_AggregateImplementation_getAvg_rdh | /**
 * Gives a Pair whose first element is the sum and whose second element is the row count, computed for a given
* combination of column qualifier and column family in the given row range as defined in the Scan
* object. In its current implementation, it takes one column family and one column qualifier (if
* pr... | 3.26 |
hbase_AggregateImplementation_getSum_rdh | /**
* Gives the sum for a given combination of column qualifier and column family, in the given row
* range as defined in the Scan object. In its current implementation, it takes one column family
* and one column qualifier (if provided). In case of null column qualifier, sum for the entire
* column family will be ... | 3.26 |
hbase_AggregateImplementation_constructColumnInterpreterFromRequest_rdh | // Used server-side too by Aggregation Coprocessor Endpoint. Undo this interdependence. TODO.
@SuppressWarnings("unchecked")
ColumnInterpreter<T, S, P, Q, R> constructColumnInterpreterFromRequest(AggregateRequest request) throws IOException {
String className = request.getInterpreterClassName();
try {
... | 3.26 |
hbase_AggregateImplementation_getMedian_rdh | /**
* Gives a List containing sum of values and sum of weights. It is computed for the combination of
* column family and column qualifier(s) in the given row range as defined in the Scan object. In
* its current implementation, it takes one column family and two column qualifiers. The first
* qualifier is for valu... | 3.26 |
hbase_AggregateImplementation_getRowNum_rdh | /**
* Gives the row count for the given column family and column qualifier, in the given row range as
* defined in the Scan object.
*/
@Override
public void getRowNum(RpcController controller, AggregateRequest request,
  RpcCallback<AggregateResponse> done) {
  AggregateResponse response = null;
  long v42 = 0L;
  List<Ce... | 3.26 |
hbase_AggregateImplementation_m0_rdh | /**
* Gives the maximum for a given combination of column qualifier and column family, in the given
* row range as defined in the Scan object. In its current implementation, it takes one column
* family and one column qualifier (if provided). In case of null column qualifier, maximum value
* for the entire column f... | 3.26 |
hbase_UserPermission_getAccessScope_rdh | /**
 * Get this permission's access scope.
*
* @return access scope
*/
public Scope getAccessScope() {
return permission.getAccessScope();
} | 3.26 |
hbase_MetricsSnapshot_addSnapshot_rdh | /**
* Record a single instance of a snapshot
*
* @param time
* time that the snapshot took
*/
public void addSnapshot(long time) {
source.updateSnapshotTime(time);
} | 3.26 |
hbase_MetricsSnapshot_addSnapshotRestore_rdh | /**
 * Record a single instance of a snapshot restore
*
* @param time
* time that the snapshot restore took
*/
public void addSnapshotRestore(long time) {
source.updateSnapshotRestoreTime(time);
} | 3.26 |
hbase_MetricsSnapshot_addSnapshotClone_rdh | /**
 * Record a single instance of a snapshot clone
*
* @param time
* time that the snapshot clone took
*/
public void addSnapshotClone(long time) {
source.updateSnapshotCloneTime(time);
} | 3.26 |
hbase_ChecksumType_nameToType_rdh | /**
* Map a checksum name to a specific type. Do our own names.
*
 * @return Type associated with passed name.
*/
public static ChecksumType nameToType(final String name) {
for (ChecksumType t : ChecksumType.values()) {
if (t.getName().equals(name)) {
return t;
}
}
throw new ... | 3.26 |
hbase_ChecksumType_codeToType_rdh | /**
 * Cannot rely on enum ordinals: they change if an item is removed or moved. Do our own codes.
*
* @return Type associated with passed code.
*/
public static ChecksumType codeToType(final byte b) {
for (ChecksumType t : ChecksumType.values()) {
    if (t.getCode() == b) {
return t;
... | 3.26 |
hbase_EncryptionUtil_createEncryptionContext_rdh | /**
 * Helper to create an encryption context.
*
* @param conf
* The current configuration.
* @param family
* The current column descriptor.
* @return The created encryption context.
* @throws IOException
* if an encryption key for the column cannot be unwrapped
* @throws IllegalStateException
* in cas... | 3.26 |
hbase_EncryptionUtil_wrapKey_rdh | /**
* Protect a key by encrypting it with the secret key of the given subject. The configuration must
* be set up correctly for key alias resolution.
*
* @param conf
* configuration
* @param subject
* subject key alias
* @param key
* the key
* @return the encrypted key bytes
*/
public static byte[] wra... | 3.26 |
hbase_EncryptionUtil_unwrapKey_rdh | /**
* Helper for {@link #unwrapKey(Configuration, String, byte[])} which automatically uses the
* configured master and alternative keys, rather than having to specify a key type to unwrap
* with. The configuration must be set up correctly for key alias resolution.
*
* @param conf
* the current configuration
*... | 3.26 |
hbase_EncryptionUtil_unwrapWALKey_rdh | /**
 * Unwrap a WAL key by decrypting it with the secret key of the given subject. The configuration
* must be set up correctly for key alias resolution.
*
* @param conf
* configuration
* @param subject
* subject key alias
* @param value
* the encrypted key bytes
* @return the raw key bytes
* @throws IO... | 3.26 |
hbase_ImmutableMemStoreLAB_forceCopyOfBigCellInto_rdh | /**
* The process of merging assumes all cells are allocated on mslab. There is a rare case in which
* the first immutable segment, participating in a merge, is a CSLM. Since the CSLM hasn't been
* flattened yet, and there is no point in flattening it (since it is going to be merged), its big
* ... | 3.26 |
hbase_Delete_addFamilyVersion_rdh | /**
* Delete all columns of the specified family with a timestamp equal to the specified timestamp.
*
* @param family
* family name
* @param timestamp
* version timestamp
* @return this for invocation chaining
*/
public Delete addFamilyVersion(final byte[] family, final long timestamp) {
if (timesta... | 3.26 |
hbase_Delete_addFamily_rdh | /**
* Delete all columns of the specified family with a timestamp less than or equal to the specified
* timestamp.
* <p>
* Overrides previous calls to deleteColumn and deleteColumns for the specified family.
*
* @param family
* family name
* @param timestamp
* maximum version timestamp
* @return this for ... | 3.26 |
hbase_Delete_add_rdh | /**
* Add an existing delete marker to this Delete object.
*
* @param cell
* An existing cell of type "delete".
* @return this for invocation chaining
*/
@Override
public Delete add(Cell cell) throws IOException {
    super.add(cell);
    return this;
} | 3.26 |
hbase_Delete_m0_rdh | /**
* Delete the latest version of the specified column. This is an expensive call in that on the
 * server-side, it first does a get to find the latest version's timestamp. Then it adds a delete
 * using the fetched cell's timestamp.
*
* @param family
* family name
* @param qualifier
* column qualifier
* @ret... | 3.26 |
hbase_Delete_addColumns_rdh | /**
* Delete all versions of the specified column with a timestamp less than or equal to the
* specified timestamp.
*
* @param family
* family name
* @param qualifier
* column qualifier
* @param timestamp
* maximum version timestamp
* @return this for invocation chaining
*/
public Delete addColumns(fin... | 3.26 |
hbase_Delete_addColumn_rdh | /**
* Delete the specified version of the specified column.
*
* @param family
* family name
* @param qualifier
* column qualifier
* @param timestamp
* version timestamp
* @return this for invocation chaining
*/
public Delete addColumn(byte[] family, byte[] qualifier, long timestamp) {
if (timesta... | 3.26 |
hbase_SaslServerAuthenticationProvider_init_rdh | /**
* Encapsulates the server-side logic to authenticate a client over SASL. Tied one-to-one to a
* single client authentication implementation.
*/
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.AUTHENTICATION)
@InterfaceStability.Evolving
public interface SaslServerAuthenticationProvider extends SaslAuth... | 3.26 |
hbase_NettyFutureUtils_safeClose_rdh | /**
* Close the channel and eat the returned future by logging the error when the future is completed
* with error.
*/
public static void safeClose(ChannelOutboundInvoker channel) {
consume(channel.close());
} | 3.26 |
hbase_NettyFutureUtils_addListener_rdh | /**
 * This method is used when you just want to add a listener to the given netty future. Ignoring
 * the return value of a Future is considered a bad practice as it may suppress exceptions
* thrown from the code that completes the future, and this method will catch all the exception
* thrown from the {@code l... | 3.26 |
hbase_NettyFutureUtils_safeWriteAndFlush_rdh | /**
* Call writeAndFlush on the channel and eat the returned future by logging the error when the
* future is completed with error.
*/
public static void safeWriteAndFlush(ChannelOutboundInvoker channel, Object msg) {
consume(channel.writeAndFlush(msg));
} | 3.26 |
hbase_NettyFutureUtils_consume_rdh | /**
* Log the error if the future indicates any failure.
 */
@SuppressWarnings("FutureReturnValueIgnored")
public static void consume(Future<?> future) {
  future.addListener(NettyFutureUtils::loggingWhenError);
} | 3.26 |
hbase_NettyFutureUtils_safeWrite_rdh | /**
* Call write on the channel and eat the returned future by logging the error when the future is
* completed with error.
*/
public static void safeWrite(ChannelOutboundInvoker channel, Object msg) {
consume(channel.write(msg));
} | 3.26 |
hbase_LruAdaptiveBlockCache_updateSizeMetrics_rdh | /**
* Helper function that updates the local size counter and also updates any per-cf or
* per-blocktype metrics it can discern from given {@link LruCachedBlock}
*/
private long updateSizeMetrics(LruCachedBlock cb, boolean evict) {
  long heapsize = cb.heapSize();
  BlockType bt = cb.getBuffer().getBlockType();
... | 3.26 |
hbase_LruAdaptiveBlockCache_cacheBlock_rdh | /**
* Cache the block with the specified name and buffer.
* <p>
 * TODO after HBASE-22005, we may cache a block which was allocated off-heap, but our LRU cache
 * sizing is based on heap size, so we should handle this in HBASE-22127. It will introduce a
 * switch for whether to make the LRU cache on-heap or not, if so we may n... | 3.26 |
hbase_LruAdaptiveBlockCache_getBlock_rdh | /**
* Get the buffer of the block with the specified name.
*
* @param cacheKey
* block's cache key
* @param caching
* true if the caller caches blocks on cache misses
* @param repeat
* Whether this is a repeat lookup for the same block (used to avoid
* double counting cache misses when doing double-che... | 3.26 |
hbase_LruAdaptiveBlockCache_clearCache_rdh | /**
* Clears the cache. Used in tests.
*/
public void clearCache() {
this.map.clear();
this.elements.set(0);
} | 3.26 |
hbase_LruAdaptiveBlockCache_evictBlocksByHfileName_rdh | /**
* Evicts all blocks for a specific HFile. This is an expensive operation implemented as a
* linear-time search through all blocks in the cache. Ideally this should be a search in a
* log-access-time map.
* <p>
* This is used for evict-on-close to remove all blocks of a specific HFile.
*
* @return the number ... | 3.26 |
hbase_LruAdaptiveBlockCache_containsBlock_rdh | /**
 * Whether the cache contains the block with the specified cacheKey
*
* @return true if contains the block
*/
@Override
public boolean containsBlock(BlockCacheKey cacheKey) {
return map.containsKey(cacheKey);
} | 3.26 |
hbase_LruAdaptiveBlockCache_acceptableSize_rdh | // Simple calculators of sizes given factors and maxSize
long acceptableSize() {
  return (long) Math.floor(this.maxSize * this.acceptableFactor);
} | 3.26 |
hbase_LruAdaptiveBlockCache_assertCounterSanity_rdh | /**
* Sanity-checking for parity between actual block cache content and metrics. Intended only for
* use with TRACE level logging and -ea JVM.
*/
private static void assertCounterSanity(long mapSize, long counterVal) {
if (counterVal < 0) {
LOG.trace((("counterVal overflow. Assertions unreliable. counte... | 3.26 |
hbase_LruAdaptiveBlockCache_evictBlock_rdh | /**
 * Evict the block; it will be cached by the victim handler if one exists and the block may be
 * read again later.
*
* @param evictedByEvictionProcess
* true if the given block is evicted by EvictionThread
* @return the heap size of evicted block
*/
protected long evictBlock(LruCachedBlock block, boolean ... | 3.26 |
hbase_LruAdaptiveBlockCache_isEnteringRun_rdh | /**
* Used for the test.
*/
boolean isEnteringRun() {
return this.enteringRun;
} | 3.26 |
hbase_LruAdaptiveBlockCache_evict_rdh | /**
 * Eviction method. Evict items in order of use, deleting the items which haven't been used for
 * the longest amount of time.
*
* @return how many bytes were freed
*/
long evict() {
// Ensure only one eviction at a time
  if (!evictionLock.tryLock()) {
    return 0;
}
long bytesToFree = 0L;
try... | 3.26 |
hbase_LruAdaptiveBlockCache_asReferencedHeapBlock_rdh | /**
 * The block cached in LruAdaptiveBlockCache will always be an on-heap block: on the one hand,
 * heap access is faster than off-heap, so the small index or meta blocks cached in
 * CombinedBlockCache will benefit a lot; on the other hand, the LruAdaptiveBlockCache size is always
 * calculated based on the... | 3.26 |
hbase_LruAdaptiveBlockCache_getStats_rdh | /**
* Get counter statistics for this cache.
* <p>
* Includes: total accesses, hits, misses, evicted blocks, and runs of the eviction processes.
*/
@Override
public CacheStats getStats() {
return this.stats;
} | 3.26 |
hbase_LruAdaptiveBlockCache_runEviction_rdh | /**
* Multi-threaded call to run the eviction process.
*/
private void runEviction() {
if (evictionThread == null) {
evict();
} else {
    evictionThread.m0();
  }
} | 3.26 |
hbase_VersionResource_getVersionResource_rdh | /**
* Dispatch <tt>/version/rest</tt> to self.
*/
@Path("rest")
public VersionResource getVersionResource() {
return this;
} | 3.26 |
hbase_VersionResource_getClusterVersionResource_rdh | /**
* Dispatch to StorageClusterVersionResource
*/
@Path("cluster")
public StorageClusterVersionResource getClusterVersionResource() throws IOException {
return new StorageClusterVersionResource();
} | 3.26 |
hbase_VersionResource_m0_rdh | /**
* Build a response for a version request.
*
* @param context
* servlet context
* @param uriInfo
* (JAX-RS context variable) request URL
* @return a response for a version request
*/
@GET
@Produces({ MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF, MIMETYPE_PROTOBUF_IETF })
public Response ... | 3.26 |
hbase_MetricsIO_getInstance_rdh | /**
* Get a static instance for the MetricsIO so that accessors access the same instance. We want to
* lazy initialize so that correct process name is in place. See HBASE-27966 for more details.
*/
public static MetricsIO getInstance() {
if (instance == null) {
synchronized(MetricsIO.class) {
... | 3.26 |
hbase_ServerName_m0_rdh | /**
* Return {@link #getServerName()} as bytes with a short-sized prefix with the {@link #VERSION} of
* this class.
*/
public synchronized byte[] m0() {
if (this.bytes == null) {
this.bytes = Bytes.add(VERSION_BYTES, Bytes.toBytes(getServerName()));
}
return this.bytes;
} | 3.26 |
hbase_ServerName_getHostNameMinusDomain_rdh | /**
*
* @param hostname
* the hostname string to get the actual hostname from
* @return hostname minus the domain, if there is one (will do pass-through on ip addresses)
*/
private static String getHostNameMinusDomain(final String hostname) {
  if (InetAddresses.isInetAddress(hostname)) {
return ho... | 3.26 |
hbase_ServerName_isFullServerName_rdh | /**
* Returns true if the String follows the pattern of {@link #toString()}, false otherwise.
*/
public static boolean isFullServerName(final String str) {
if ((str == null) || str.isEmpty()) {
return false;
}
return SERVERNAME_PATTERN.matcher(str).matches();
} | 3.26 |
hbase_ServerName_parseServerName_rdh | /**
* Parse a ServerName from a string
*
* @param str
* Either an instance of {@link #toString()} or a "'<hostname>' ':'
* '<port>'".
* @return A ServerName instance.
*/
public static ServerName parseServerName(final String str) {
return SERVERNAME_PATTERN.matcher(str).matches() ? valueOf(str) : ... | 3.26 |
hbase_ServerName_isSameAddress_rdh | /**
* Compare two addresses
*
* @param left
* the first server address to compare
* @param right
* the second server address to compare
* @return {@code true} if {@code left} and {@code right} have the same hostname and port.
*/
public static boolean isSameAddress(final ServerName left, final ServerName ri... | 3.26 |
hbase_ServerName_getStartcode_rdh | /**
* Return the start code.
*
* @deprecated Since 2.5.0, will be removed in 4.0.0. Use {@link #getStartCode()} instead.
*/
@Deprecated
public long getStartcode() {
return startCode;
} | 3.26 |
hbase_ServerName_parseVersionedServerName_rdh | /**
 * Use this method when instantiating a {@link ServerName} from bytes gotten from a call to
* {@link #getVersionedBytes()}. Will take care of the case where bytes were written by an earlier
* version of hbase.
*
* @param versionedBytes
* Pass bytes gotten from a call to {@link #getVersionedBytes()}
* @return A S... | 3.26 |
hbase_ServerName_getStartCode_rdh | /**
* Return the start code.
*/
public long getStartCode() {
return startCode;
} | 3.26 |
hbase_ServerName_valueOf_rdh | /**
* Retrieve an instance of {@link ServerName}. Callers should use the {@link #equals(Object)}
* method to compare returned instances, though we may return a shared immutable object as an
* internal optimization.
*
* @param address
* the {@link Address} to use for getting the {@link ServerName}
* @param star... | 3.26 |
hbase_ServerName_toShortString_rdh | /**
* Return a SHORT version of {@link #toString()}, one that has the host only, minus the domain,
* and the port only -- no start code; the String is for us internally mostly tying threads to
 * their server. Not for external use. It is lossy and will not work in compares, etc.
*/
public String toShortString() {... | 3.26 |
hbase_MultiTableInputFormatBase_createRecordReader_rdh | /**
* Builds a TableRecordReader. If no TableRecordReader was provided, uses the default.
*
* @param split
* The split to work with.
* @param context
* The current context.
* @return The newly created record reader.
* @throws IOException
* When creating the reader fails.
* @throws InterruptedException
... | 3.26 |
hbase_MultiTableInputFormatBase_getScans_rdh | /**
* Allows subclasses to get the list of {@link Scan} objects.
*/
protected List<Scan> getScans() {
return this.f0;
} | 3.26 |
hbase_MultiTableInputFormatBase_setTableRecordReader_rdh | /**
* Allows subclasses to set the {@link TableRecordReader}.
*
* @param tableRecordReader
* A different {@link TableRecordReader} implementation.
*/
protected void setTableRecordReader(TableRecordReader tableRecordReader) {
this.tableRecordReader = tableRecordReader;
} | 3.26 |
hbase_MultiTableInputFormatBase_includeRegionInSplit_rdh | /**
* Test if the given region is to be included in the InputSplit while splitting the regions of a
* table.
* <p>
 * This optimization is effective when there is a specific reason to exclude an entire region
 * from the M-R job (and hence, not contribute to the InputSplit), given the start and end keys
* of ... | 3.26 |
hbase_MultiTableInputFormatBase_getSplits_rdh | /**
* Calculates the splits that will serve as input for the map tasks. The number of splits matches
* the number of regions in a table.
*
* @param context
* The current job context.
* @return The list of input splits.
* @throws IOException
* When creating the list of splits fails.
* @see InputFormat#getSp... | 3.26 |
hbase_MultiTableInputFormatBase_setScans_rdh | /**
* Allows subclasses to set the list of {@link Scan} objects.
*
* @param scans
* The list of {@link Scan} used to define the input
*/
protected void setScans(List<Scan> scans) {
this.f0 = scans;
} | 3.26 |
hbase_ModifyPeerProcedure_nextStateAfterRefresh_rdh | /**
* Implementation class can override this method. By default we will jump to
* POST_PEER_MODIFICATION and finish the procedure.
*/
protected PeerModificationState nextStateAfterRefresh() {
return PeerModificationState.POST_PEER_MODIFICATION;
} | 3.26 |
hbase_ModifyPeerProcedure_enablePeerBeforeFinish_rdh | /**
* The implementation class should override this method if the procedure may enter the serial
* related states.
*/
protected boolean enablePeerBeforeFinish() {
throw new UnsupportedOperationException();
} | 3.26 |
hbase_ModifyPeerProcedure_needReopen_rdh | // If the table is in enabling state, we need to wait until it is enabled and then reopen all its
// regions.
private boolean needReopen(TableStateManager tsm, TableName tn) throws IOException {
  for (;;) {
try {
TableState state = tsm.getTableState(tn);
if (state.isEnabled()) ... | 3.26 |
hbase_ModifyPeerProcedure_reopenRegions_rdh | // will be override in test to simulate error
protected void reopenRegions(MasterProcedureEnv env) throws IOException {
  ReplicationPeerConfig peerConfig = getNewPeerConfig();
  ReplicationPeerConfig oldPeerConfig = getOldPeerConfig();
TableStateManager tsm = env.getMasterServices().getTableStateManager();... | 3.26 |
hbase_SimpleServerRpcConnection_initByteBuffToReadInto_rdh | // It creates the ByteBuff and CallCleanup and assigns them to the Connection instance.
private void initByteBuffToReadInto(int length) {
this.data = rpcServer.bbAllocator.allocate(length);
this.callCleanup = data::release;
} | 3.26 |
hbase_SimpleServerRpcConnection_readAndProcess_rdh | /**
* Read off the wire. If there is not enough data to read, update the connection state with what
 * we have and return.
*
* @return Returns -1 if failure (and caller will close connection), else zero or more.
*/
public int readAndProcess() throws IOException, InterruptedExceptio... | 3.26 |
hbase_SimpleServerRpcConnection_decRpcCount_rdh | /* Decrement the outstanding RPC count */
protected void decRpcCount() {
rpcCount.decrement();
} | 3.26 |
hbase_SimpleServerRpcConnection_process_rdh | /**
* Process the data buffer and clean the connection state for the next call.
*/
private void process() throws IOException, InterruptedException {
  data.rewind();
  try {
    if (skipInitialSaslHandshake) {
      skipInitialSaslHandshake = false;
      return;
    }
    if (useSasl) {
      saslReadAndProcess(data);
... | 3.26 |
hbase_SimpleServerRpcConnection_incRpcCount_rdh | /* Increment the outstanding RPC count */
protected void incRpcCount() {
rpcCount.increment();
} | 3.26 |
hbase_SimpleServerRpcConnection_isIdle_rdh | /* Return true if the connection has no outstanding rpc */
boolean isIdle() {
return rpcCount.sum() == 0;
} | 3.26 |
hbase_SimpleByteRange_shallowCopy_rdh | //
// methods for duplicating the current instance
//
@Override
public ByteRange shallowCopy() {
SimpleByteRange v0 = new SimpleByteRange(bytes, offset, length);
if (isHashCached()) {
v0.hash = hash;
}
return v0;
} | 3.26 |
hbase_SimpleByteRange_unset_rdh | //
@Override
public ByteRange unset() {
throw new ReadOnlyByteRangeException();
} | 3.26 |
hbase_SimpleByteRange_put_rdh | //
// methods for retrieving data
//
@Override
public ByteRange put(int index, byte val) {
throw new ReadOnlyByteRangeException();
} | 3.26 |
hbase_StoreFileWriter_build_rdh | /**
* Create a store file writer. Client is responsible for closing file when done. If metadata,
* add BEFORE closing using {@link StoreFileWriter#appendMetadata}.
*/
public StoreFileWriter build() throws IOException {
if (((dir == null ? 0 : 1) + (filePath == null ? 0 : 1)) != 1) {
throw new IllegalArgumentExceptio... | 3.26 |
hbase_StoreFileWriter_m0_rdh | /**
 * Used when writing {@link HStoreFile#COMPACTION_EVENT_KEY} to the new file's file info. The
 * compacted store files' names are needed. But if a compacted store file is itself a result of
 * compaction, its compacted files which are still not archived are needed, too. And there is no
 * need to add compacted files recursively. If file... | 3.26 |
hbase_StoreFileWriter_withOutputDir_rdh | /**
* Use either this method or {@link #withFilePath}, but not both.
*
* @param dir
 * Path to column family directory. The directory is created if it does not exist. The
* file is given a unique name within this directory.
* @return this (for chained invocation)
 */
public Builder withOutputDir(Path dir) {
Pr... | 3.26 |
hbase_StoreFileWriter_getGeneralBloomWriter_rdh | /**
* For unit testing only.
*
* @return the Bloom filter used by this writer.
*/
BloomFilterWriter getGeneralBloomWriter() {
return generalBloomFilterWriter;
} | 3.26 |
hbase_StoreFileWriter_appendMetadata_rdh | /**
 * Writes meta data. Call before {@link #close()} since it's written as meta data to this file.
*
* @param maxSequenceId
* Maximum sequence id.
* @param majorCompaction
* True if this file is product of a major compaction
* @param mobCellsCount
* The number of mob cells.
* @throws IOException
* prob... | 3.26 |
hbase_StoreFileWriter_appendTrackedTimestampsToMetadata_rdh | /**
* Add TimestampRange and earliest put timestamp to Metadata
*/
public void appendTrackedTimestampsToMetadata() throws IOException {
// TODO: The StoreFileReader always converts the byte[] to TimeRange
// via TimeRangeTracker, so we should write the serialization data of TimeRange directly.
appendFileInfo(TIMERAN... | 3.26 |
hbase_StoreFileWriter_appendMobMetadata_rdh | /**
 * Appends MOB-specific metadata (even if it is empty)
*
* @param mobRefSet
* - original table -> set of MOB file names
* @throws IOException
* problem writing to FS
*/
public void appendMobMetadata(SetMultimap<TableName, String> mobRefSet) throws IOException {
writer.appendFileInfo(MOB_FILE_REFS, MobUt... | 3.26 |
hbase_StoreFileWriter_withMaxKeyCount_rdh | /**
*
* @param maxKeyCount
* estimated maximum number of keys we expect to add
* @return this (for chained invocation)
*/
public Builder withMaxKeyCount(long maxKeyCount) {
this.maxKeyCount = maxKeyCount;
return this;
} | 3.26 |
hbase_StoreFileWriter_trackTimestamps_rdh | /**
 * Record the earliest Put timestamp. If the timeRangeTracker is not set, update TimeRangeTracker
* to include the timestamp of this key
*/
public void trackTimestamps(final Cell cell) {
if (Type.Put.getCode() == cell.getTypeByte()) {
earliestPutTs = Math.min(earliestPutTs, cell.getTimestamp());
}
timeRangeTra... | 3.26 |
hbase_StoreFileWriter_withFilePath_rdh | /**
* Use either this method or {@link #withOutputDir}, but not both.
*
* @param filePath
* the StoreFile path to write
* @return this (for chained invocation)
*/
public Builder withFilePath(Path filePath) {
Preconditions.checkNotNull(filePath);
this.filePath = f... | 3.26 |
hbase_StoreFileWriter_withFavoredNodes_rdh | /**
*
* @param favoredNodes
* an array of favored nodes or possibly null
* @return this (for chained invocation)
*/
public Builder withFavoredNodes(InetSocketAddress[] favoredNodes) {
this.favoredNodes = favoredNodes;
return this;
} | 3.26 |
hbase_StoreFileWriter_getUniqueFile_rdh | /**
*
* @param dir
* Directory to create file in.
* @return random filename inside passed <code>dir</code>
*/
public static Path getUniqueFile(final FileSystem fs, final Path dir) throws IOException {
if (!fs.getFileStatus(dir).isDirectory()) {
throw new IOException(("Expecting " + dir.toString()) + " to be... | 3.26 |
hbase_StoreFileWriter_getHFileWriter_rdh | /**
* For use in testing.
*/
Writer getHFileWriter() {
return writer;
} | 3.26 |
hbase_ReplicationSink_decorateConf_rdh | /**
 * Decorate the Configuration object to make replication more receptive to delays: lessen the
* timeout and numTries.
*/
private void decorateConf() {
this.conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, this.conf.getInt("replication.sink.client.retries.number", 4));
this.conf.setInt(HConstants.HBASE_... | 3.26 |
hbase_ReplicationSink_stopReplicationSinkServices_rdh | /**
* stop the thread pool executor. It is called when the regionserver is stopped.
*/
public void stopReplicationSinkServices() {
  try {
    if (this.sharedConn != null) {
      synchronized (sharedConnLock) {
        if (this.sharedConn != null) {
          this.sharedConn.close();
          this.sharedConn = null;
        }
      }
    }
} catch (IOException e) {
LOG.war... | 3.26 |
hbase_ReplicationSink_getSinkMetrics_rdh | /**
* Get replication Sink Metrics
*/
public MetricsSink getSinkMetrics() {
return this.metrics;
} | 3.26 |
hbase_ReplicationSink_addToHashMultiMap_rdh | /**
 * Simple helper to add to a map from keys to (a list of) values. TODO: Make a general utility method.
*
* @return the list of values corresponding to key1 and key2
*/
private <K1, K2, V> List<V> addToHashMultiMap(Map<K1, Map<K2, List<V>>> map, K1 key1, K2 key2, V value) {
Map<K2, List<V>> innerMap = map.computeIfAbsent(k... | 3.26 |
hbase_ReplicationSink_batch_rdh | /**
* Do the changes and handle the pool
*
* @param tableName
* table to insert into
* @param allRows
* list of actions
* @param batchRowSizeThreshold
* rowSize threshold for batch mutation
*/
private void batch(TableName tableName, Collection<List<Row>> allRows, int batchRowSizeThreshold) throws IOExcep... | 3.26 |
hbase_ReplicationSink_isNewRowOrType_rdh | /**
* Returns True if we have crossed over onto a new row or type
*/
private boolean isNewRowOrType(final Cell previousCell, final Cell cell) {
return ((previousCell == null) || (previousCell.getTypeByte() != cell.getTypeByte())) || (!CellUtil.matchingRows(previousCell, cell));
} | 3.26 |
hbase_ReplicationSink_replicateEntries_rdh | /**
* Replicate this array of entries directly into the local cluster using the native client. Only
 * operates against the raw protobuf type, saving a conversion from pb to pojo.
*
* @param entries
* WAL entries to be replicated.
* @param cells
* cell scanner for iteration.
... | 3.26 |
hbase_Hash_parseHashType_rdh | /**
 * This utility method converts the String representation of a hash function name to a symbolic
 * constant. Currently three function types are supported: "jenkins", "murmur", and "murmur3".
*
* @param name
* hash function name
* @return one of the predefined constants
*/
public static int parseHashType(String name... | 3.26 |
hbase_Hash_getInstance_rdh | /**
* Get a singleton instance of hash function of a type defined in the configuration.
*
* @param conf
* current configuration
* @return defined hash type, or null if type is invalid
*/
public static Hash getInstance(Configuration conf) {
int type = getHashType(conf);
return getInstance(type);
} | 3.26 |