function_name: string (length 1–57)
function_code: string (length 20–4.99k)
documentation: string (length 50–2k)
language: string (5 classes)
file_path: string (length 8–166)
line_number: int32 (4–16.7k)
parameters: list (length 0–20)
return_type: string (length 0–131, may be empty)
has_type_hints: bool (2 classes)
complexity: int32 (1–51)
quality_score: float32 (6–9.68)
repo_name: string (34 classes)
repo_stars: int32 (2.9k–242k)
docstring_style: string (7 classes)
is_async: bool (2 classes)
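A minimal sketch of one record in this schema, modeled as a Python dataclass (field names and ranges taken from the listing above; the concrete Python types are assumptions mapped from the declared dtypes). The example instance is built from the first record below:

```python
from dataclasses import dataclass

@dataclass
class FunctionRecord:
    function_name: str    # 1-57 chars
    function_code: str    # 20-4.99k chars
    documentation: str    # 50-2k chars
    language: str         # one of 5 classes
    file_path: str        # 8-166 chars
    line_number: int      # int32, 4-16.7k
    parameters: list      # 0-20 entries
    return_type: str      # 0-131 chars, may be empty
    has_type_hints: bool
    complexity: int       # int32, 1-51
    quality_score: float  # float32, 6-9.68
    repo_name: str        # one of 34 repos
    repo_stars: int       # int32, 2.9k-242k
    docstring_style: str  # one of 7 classes
    is_async: bool

# First record from the listing below (return_type is empty for this row).
record = FunctionRecord(
    function_name="size",
    function_code=(
        "long size() { return MINIMUM_SIZE + fileNameLength() "
        "+ extraFieldLength() + fileCommentLength(); }"
    ),
    documentation="Return the size of this record. @return the record size",
    language="java",
    file_path=(
        "loader/spring-boot-loader/src/main/java/org/springframework/"
        "boot/loader/zip/ZipCentralDirectoryFileHeaderRecord.java"
    ),
    line_number=74,
    parameters=[],
    return_type="",
    has_type_hints=True,
    complexity=1,
    quality_score=6.8,
    repo_name="spring-projects/spring-boot",
    repo_stars=79428,
    docstring_style="javadoc",
    is_async=False,
)
```

Records in the listing follow this field order positionally; rows with an empty `return_type` simply omit that line.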
size
long size() { return MINIMUM_SIZE + fileNameLength() + extraFieldLength() + fileCommentLength(); }
Return the size of this record. @return the record size
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/ZipCentralDirectoryFileHeaderRecord.java
74
[]
true
1
6.8
spring-projects/spring-boot
79,428
javadoc
false
ensureCoordinatorReady
protected synchronized boolean ensureCoordinatorReady(final Timer timer) { return ensureCoordinatorReady(timer, false); }
Ensure that the coordinator is ready to receive requests. @param timer Timer bounding how long this method can block @return true If coordinator discovery and initial connection succeeded, false otherwise
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
269
[ "timer" ]
true
1
6.64
apache/kafka
31,560
javadoc
false
equals
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } if (obj instanceof Triple<?, ?, ?>) { final Triple<?, ?, ?> other = (Triple<?, ?, ?>) obj; return Objects.equals(getLeft(), other.getLeft()) && Objects.equals(getMiddle(), other.getMiddle()) && Objects.equals(getRight(), other.getRight()); } return false; }
Compares this triple to another based on the three elements. @param obj the object to compare to, null returns false. @return true if the elements of the triple are equal.
java
src/main/java/org/apache/commons/lang3/tuple/Triple.java
128
[ "obj" ]
true
5
8.24
apache/commons-lang
2,896
javadoc
false
assert_allclose_dense_sparse
def assert_allclose_dense_sparse(x, y, rtol=1e-07, atol=1e-9, err_msg=""): """Assert allclose for sparse and dense data. Both x and y need to be either sparse or dense, they can't be mixed. Parameters ---------- x : {array-like, sparse matrix} First array to compare. y : {array-like, sparse matrix} Second array to compare. rtol : float, default=1e-07 relative tolerance; see numpy.allclose. atol : float, default=1e-9 absolute tolerance; see numpy.allclose. Note that the default here is more tolerant than the default for numpy.testing.assert_allclose, where atol=0. err_msg : str, default='' Error message to raise. """ if sp.sparse.issparse(x) and sp.sparse.issparse(y): x = x.tocsr() y = y.tocsr() x.sum_duplicates() y.sum_duplicates() assert_array_equal(x.indices, y.indices, err_msg=err_msg) assert_array_equal(x.indptr, y.indptr, err_msg=err_msg) assert_allclose(x.data, y.data, rtol=rtol, atol=atol, err_msg=err_msg) elif not sp.sparse.issparse(x) and not sp.sparse.issparse(y): # both dense assert_allclose(x, y, rtol=rtol, atol=atol, err_msg=err_msg) else: raise ValueError( "Can only compare two sparse matrices, not a sparse matrix and an array." )
Assert allclose for sparse and dense data. Both x and y need to be either sparse or dense, they can't be mixed. Parameters ---------- x : {array-like, sparse matrix} First array to compare. y : {array-like, sparse matrix} Second array to compare. rtol : float, default=1e-07 relative tolerance; see numpy.allclose. atol : float, default=1e-9 absolute tolerance; see numpy.allclose. Note that the default here is more tolerant than the default for numpy.testing.assert_allclose, where atol=0. err_msg : str, default='' Error message to raise.
python
sklearn/utils/_testing.py
244
[ "x", "y", "rtol", "atol", "err_msg" ]
false
6
6.24
scikit-learn/scikit-learn
64,340
numpy
false
isListOpener
function isListOpener(token: Node | undefined): token is Node { const kind = token && token.kind; return kind === SyntaxKind.OpenBraceToken || kind === SyntaxKind.OpenBracketToken || kind === SyntaxKind.OpenParenToken || kind === SyntaxKind.JsxOpeningElement; }
Splits sibling nodes into up to four partitions: 1) everything left of the first node matched by `pivotOn`, 2) the first node matched by `pivotOn`, 3) everything right of the first node matched by `pivotOn`, 4) a trailing semicolon, if `separateTrailingSemicolon` is enabled. The left and right groups, if not empty, will each be grouped into their own containing SyntaxList. @param children The sibling nodes to split. @param pivotOn The predicate function to match the node to be the pivot. The first node that matches the predicate will be used; any others that may match will be included into the right-hand group. @param separateTrailingSemicolon If the last token is a semicolon, it will be returned as a separate child rather than be included in the right-hand group.
typescript
src/services/smartSelection.ts
344
[ "token" ]
false
5
6.08
microsoft/TypeScript
107,154
jsdoc
false
registerCustomEditor
public void registerCustomEditor(Class<?> requiredType, PropertyEditor propertyEditor) { TypeConverter converter = getTypeConverter(); if (!(converter instanceof PropertyEditorRegistry registry)) { throw new IllegalStateException( "TypeConverter does not implement PropertyEditorRegistry interface: " + converter); } registry.registerCustomEditor(requiredType, propertyEditor); }
Register the given custom property editor for all properties of the given type. <p>Typically used in conjunction with the default {@link org.springframework.beans.SimpleTypeConverter}; will work with any TypeConverter that implements the PropertyEditorRegistry interface as well. @param requiredType type of the property @param propertyEditor editor to register @see #setTypeConverter @see org.springframework.beans.PropertyEditorRegistry#registerCustomEditor
java
spring-beans/src/main/java/org/springframework/beans/support/ArgumentConvertingMethodInvoker.java
98
[ "requiredType", "propertyEditor" ]
void
true
2
6.08
spring-projects/spring-framework
59,386
javadoc
false
init_RISCV_64Bit
private static void init_RISCV_64Bit() { addProcessors(new Processor(Processor.Arch.BIT_64, Processor.Type.RISC_V), "riscv64"); }
Gets a {@link Processor} object the given value {@link String}. The {@link String} must be like a value returned by the {@code "os.arch"} system property. @param value A {@link String} like a value returned by the {@code os.arch} System Property. @return A {@link Processor} when it exists, else {@code null}.
java
src/main/java/org/apache/commons/lang3/ArchUtils.java
127
[]
void
true
1
6.96
apache/commons-lang
2,896
javadoc
false
getMessageInternal
protected @Nullable String getMessageInternal(@Nullable String code, Object @Nullable [] args, @Nullable Locale locale) { if (code == null) { return null; } if (locale == null) { locale = Locale.getDefault(); } Object[] argsToUse = args; if (!isAlwaysUseMessageFormat() && ObjectUtils.isEmpty(args)) { // Optimized resolution: no arguments to apply, // therefore no MessageFormat needs to be involved. // Note that the default implementation still uses MessageFormat; // this can be overridden in specific subclasses. String message = resolveCodeWithoutArguments(code, locale); if (message != null) { return message; } } else { // Resolve arguments eagerly, for the case where the message // is defined in a parent MessageSource but resolvable arguments // are defined in the child MessageSource. argsToUse = resolveArguments(args, locale); MessageFormat messageFormat = resolveCode(code, locale); if (messageFormat != null) { synchronized (messageFormat) { return messageFormat.format(argsToUse); } } } // Check locale-independent common messages for the given message code. Properties commonMessages = getCommonMessages(); if (commonMessages != null) { String commonMessage = commonMessages.getProperty(code); if (commonMessage != null) { return formatMessage(commonMessage, args, locale); } } // Not found -> check parent, if any. return getMessageFromParent(code, argsToUse, locale); }
Resolve the given code and arguments as message in the given Locale, returning {@code null} if not found. Does <i>not</i> fall back to the code as default message. Invoked by {@code getMessage} methods. @param code the code to lookup up, such as 'calculator.noRateSet' @param args array of arguments that will be filled in for params within the message @param locale the locale in which to do the lookup @return the resolved message, or {@code null} if not found @see #getMessage(String, Object[], String, Locale) @see #getMessage(String, Object[], Locale) @see #getMessage(MessageSourceResolvable, Locale) @see #setUseCodeAsDefaultMessage
java
spring-context/src/main/java/org/springframework/context/support/AbstractMessageSource.java
205
[ "code", "args", "locale" ]
String
true
9
8.08
spring-projects/spring-framework
59,386
javadoc
false
replaceIn
public boolean replaceIn(final StrBuilder source, final int offset, final int length) { if (source == null) { return false; } return substitute(source, offset, length); }
Replaces all the occurrences of variables within the given source builder with their matching values from the resolver. <p> Only the specified portion of the builder will be processed. The rest of the builder is not processed, but it is not deleted. </p> @param source the builder to replace in, null returns zero. @param offset the start offset within the array, must be valid. @param length the length within the builder to be processed, must be valid. @return true if altered.
java
src/main/java/org/apache/commons/lang3/text/StrSubstitutor.java
749
[ "source", "offset", "length" ]
true
2
8.24
apache/commons-lang
2,896
javadoc
false
getCacheKey
protected Object getCacheKey(Class<?> beanClass, @Nullable String beanName) { if (StringUtils.hasLength(beanName)) { return new ComposedCacheKey(beanClass, beanName); } else { return beanClass; } }
Build a cache key for the given bean class and bean name. <p>Note: As of 7.0.2, this implementation returns a composed cache key for bean class plus bean name; or if no bean name specified, then the given bean {@code Class} as-is. @param beanClass the bean class @param beanName the bean name @return the cache key for the given class and name
java
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/AbstractAutoProxyCreator.java
305
[ "beanClass", "beanName" ]
Object
true
2
7.92
spring-projects/spring-framework
59,386
javadoc
false
thumbprint
def thumbprint(jwk: dict[str, Any], hashalg=hashes.SHA256()) -> str: """ Return the key thumbprint as specified by RFC 7638. :param hashalg: A hash function (defaults to SHA256) :return: A base64url encoded digest of the key """ digest = hashes.Hash(hashalg, backend=default_backend()) jsonstr = json.dumps(jwk, separators=(",", ":"), sort_keys=True) digest.update(jsonstr.encode("utf8")) return base64url_encode(digest.finalize())
Return the key thumbprint as specified by RFC 7638. :param hashalg: A hash function (defaults to SHA256) :return: A base64url encoded digest of the key
python
airflow-core/src/airflow/api_fastapi/auth/tokens.py
495
[ "jwk", "hashalg" ]
str
true
1
7.04
apache/airflow
43,597
sphinx
false
supportsSkippingAssignment
public static boolean supportsSkippingAssignment(int apiVersion) { return apiVersion >= 9; }
Starting from version 9 of the JoinGroup API, static members are able to skip running the assignor based on the `SkipAssignment` field. We leverage this to tell the leader that it is the leader of the group but by skipping running the assignor while the group is in stable state. Notes: 1) This allows the leader to continue monitoring metadata changes for the group. Note that any metadata changes happening while the static leader is down won't be noticed. 2) The assignors are not idempotent nor free from side effects. This is why we skip entirely the assignment step as it could generate a different group assignment which would be ignored by the group coordinator because the group is the stable state. Prior to version 9 of the JoinGroup API, we wanted to avoid current leader performing trivial assignment while the group is in stable stage, because the new assignment in leader's next sync call won't be broadcast by a stable group. This could be guaranteed by always returning the old leader id so that the current leader won't assume itself as a leader based on the returned message, since the new member.id won't match returned leader id, therefore no assignment will be performed. @param apiVersion The JoinGroupRequest api version. @return whether the version supports skipping assignment.
java
clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java
153
[ "apiVersion" ]
true
1
6.64
apache/kafka
31,560
javadoc
false
send
public void send(NetworkSend send) { String connectionId = send.destinationId(); KafkaChannel channel = openOrClosingChannelOrFail(connectionId); if (closingChannels.containsKey(connectionId)) { // ensure notification via `disconnected`, leave channel in the state in which closing was triggered this.failedSends.add(connectionId); } else { try { channel.setSend(send); } catch (Exception e) { // update the state for consistency, the channel will be discarded after `close` channel.state(ChannelState.FAILED_SEND); // ensure notification via `disconnected` when `failedSends` are processed in the next poll this.failedSends.add(connectionId); close(channel, CloseMode.DISCARD_NO_NOTIFY); if (!(e instanceof CancelledKeyException)) { log.error("Unexpected exception during send, closing connection {} and rethrowing exception.", connectionId, e); throw e; } } } }
Queue the given request for sending in the subsequent {@link #poll(long)} calls @param send The request to send
java
clients/src/main/java/org/apache/kafka/common/network/Selector.java
391
[ "send" ]
void
true
4
6.72
apache/kafka
31,560
javadoc
false
_validate_queue_type
def _validate_queue_type(self, queue_type: Optional[str]) -> None: """Validate the queue type configuration. Args: queue_type: The configured queue type Raises: ValueError: If queue type is invalid """ if not queue_type: raise ValueError("broker_native_delayed_delivery_queue_type is not configured") if queue_type not in VALID_QUEUE_TYPES: sorted_types = sorted(VALID_QUEUE_TYPES) raise ValueError( f"Invalid queue type {queue_type!r}. Must be one of: {', '.join(sorted_types)}" )
Validate the queue type configuration. Args: queue_type: The configured queue type Raises: ValueError: If queue type is invalid
python
celery/worker/consumer/delayed_delivery.py
258
[ "self", "queue_type" ]
None
true
3
6.24
celery/celery
27,741
google
false
openBufferedStream
public Writer openBufferedStream() throws IOException { Writer writer = openStream(); return (writer instanceof BufferedWriter) ? (BufferedWriter) writer : new BufferedWriter(writer); }
Opens a new buffered {@link Writer} for writing to this sink. The returned stream is not required to be a {@link BufferedWriter} in order to allow implementations to simply delegate to {@link #openStream()} when the stream returned by that method does not benefit from additional buffering. This method returns a new, independent writer each time it is called. <p>The caller is responsible for ensuring that the returned writer is closed. @throws IOException if an I/O error occurs while opening the writer @since 15.0 (in 14.0 with return type {@link BufferedWriter})
java
android/guava/src/com/google/common/io/CharSink.java
82
[]
Writer
true
2
6.72
google/guava
51,352
javadoc
false
fire
public L fire() { return proxy; }
Returns a proxy object which can be used to call listener methods on all of the registered event listeners. All calls made to this proxy will be forwarded to all registered listeners. @return a proxy object which can be used to call listener methods on all of the registered event listeners
java
src/main/java/org/apache/commons/lang3/event/EventListenerSupport.java
276
[]
L
true
1
6.8
apache/commons-lang
2,896
javadoc
false
handleCompletedReceives
private void handleCompletedReceives(List<ClientResponse> responses, long now) { for (NetworkReceive receive : this.selector.completedReceives()) { String source = receive.source(); InFlightRequest req = inFlightRequests.completeNext(source); AbstractResponse response = parseResponse(receive.payload(), req.header); if (throttleTimeSensor != null) throttleTimeSensor.record(response.throttleTimeMs(), now); if (log.isDebugEnabled()) { log.debug("Received {} response from node {} for request with header {}: {}", req.header.apiKey(), req.destination, req.header, response); } // If the received response includes a throttle delay, throttle the connection. maybeThrottle(response, req.header.apiVersion(), req.destination, now); if (req.isInternalRequest && response instanceof MetadataResponse) metadataUpdater.handleSuccessfulResponse(req.header, now, (MetadataResponse) response); else if (req.isInternalRequest && response instanceof ApiVersionsResponse) handleApiVersionsResponse(responses, req, now, (ApiVersionsResponse) response); else if (req.isInternalRequest && response instanceof GetTelemetrySubscriptionsResponse) telemetrySender.handleResponse((GetTelemetrySubscriptionsResponse) response); else if (req.isInternalRequest && response instanceof PushTelemetryResponse) telemetrySender.handleResponse((PushTelemetryResponse) response); else responses.add(req.completed(response, now)); } }
Handle any completed receives and update the response list with the responses received. @param responses The list of responses to update @param now The current time
java
clients/src/main/java/org/apache/kafka/clients/NetworkClient.java
994
[ "responses", "now" ]
void
true
11
7.04
apache/kafka
31,560
javadoc
false
fromstring
def fromstring(datastring, dtype=None, shape=None, offset=0, formats=None, names=None, titles=None, aligned=False, byteorder=None): r"""Create a record array from binary data Note that despite the name of this function it does not accept `str` instances. Parameters ---------- datastring : bytes-like Buffer of binary data dtype : data-type, optional Valid dtype for all arrays shape : int or tuple of ints, optional Shape of each array. offset : int, optional Position in the buffer to start reading from. formats, names, titles, aligned, byteorder : If `dtype` is ``None``, these arguments are passed to `numpy.format_parser` to construct a dtype. See that function for detailed documentation. Returns ------- np.recarray Record array view into the data in datastring. This will be readonly if `datastring` is readonly. See Also -------- numpy.frombuffer Examples -------- >>> a = b'\x01\x02\x03abc' >>> np.rec.fromstring(a, dtype='u1,u1,u1,S3') rec.array([(1, 2, 3, b'abc')], dtype=[('f0', 'u1'), ('f1', 'u1'), ('f2', 'u1'), ('f3', 'S3')]) >>> grades_dtype = [('Name', (np.str_, 10)), ('Marks', np.float64), ... ('GradeLevel', np.int32)] >>> grades_array = np.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ... ('Aadi', 66.6, 6)], dtype=grades_dtype) >>> np.rec.fromstring(grades_array.tobytes(), dtype=grades_dtype) rec.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ('Aadi', 66.6, 6)], dtype=[('Name', '<U10'), ('Marks', '<f8'), ('GradeLevel', '<i4')]) >>> s = '\x01\x02\x03abc' >>> np.rec.fromstring(s, dtype='u1,u1,u1,S3') Traceback (most recent call last): ... TypeError: a bytes-like object is required, not 'str' """ if dtype is None and formats is None: raise TypeError("fromstring() needs a 'dtype' or 'formats' argument") if dtype is not None: descr = sb.dtype(dtype) else: descr = format_parser(formats, names, titles, aligned, byteorder).dtype itemsize = descr.itemsize # NumPy 1.19.0, 2020-01-01 shape = _deprecate_shape_0_as_None(shape) if shape in (None, -1): shape = (len(datastring) - offset) // itemsize _array = recarray(shape, descr, buf=datastring, offset=offset) return _array
r"""Create a record array from binary data Note that despite the name of this function it does not accept `str` instances. Parameters ---------- datastring : bytes-like Buffer of binary data dtype : data-type, optional Valid dtype for all arrays shape : int or tuple of ints, optional Shape of each array. offset : int, optional Position in the buffer to start reading from. formats, names, titles, aligned, byteorder : If `dtype` is ``None``, these arguments are passed to `numpy.format_parser` to construct a dtype. See that function for detailed documentation. Returns ------- np.recarray Record array view into the data in datastring. This will be readonly if `datastring` is readonly. See Also -------- numpy.frombuffer Examples -------- >>> a = b'\x01\x02\x03abc' >>> np.rec.fromstring(a, dtype='u1,u1,u1,S3') rec.array([(1, 2, 3, b'abc')], dtype=[('f0', 'u1'), ('f1', 'u1'), ('f2', 'u1'), ('f3', 'S3')]) >>> grades_dtype = [('Name', (np.str_, 10)), ('Marks', np.float64), ... ('GradeLevel', np.int32)] >>> grades_array = np.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ... ('Aadi', 66.6, 6)], dtype=grades_dtype) >>> np.rec.fromstring(grades_array.tobytes(), dtype=grades_dtype) rec.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ('Aadi', 66.6, 6)], dtype=[('Name', '<U10'), ('Marks', '<f8'), ('GradeLevel', '<i4')]) >>> s = '\x01\x02\x03abc' >>> np.rec.fromstring(s, dtype='u1,u1,u1,S3') Traceback (most recent call last): ... TypeError: a bytes-like object is required, not 'str'
python
numpy/_core/records.py
753
[ "datastring", "dtype", "shape", "offset", "formats", "names", "titles", "aligned", "byteorder" ]
false
6
7.44
numpy/numpy
31,054
numpy
false
forEach
function forEach(collection, iteratee) { var func = isArray(collection) ? arrayEach : baseEach; return func(collection, getIteratee(iteratee, 3)); }
Iterates over elements of `collection` and invokes `iteratee` for each element. The iteratee is invoked with three arguments: (value, index|key, collection). Iteratee functions may exit iteration early by explicitly returning `false`. **Note:** As with other "Collections" methods, objects with a "length" property are iterated like arrays. To avoid this behavior use `_.forIn` or `_.forOwn` for object iteration. @static @memberOf _ @since 0.1.0 @alias each @category Collection @param {Array|Object} collection The collection to iterate over. @param {Function} [iteratee=_.identity] The function invoked per iteration. @returns {Array|Object} Returns `collection`. @see _.forEachRight @example _.forEach([1, 2], function(value) { console.log(value); }); // => Logs `1` then `2`. _.forEach({ 'a': 1, 'b': 2 }, function(value, key) { console.log(key); }); // => Logs 'a' then 'b' (iteration order is not guaranteed).
javascript
lodash.js
9,447
[ "collection", "iteratee" ]
false
2
6.96
lodash/lodash
61,490
jsdoc
false
prepare_key
def prepare_key( gm: torch.fx.GraphModule, example_inputs: Sequence[InputType], fx_kwargs: _CompileFxKwargs, inputs_to_check: Sequence[int], remote: bool, ) -> tuple[tuple[str, list[str]] | None, dict[str, Any]]: """ Checks that the inductor input is cacheable, then computes and returns the cache key for the input. Returns (key_info, cache_info) where: - key_info is (hash_key, debug_lines), and - cache_info will contain debug info in the event of BypassFxGraphCache. NB: It is possible to have this function return a union instead. But I personally believe it is more annoying/difficult to read in that format. """ try: FxGraphCache._check_can_cache(gm) key, debug_lines = compiled_fx_graph_hash( gm, example_inputs, fx_kwargs, inputs_to_check ) except BypassFxGraphCache as e: counters["inductor"]["fxgraph_cache_bypass"] += 1 log.info("Bypassing FX Graph Cache because '%s'", e) # noqa: G200 if remote: log_cache_bypass("bypass_fx_graph", str(e)) cache_info = { "cache_state": "bypass", "cache_bypass_reason": str(e), "cache_event_time": time_ns(), } return None, cache_info # If key exists, then cache_info will come from load_with_key return (key, debug_lines), {}
Checks that the inductor input is cacheable, then computes and returns the cache key for the input. Returns (key_info, cache_info) where: - key_info is (hash_key, debug_lines), and - cache_info will contain debug info in the event of BypassFxGraphCache. NB: It is possible to have this function return a union instead. But I personally believe it is more annoying/difficult to read in that format.
python
torch/_inductor/codecache.py
1,498
[ "gm", "example_inputs", "fx_kwargs", "inputs_to_check", "remote" ]
tuple[tuple[str, list[str]] | None, dict[str, Any]]
true
2
6.88
pytorch/pytorch
96,034
unknown
false
getPointOfLeastRelativeError
public static double getPointOfLeastRelativeError(long bucketIndex, int scale) { checkIndexAndScaleBounds(bucketIndex, scale); double histogramBase = Math.pow(2, Math.scalb(1, -scale)); if (Double.isFinite(histogramBase)) { double upperBound = getUpperBucketBoundary(bucketIndex, scale); return 2 / (histogramBase + 1) * upperBound; } else { if (bucketIndex >= 0) { // the bucket is (1, +inf), approximate point of least error as inf return Double.POSITIVE_INFINITY; } else { // the bucket is (1/(Inf), 1), approximate point of least error as 0 return 0; } } }
For a bucket with the given index, computes the point {@code x} in the bucket such that {@code (x - l) / l} equals {@code (u - x) / u}, where {@code l} is the lower bucket boundary and {@code u} is the upper bucket boundary. <br> In other words, we select the point in the bucket that has the least relative error with respect to any other point in the bucket. @param bucketIndex the index of the bucket @param scale the scale of the bucket @return the point of least relative error
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialScaleUtils.java
245
[ "bucketIndex", "scale" ]
true
3
8.08
elastic/elasticsearch
75,680
javadoc
false
registerBeanDefinitions
public int registerBeanDefinitions(Map<?, ?> map, @Nullable String prefix) throws BeansException { return registerBeanDefinitions(map, prefix, "Map " + map); }
Register bean definitions contained in a Map. Ignore ineligible properties. @param map a map of {@code name} to {@code property} (String or Object). Property values will be strings if coming from a Properties file etc. Property names (keys) <b>must</b> be Strings. Class keys must be Strings. @param prefix a filter within the keys in the map: for example, 'beans.' (can be empty or {@code null}) @return the number of bean definitions found @throws BeansException in case of loading or parsing errors
java
spring-beans/src/main/java/org/springframework/beans/factory/support/PropertiesBeanDefinitionReader.java
338
[ "map", "prefix" ]
true
1
6.8
spring-projects/spring-framework
59,386
javadoc
false
equals
@Override public boolean equals(final Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; final Generation that = (Generation) o; return generationId == that.generationId && Objects.equals(memberId, that.memberId) && Objects.equals(protocolName, that.protocolName); }
@return true if this generation has a valid member id, false otherwise. A member might have an id before it becomes part of a group generation.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
1,610
[ "o" ]
true
6
7.04
apache/kafka
31,560
javadoc
false
asRunnable
public static Runnable asRunnable(final FailableRunnable<?> runnable) { return () -> run(runnable); }
Converts the given {@link FailableRunnable} into a standard {@link Runnable}. @param runnable a {@link FailableRunnable} @return a standard {@link Runnable} @since 3.10
java
src/main/java/org/apache/commons/lang3/Functions.java
439
[ "runnable" ]
Runnable
true
1
6.16
apache/commons-lang
2,896
javadoc
false
is_due
def is_due(self, last_run_at: datetime) -> tuple[bool, datetime]: """Return tuple of ``(is_due, next_time_to_run)``. Note: next time to run is in seconds. See Also: :meth:`celery.schedules.schedule.is_due` for more information. """ rem_delta = self.remaining_estimate(last_run_at) rem = max(rem_delta.total_seconds(), 0) due = rem == 0 if due: rem_delta = self.remaining_estimate(self.now()) rem = max(rem_delta.total_seconds(), 0) return schedstate(due, rem)
Return tuple of ``(is_due, next_time_to_run)``. Note: next time to run is in seconds. See Also: :meth:`celery.schedules.schedule.is_due` for more information.
python
celery/schedules.py
863
[ "self", "last_run_at" ]
tuple[bool, datetime]
true
2
6.4
celery/celery
27,741
unknown
false
get_window_bounds
def get_window_bounds( self, num_values: int = 0, min_periods: int | None = None, center: bool | None = None, closed: str | None = None, step: int | None = None, ) -> tuple[np.ndarray, np.ndarray]: """ Computes the bounds of a window. Parameters ---------- num_values : int, default 0 number of values that will be aggregated over window_size : int, default 0 the number of rows in a window min_periods : int, default None min_periods passed from the top level rolling API center : bool, default None center passed from the top level rolling API closed : str, default None closed passed from the top level rolling API step : int, default None step passed from the top level rolling API win_type : str, default None win_type passed from the top level rolling API Returns ------- A tuple of ndarray[int64]s, indicating the boundaries of each window """ if center or self.window_size == 0: offset = (self.window_size - 1) // 2 else: offset = 0 end = np.arange(1 + offset, num_values + 1 + offset, step, dtype="int64") start = end - self.window_size if closed in ["left", "both"]: start -= 1 if closed in ["left", "neither"]: end -= 1 end = np.clip(end, 0, num_values) start = np.clip(start, 0, num_values) return start, end
Computes the bounds of a window. Parameters ---------- num_values : int, default 0 number of values that will be aggregated over window_size : int, default 0 the number of rows in a window min_periods : int, default None min_periods passed from the top level rolling API center : bool, default None center passed from the top level rolling API closed : str, default None closed passed from the top level rolling API step : int, default None step passed from the top level rolling API win_type : str, default None win_type passed from the top level rolling API Returns ------- A tuple of ndarray[int64]s, indicating the boundaries of each window
python
pandas/core/indexers/objects.py
111
[ "self", "num_values", "min_periods", "center", "closed", "step" ]
tuple[np.ndarray, np.ndarray]
true
6
6.4
pandas-dev/pandas
47,362
numpy
false
map
private static <T, R, E extends Throwable> R[] map(final T[] array, final Class<R> componentType, final FailableFunction<? super T, ? extends R, E> mapper) throws E { return ArrayFill.fill(newInstance(componentType, array.length), i -> mapper.apply(array[i])); }
Maps elements from an array into elements of a new array of a given type, while mapping old elements to new elements. @param <T> The input array type. @param <R> The output array type. @param <E> The type of exceptions thrown when the mapper function fails. @param array The input array. @param componentType the component type of the result array. @param mapper a non-interfering, stateless function to apply to each element. @return a new array. @throws E Thrown when the mapper function fails.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
4,220
[ "array", "componentType", "mapper" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
take
def take( self, indices: TakeIndexer, **kwargs, ) -> Series: """ Return the elements in the given *positional* indices in each group. This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object. If a requested index does not exist for some group, this method will raise. To get similar behavior that ignores indices that don't exist, see :meth:`.SeriesGroupBy.nth`. Parameters ---------- indices : array-like An array of ints indicating which positions to take in each group. **kwargs For compatibility with :meth:`numpy.take`. Has no effect on the output. Returns ------- Series A Series containing the elements taken from each group. See Also -------- Series.take : Take elements from a Series along an axis. Series.loc : Select a subset of a DataFrame by labels. Series.iloc : Select a subset of a DataFrame by positions. numpy.take : Take elements from an array along an axis. SeriesGroupBy.nth : Similar to take, won't raise if indices don't exist. Examples -------- >>> df = pd.DataFrame( ... [ ... ("falcon", "bird", 389.0), ... ("parrot", "bird", 24.0), ... ("lion", "mammal", 80.5), ... ("monkey", "mammal", np.nan), ... ("rabbit", "mammal", 15.0), ... ], ... columns=["name", "class", "max_speed"], ... index=[4, 3, 2, 1, 0], ... ) >>> df name class max_speed 4 falcon bird 389.0 3 parrot bird 24.0 2 lion mammal 80.5 1 monkey mammal NaN 0 rabbit mammal 15.0 >>> gb = df["name"].groupby([1, 1, 2, 2, 2]) Take elements at rows 0 and 1 in each group. >>> gb.take([0, 1]) 1 4 falcon 3 parrot 2 2 lion 1 monkey Name: name, dtype: object We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists. >>> gb.take([-1, -2]) 1 3 parrot 4 falcon 2 0 rabbit 1 monkey Name: name, dtype: object """ result = self._op_via_apply("take", indices=indices, **kwargs) return result
Return the elements in the given *positional* indices in each group. This means that we are not indexing according to actual values in the index attribute of the object. We are indexing according to the actual position of the element in the object. If a requested index does not exist for some group, this method will raise. To get similar behavior that ignores indices that don't exist, see :meth:`.SeriesGroupBy.nth`. Parameters ---------- indices : array-like An array of ints indicating which positions to take in each group. **kwargs For compatibility with :meth:`numpy.take`. Has no effect on the output. Returns ------- Series A Series containing the elements taken from each group. See Also -------- Series.take : Take elements from a Series along an axis. Series.loc : Select a subset of a DataFrame by labels. Series.iloc : Select a subset of a DataFrame by positions. numpy.take : Take elements from an array along an axis. SeriesGroupBy.nth : Similar to take, won't raise if indices don't exist. Examples -------- >>> df = pd.DataFrame( ... [ ... ("falcon", "bird", 389.0), ... ("parrot", "bird", 24.0), ... ("lion", "mammal", 80.5), ... ("monkey", "mammal", np.nan), ... ("rabbit", "mammal", 15.0), ... ], ... columns=["name", "class", "max_speed"], ... index=[4, 3, 2, 1, 0], ... ) >>> df name class max_speed 4 falcon bird 389.0 3 parrot bird 24.0 2 lion mammal 80.5 1 monkey mammal NaN 0 rabbit mammal 15.0 >>> gb = df["name"].groupby([1, 1, 2, 2, 2]) Take elements at rows 0 and 1 in each group. >>> gb.take([0, 1]) 1 4 falcon 3 parrot 2 2 lion 1 monkey Name: name, dtype: object We may take elements using negative integers for positive indices, starting from the end of the object, just like with Python lists. >>> gb.take([-1, -2]) 1 3 parrot 4 falcon 2 0 rabbit 1 monkey Name: name, dtype: object
python
pandas/core/groupby/generic.py
1,287
[ "self", "indices" ]
Series
true
1
7.2
pandas-dev/pandas
47,362
numpy
false
getLayer
std::string getLayer() const { std::size_t q=name.rfind('.'); if( q==name.npos ) { return ""; } return name.substr(0,q); }
Returns the layer prefix of the channel name: the substring before the final '.', or an empty string when the name contains no '.'.
cpp
3rdparty/openexr/IlmImf/ImfPartHelper.h
86
[]
true
2
6.56
opencv/opencv
85,374
doxygen
false
indexesOf
public static BitSet indexesOf(final boolean[] array, final boolean valueToFind) { return indexesOf(array, valueToFind, 0); }
Finds the indices of the given value in the array. <p> This method returns an empty BitSet for a {@code null} input array. </p> @param array the array to search for the object, may be {@code null}. @param valueToFind the value to find. @return a BitSet of all the indices of the value within the array, an empty BitSet if not found or {@code null} array input. @since 3.10
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
1,895
[ "array", "valueToFind" ]
BitSet
true
1
6.8
apache/commons-lang
2,896
javadoc
false
rewriteExprFromNumberToTime
std::string rewriteExprFromNumberToTime( const ast_matchers::MatchFinder::MatchResult &Result, DurationScale Scale, const Expr *Node) { const Expr &RootNode = *Node->IgnoreParenImpCasts(); // First check to see if we can undo a complementary function call. if (std::optional<std::string> MaybeRewrite = rewriteInverseTimeCall(Result, Scale, RootNode)) return *MaybeRewrite; if (isLiteralZero(Result, RootNode)) return {"absl::UnixEpoch()"}; return (llvm::Twine(getTimeFactoryForScale(Scale)) + "(" + tooling::fixit::getText(RootNode, *Result.Context) + ")") .str(); }
Rewrites a numeric expression into an equivalent `absl::Time` expression for the given duration `Scale`: complementary inverse time calls are undone where possible, a literal `0` becomes `absl::UnixEpoch()`, and any other value is wrapped in the time factory matching the scale.
cpp
clang-tools-extra/clang-tidy/abseil/DurationRewriter.cpp
240
[ "Result", "Scale", "Node" ]
true
3
6
llvm/llvm-project
36,021
doxygen
false
resolveConstructorArguments
private ConstructorArgumentValues resolveConstructorArguments( BeanDefinitionValueResolver valueResolver, ConstructorArgumentValues constructorArguments) { ConstructorArgumentValues resolvedConstructorArguments = new ConstructorArgumentValues(); for (Map.Entry<Integer, ConstructorArgumentValues.ValueHolder> entry : constructorArguments.getIndexedArgumentValues().entrySet()) { resolvedConstructorArguments.addIndexedArgumentValue(entry.getKey(), resolveArgumentValue(valueResolver, entry.getValue())); } for (ConstructorArgumentValues.ValueHolder valueHolder : constructorArguments.getGenericArgumentValues()) { resolvedConstructorArguments.addGenericArgumentValue(resolveArgumentValue(valueResolver, valueHolder)); } return resolvedConstructorArguments; }
Resolve the given constructor arguments, resolving each indexed and generic value holder with the supplied value resolver. @param valueResolver the value resolver to use @param constructorArguments the constructor arguments to resolve @return the resolved constructor arguments
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/BeanInstanceSupplier.java
305
[ "valueResolver", "constructorArguments" ]
ConstructorArgumentValues
true
1
6.08
spring-projects/spring-framework
59,386
javadoc
false
resolveProfileSpecific
default List<R> resolveProfileSpecific(ConfigDataLocationResolverContext context, ConfigDataLocation location, Profiles profiles) throws ConfigDataLocationNotFoundException { return Collections.emptyList(); }
Resolve a {@link ConfigDataLocation} into one or more {@link ConfigDataResource} instances based on available profiles. This method is called once profiles have been deduced from the contributed values. By default this method returns an empty list. @param context the location resolver context @param location the location that should be resolved @param profiles profile information @return a list of resolved locations in ascending priority order. @throws ConfigDataLocationNotFoundException on a non-optional location that cannot be found
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataLocationResolver.java
90
[ "context", "location", "profiles" ]
true
1
6.16
spring-projects/spring-boot
79,428
javadoc
false
combine
@CanIgnoreReturnValue Builder<E> combine(Builder<E> other) { addAll(other.contents, other.size); return this; }
Adds the contents of another builder to this builder. @param other the {@code Builder} whose contents should be added @return this {@code Builder} object
java
android/guava/src/com/google/common/collect/ImmutableList.java
835
[ "other" ]
true
1
6.56
google/guava
51,352
javadoc
false
commitSyncExceptionForError
private Throwable commitSyncExceptionForError(Throwable error) { if (error instanceof StaleMemberEpochException) { return new CommitFailedException("OffsetCommit failed with stale member epoch. " + Errors.STALE_MEMBER_EPOCH.message()); } return error; }
Maps a {@link StaleMemberEpochException} received on OffsetCommit to the {@link CommitFailedException} expected by the consumer API; any other error is returned unchanged. @param error the error returned by the commit request @return the throwable to surface to the caller
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java
488
[ "error" ]
Throwable
true
2
7.76
apache/kafka
31,560
javadoc
false
_select
def _select(readers=None, writers=None, err=None, timeout=0, poll=_select_imp): """Simple wrapper to :class:`~select.select`, using :`~select.poll`. Arguments: readers (Set[Fd]): Set of reader fds to test if readable. writers (Set[Fd]): Set of writer fds to test if writable. err (Set[Fd]): Set of fds to test for error condition. All fd sets passed must be mutable as this function will remove non-working fds from them, this also means the caller must make sure there are still fds in the sets before calling us again. Returns: Tuple[Set, Set, Set]: of ``(readable, writable, again)``, where ``readable`` is a set of fds that have data available for read, ``writable`` is a set of fds that's ready to be written to and ``again`` is a flag that if set means the caller must throw away the result and call us again. """ readers = set() if readers is None else readers writers = set() if writers is None else writers err = set() if err is None else err try: return poll(readers, writers, err, timeout) except OSError as exc: _errno = exc.errno if _errno == errno.EINTR: return set(), set(), 1 elif _errno in SELECT_BAD_FD: for fd in readers | writers | err: try: select.select([fd], [], [], 0) except OSError as exc: _errno = exc.errno if _errno not in SELECT_BAD_FD: raise readers.discard(fd) writers.discard(fd) err.discard(fd) return set(), set(), 1 else: raise
Simple wrapper to :class:`~select.select`, using :`~select.poll`. Arguments: readers (Set[Fd]): Set of reader fds to test if readable. writers (Set[Fd]): Set of writer fds to test if writable. err (Set[Fd]): Set of fds to test for error condition. All fd sets passed must be mutable as this function will remove non-working fds from them, this also means the caller must make sure there are still fds in the sets before calling us again. Returns: Tuple[Set, Set, Set]: of ``(readable, writable, again)``, where ``readable`` is a set of fds that have data available for read, ``writable`` is a set of fds that's ready to be written to and ``again`` is a flag that if set means the caller must throw away the result and call us again.
python
celery/concurrency/asynpool.py
150
[ "readers", "writers", "err", "timeout", "poll" ]
false
9
7.28
celery/celery
27,741
google
false
suspend
public void suspend() { if (runningState != State.RUNNING) { throw new IllegalStateException("Stopwatch must be running to suspend."); } stopTimeNanos = System.nanoTime(); stopInstant = Instant.now(); runningState = State.SUSPENDED; }
Suspends this StopWatch for later resumption. <p> This method suspends the watch until it is resumed. The watch will not include time between the suspend and resume calls in the total time. </p> @throws IllegalStateException if this StopWatch is not currently running.
java
src/main/java/org/apache/commons/lang3/time/StopWatch.java
779
[]
void
true
2
6.88
apache/commons-lang
2,896
javadoc
false
visitor
function visitor(node: Node): VisitResult<Node | undefined> { return shouldVisitNode(node) ? visitorWorker(node, /*expressionResultIsUnused*/ false) : node; }
Visits a node, transforming it via {@link visitorWorker} when the node requires ES2015 transformation and returning it unchanged otherwise. @param node The node to visit.
typescript
src/compiler/transformers/es2015.ts
601
[ "node" ]
true
2
6.16
microsoft/TypeScript
107,154
jsdoc
false
dump
def dump(obj, fp): '''Serialize an object representing the ARFF document to a given file-like object. :param obj: a dictionary. :param fp: a file-like object. ''' encoder = ArffEncoder() generator = encoder.iter_encode(obj) last_row = next(generator) for row in generator: fp.write(last_row + '\n') last_row = row fp.write(last_row) return fp
Serialize an object representing the ARFF document to a given file-like object. :param obj: a dictionary. :param fp: a file-like object.
python
sklearn/externals/_arff.py
1,081
[ "obj", "fp" ]
false
2
6.24
scikit-learn/scikit-learn
64,340
sphinx
false
awaitWakeup
void awaitWakeup(Timer timer) { try { lock.lock(); while (!wokenup.compareAndSet(true, false)) { // Update the timer before we head into the loop in case it took a while to get the lock. timer.update(); if (timer.isExpired()) { // If the thread was interrupted before we start waiting, it still counts as // interrupted from the point of view of the KafkaConsumer.poll(Duration) contract. // We only need to check this when we are not going to wait because waiting // already checks whether the thread is interrupted. if (Thread.interrupted()) throw new InterruptException("Interrupted waiting for results from fetching records"); break; } if (!blockingCondition.await(timer.remainingMs(), TimeUnit.MILLISECONDS)) { break; } } } catch (InterruptedException e) { throw new InterruptException("Interrupted waiting for results from fetching records", e); } finally { lock.unlock(); timer.update(); } }
Allows the caller to await a response from the broker for requested data. The method will block, returning only under one of the following conditions: <ol> <li>The buffer was already woken</li> <li>The buffer was woken during the wait</li> <li>The remaining time on the {@link Timer timer} elapsed</li> <li>The thread was interrupted</li> </ol> @param timer Timer that provides time to wait
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchBuffer.java
165
[ "timer" ]
void
true
6
6.24
apache/kafka
31,560
javadoc
false
mapKeys
function mapKeys(object, iteratee) { var result = {}; iteratee = getIteratee(iteratee, 3); baseForOwn(object, function(value, key, object) { baseAssignValue(result, iteratee(value, key, object), value); }); return result; }
The opposite of `_.mapValues`; this method creates an object with the same values as `object` and keys generated by running each own enumerable string keyed property of `object` thru `iteratee`. The iteratee is invoked with three arguments: (value, key, object). @static @memberOf _ @since 3.8.0 @category Object @param {Object} object The object to iterate over. @param {Function} [iteratee=_.identity] The function invoked per iteration. @returns {Object} Returns the new mapped object. @see _.mapValues @example _.mapKeys({ 'a': 1, 'b': 2 }, function(value, key) { return key + value; }); // => { 'a1': 1, 'b2': 2 }
javascript
lodash.js
13,465
[ "object", "iteratee" ]
false
1
6.32
lodash/lodash
61,490
jsdoc
false
readCandidateConfigurations
private static List<String> readCandidateConfigurations(URL url) { try (BufferedReader reader = new BufferedReader( new InputStreamReader(new UrlResource(url).getInputStream(), StandardCharsets.UTF_8))) { List<String> candidates = new ArrayList<>(); String line; while ((line = reader.readLine()) != null) { line = stripComment(line); line = line.trim(); if (line.isEmpty()) { continue; } candidates.add(line); } return candidates; } catch (IOException ex) { throw new IllegalArgumentException("Unable to load configurations from location [" + url + "]", ex); } }
Reads the candidate class names from the given {@code .imports} resource. Every line contains the fully qualified name of a candidate class. Comments are supported using the # character. @param url the location of the resource to read @return the list of candidate class names @throws IllegalArgumentException if the resource cannot be loaded
java
core/spring-boot/src/main/java/org/springframework/boot/context/annotation/ImportCandidates.java
110
[ "url" ]
true
4
7.6
spring-projects/spring-boot
79,428
javadoc
false
shutdown
public void shutdown() throws IOException { if (currentUsages.updateAndGet(u -> -1 - u) == -1) { doShutdown(); } }
Shuts down this loader by flipping the usage count negative, signalling that the database is closing; the underlying reader is closed once no lookups remain in flight. @throws IOException if closing the underlying database reader fails
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/DatabaseReaderLazyLoader.java
149
[]
void
true
2
8.08
elastic/elasticsearch
75,680
javadoc
false
checkAborted
function checkAborted(signal, callback) { if (signal?.aborted) { callback(new AbortError(undefined, { cause: signal.reason })); return true; } return false; }
Invokes the callback with an {@link AbortError} if the given signal has already been aborted. @param {AbortSignal|undefined} signal @param {Function} callback @returns {boolean} Whether the operation was aborted
javascript
lib/fs.js
335
[ "signal", "callback" ]
false
2
6.24
nodejs/node
114,839
jsdoc
false
getProperty
@Override public @Nullable Object getProperty(String name) { Object value = super.getProperty(name); if (value instanceof OriginTrackedValue originTrackedValue) { return originTrackedValue.getValue(); } return value; }
Return the property value for the given name, unwrapping any {@link OriginTrackedValue} to expose the underlying value. @param name the name of the property @return the property value, or {@code null} if not found
java
core/spring-boot/src/main/java/org/springframework/boot/env/OriginTrackedMapPropertySource.java
65
[ "name" ]
Object
true
2
6.4
spring-projects/spring-boot
79,428
javadoc
false
postProcessBeforeInitialization
@Override public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { if (bean instanceof LoadTimeWeaverAware loadTimeWeaverAware) { LoadTimeWeaver ltw = this.loadTimeWeaver; if (ltw == null) { Assert.state(this.beanFactory != null, "BeanFactory required if no LoadTimeWeaver explicitly specified"); ltw = this.beanFactory.getBean( ConfigurableApplicationContext.LOAD_TIME_WEAVER_BEAN_NAME, LoadTimeWeaver.class); } loadTimeWeaverAware.setLoadTimeWeaver(ltw); } return bean; }
Injects the {@code LoadTimeWeaver} into beans implementing {@link LoadTimeWeaverAware}, falling back to the bean named {@link ConfigurableApplicationContext#LOAD_TIME_WEAVER_BEAN_NAME "loadTimeWeaver"} when no weaver was specified explicitly. @param bean the new bean instance @param beanName the name of the bean @return the bean instance, unchanged
java
spring-context/src/main/java/org/springframework/context/weaving/LoadTimeWeaverAwareProcessor.java
92
[ "bean", "beanName" ]
Object
true
3
6.08
spring-projects/spring-framework
59,386
javadoc
false
add
private void add(ExponentialHistogram toAdd, boolean allowUpscaling) { ExponentialHistogram a = result == null ? ExponentialHistogram.empty() : result; ExponentialHistogram b = toAdd; CopyableBucketIterator posBucketsA = a.positiveBuckets().iterator(); CopyableBucketIterator negBucketsA = a.negativeBuckets().iterator(); CopyableBucketIterator posBucketsB = b.positiveBuckets().iterator(); CopyableBucketIterator negBucketsB = b.negativeBuckets().iterator(); ZeroBucket zeroBucket = a.zeroBucket().merge(b.zeroBucket()); zeroBucket = zeroBucket.collapseOverlappingBucketsForAll(posBucketsA, negBucketsA, posBucketsB, negBucketsB); if (buffer == null) { buffer = FixedCapacityExponentialHistogram.create(bucketLimit, circuitBreaker); } buffer.setZeroBucket(zeroBucket); buffer.setSum(a.sum() + b.sum()); buffer.setMin(nanAwareAggregate(a.min(), b.min(), Math::min)); buffer.setMax(nanAwareAggregate(a.max(), b.max(), Math::max)); // We attempt to bring everything to the scale of A. // This might involve increasing the scale for B, which would increase its indices. // We need to ensure that we do not exceed MAX_INDEX / MIN_INDEX in this case. int targetScale = Math.min(maxScale, a.scale()); if (allowUpscaling == false) { targetScale = Math.min(targetScale, b.scale()); } if (targetScale > b.scale()) { if (negBucketsB.hasNext()) { long smallestIndex = negBucketsB.peekIndex(); OptionalLong maximumIndex = b.negativeBuckets().maxBucketIndex(); assert maximumIndex.isPresent() : "We checked that the negative bucket range is not empty, therefore the maximum index should be present"; int maxScaleIncrease = Math.min(getMaximumScaleIncrease(smallestIndex), getMaximumScaleIncrease(maximumIndex.getAsLong())); targetScale = Math.min(targetScale, b.scale() + maxScaleIncrease); } if (posBucketsB.hasNext()) { long smallestIndex = posBucketsB.peekIndex(); OptionalLong maximumIndex = b.positiveBuckets().maxBucketIndex(); assert maximumIndex.isPresent() : "We checked that the positive bucket range is not empty, therefore the maximum index should be present"; int maxScaleIncrease = Math.min(getMaximumScaleIncrease(smallestIndex), getMaximumScaleIncrease(maximumIndex.getAsLong())); targetScale = Math.min(targetScale, b.scale() + maxScaleIncrease); } } // Now we are sure that everything fits numerically into targetScale. // However, we might exceed our limit for the total number of buckets. // Therefore, we try the merge optimistically. If we fail, we reduce the target scale to make everything fit. MergingBucketIterator positiveMerged = new MergingBucketIterator(posBucketsA.copy(), posBucketsB.copy(), targetScale); MergingBucketIterator negativeMerged = new MergingBucketIterator(negBucketsA.copy(), negBucketsB.copy(), targetScale); buffer.resetBuckets(targetScale); downscaleStats.reset(); int overflowCount = putBuckets(buffer, negativeMerged, false, downscaleStats); overflowCount += putBuckets(buffer, positiveMerged, true, downscaleStats); if (overflowCount > 0) { // UDD-sketch approach: decrease the scale and retry. int reduction = downscaleStats.getRequiredScaleReductionToReduceBucketCountBy(overflowCount); targetScale -= reduction; buffer.resetBuckets(targetScale); positiveMerged = new MergingBucketIterator(posBucketsA, posBucketsB, targetScale); negativeMerged = new MergingBucketIterator(negBucketsA, negBucketsB, targetScale); overflowCount = putBuckets(buffer, negativeMerged, false, null); overflowCount += putBuckets(buffer, positiveMerged, true, null); assert overflowCount == 0 : "Should never happen, the histogram should have had enough space"; } FixedCapacityExponentialHistogram temp = result; result = buffer; buffer = temp; }
Merges the given histogram into the current result. The histogram might be upscaled if needed. @param toAdd the histogram to merge
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogramMerger.java
196
[ "toAdd", "allowUpscaling" ]
void
true
8
6.8
elastic/elasticsearch
75,680
javadoc
false
toString
@Override public String toString() { return (!isNested()) ? path().toString() : path() + "[" + nestedEntryName() + "]"; }
Return a string representation of this source, appending the nested entry name when this is the source of a nested zip. @return the string representation
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/ZipContent.java
435
[]
String
true
2
7.84
spring-projects/spring-boot
79,428
javadoc
false
indexOfDifference
public static int indexOfDifference(final CharSequence cs1, final CharSequence cs2) { if (cs1 == cs2) { return INDEX_NOT_FOUND; } if (cs1 == null || cs2 == null) { return 0; } int i; for (i = 0; i < cs1.length() && i < cs2.length(); ++i) { if (cs1.charAt(i) != cs2.charAt(i)) { break; } } if (i < cs2.length() || i < cs1.length()) { return i; } return INDEX_NOT_FOUND; }
Compares two CharSequences, and returns the index at which the CharSequences begin to differ. <p> For example, {@code indexOfDifference("i am a machine", "i am a robot") -> 7} </p> <pre> StringUtils.indexOfDifference(null, null) = -1 StringUtils.indexOfDifference("", "") = -1 StringUtils.indexOfDifference("", "abc") = 0 StringUtils.indexOfDifference("abc", "") = 0 StringUtils.indexOfDifference("abc", "abc") = -1 StringUtils.indexOfDifference("ab", "abxyz") = 2 StringUtils.indexOfDifference("abcde", "abxyz") = 2 StringUtils.indexOfDifference("abcde", "xyz") = 0 </pre> @param cs1 the first CharSequence, may be null. @param cs2 the second CharSequence, may be null. @return the index where cs1 and cs2 begin to differ; -1 if they are equal. @since 2.0 @since 3.0 Changed signature from indexOfDifference(String, String) to indexOfDifference(CharSequence, CharSequence)
java
src/main/java/org/apache/commons/lang3/StringUtils.java
3,022
[ "cs1", "cs2" ]
true
9
7.44
apache/commons-lang
2,896
javadoc
false
invokeExactMethod
public static Object invokeExactMethod(final Object object, final String methodName) throws NoSuchMethodException, IllegalAccessException, InvocationTargetException { return invokeExactMethod(object, methodName, ArrayUtils.EMPTY_OBJECT_ARRAY, null); }
Invokes a method whose parameter types match exactly the object type. <p> This uses reflection to invoke the method obtained from a call to {@link #getAccessibleMethod(Class, String, Class[])}. </p> @param object invoke method on this object. @param methodName get method with this name. @return The value returned by the invoked method. @throws NoSuchMethodException Thrown if there is no such accessible method. @throws IllegalAccessException Thrown if this found {@code Method} is enforcing Java language access control and the underlying method is inaccessible. @throws IllegalArgumentException Thrown if: <ul> <li>the found {@code Method} is an instance method and the specified {@code object} argument is not an instance of the class or interface declaring the underlying method (or of a subclass or interface implementor);</li> <li>the number of actual and formal parameters differ;</li> </ul> @throws InvocationTargetException Thrown if the underlying method throws an exception. @throws NullPointerException Thrown if the specified {@code object} is null. @throws ExceptionInInitializerError Thrown if the initialization provoked by this method fails. @since 3.4
java
src/main/java/org/apache/commons/lang3/reflect/MethodUtils.java
559
[ "object", "methodName" ]
Object
true
1
6.16
apache/commons-lang
2,896
javadoc
false
newNestedPropertyAccessor
@Override protected BeanWrapperImpl newNestedPropertyAccessor(Object object, String nestedPath) { return new BeanWrapperImpl(object, nestedPath, this); }
Create a new nested {@link BeanWrapperImpl} instance for the given object and nested path. @param object the object wrapped by this accessor @param nestedPath the nested path of the object @return the nested BeanWrapper instance
java
spring-beans/src/main/java/org/springframework/beans/BeanWrapperImpl.java
197
[ "object", "nestedPath" ]
BeanWrapperImpl
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
set
private static Date set(final Date date, final int calendarField, final int amount) { validateDateNotNull(date); // getInstance() returns a new object, so this method is thread safe. final Calendar c = Calendar.getInstance(); c.setLenient(false); c.setTime(date); c.set(calendarField, amount); return c.getTime(); }
Sets the specified field to a date returning a new object. This does not use a lenient calendar. The original {@link Date} is unchanged. @param date the date, not null. @param calendarField the {@link Calendar} field to set the amount to. @param amount the amount to set. @return a new {@link Date} set with the specified value. @throws NullPointerException if the date is null. @since 2.4
java
src/main/java/org/apache/commons/lang3/time/DateUtils.java
1,484
[ "date", "calendarField", "amount" ]
Date
true
1
7.04
apache/commons-lang
2,896
javadoc
false
_make_validation_split
def _make_validation_split(self, y, sample_mask): """Split the dataset between training set and validation set. Parameters ---------- y : ndarray of shape (n_samples, ) Target values. sample_mask : ndarray of shape (n_samples, ) A boolean array indicating whether each sample should be included for validation set. Returns ------- validation_mask : ndarray of shape (n_samples, ) Equal to True on the validation set, False on the training set. """ n_samples = y.shape[0] validation_mask = np.zeros(n_samples, dtype=np.bool_) if not self.early_stopping: # use the full set for training, with an empty validation set return validation_mask if is_classifier(self): splitter_type = StratifiedShuffleSplit else: splitter_type = ShuffleSplit cv = splitter_type( test_size=self.validation_fraction, random_state=self.random_state ) idx_train, idx_val = next(cv.split(np.zeros(shape=(y.shape[0], 1)), y)) if not np.any(sample_mask[idx_val]): raise ValueError( "The sample weights for validation set are all zero, consider using a" " different random state." ) if idx_train.shape[0] == 0 or idx_val.shape[0] == 0: raise ValueError( "Splitting %d samples into a train set and a validation set " "with validation_fraction=%r led to an empty set (%d and %d " "samples). Please either change validation_fraction, increase " "number of samples, or disable early_stopping." % ( n_samples, self.validation_fraction, idx_train.shape[0], idx_val.shape[0], ) ) validation_mask[idx_val] = True return validation_mask
Split the dataset between training set and validation set. Parameters ---------- y : ndarray of shape (n_samples, ) Target values. sample_mask : ndarray of shape (n_samples, ) A boolean array indicating whether each sample should be included for validation set. Returns ------- validation_mask : ndarray of shape (n_samples, ) Equal to True on the validation set, False on the training set.
python
sklearn/linear_model/_stochastic_gradient.py
280
[ "self", "y", "sample_mask" ]
false
7
6.08
scikit-learn/scikit-learn
64,340
numpy
false
repmat
def repmat(a, m, n): """ Repeat a 0-D to 2-D array or matrix MxN times. Parameters ---------- a : array_like The array or matrix to be repeated. m, n : int The number of times `a` is repeated along the first and second axes. Returns ------- out : ndarray The result of repeating `a`. Examples -------- >>> import numpy.matlib >>> a0 = np.array(1) >>> np.matlib.repmat(a0, 2, 3) array([[1, 1, 1], [1, 1, 1]]) >>> a1 = np.arange(4) >>> np.matlib.repmat(a1, 2, 2) array([[0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 3, 0, 1, 2, 3]]) >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3)) >>> np.matlib.repmat(a2, 2, 3) matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5], [0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5]]) """ a = asanyarray(a) ndim = a.ndim if ndim == 0: origrows, origcols = (1, 1) elif ndim == 1: origrows, origcols = (1, a.shape[0]) else: origrows, origcols = a.shape rows = origrows * m cols = origcols * n c = a.reshape(1, a.size).repeat(m, 0).reshape(rows, origcols).repeat(n, 0) return c.reshape(rows, cols)
Repeat a 0-D to 2-D array or matrix MxN times. Parameters ---------- a : array_like The array or matrix to be repeated. m, n : int The number of times `a` is repeated along the first and second axes. Returns ------- out : ndarray The result of repeating `a`. Examples -------- >>> import numpy.matlib >>> a0 = np.array(1) >>> np.matlib.repmat(a0, 2, 3) array([[1, 1, 1], [1, 1, 1]]) >>> a1 = np.arange(4) >>> np.matlib.repmat(a1, 2, 2) array([[0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 3, 0, 1, 2, 3]]) >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3)) >>> np.matlib.repmat(a2, 2, 3) matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5], [0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5]])
python
numpy/matlib.py
332
[ "a", "m", "n" ]
false
4
7.52
numpy/numpy
31,054
numpy
false
of
public static <T> Range<T> of(final T fromInclusive, final T toInclusive, final Comparator<T> comparator) { return new Range<>(fromInclusive, toInclusive, comparator); }
Creates a range with the specified minimum and maximum values (both inclusive). <p>The range uses the specified {@link Comparator} to determine where values lie in the range.</p> <p>The arguments may be passed in the order (min,max) or (max,min). The getMinimum and getMaximum methods will return the correct values.</p> @param <T> the type of the elements in this range. @param fromInclusive the first value that defines the edge of the range, inclusive. @param toInclusive the second value that defines the edge of the range, inclusive. @param comparator the comparator to be used, null for natural ordering. @return the range object, not null. @throws NullPointerException when fromInclusive is null. @throws NullPointerException when toInclusive is null. @throws ClassCastException if using natural ordering and the elements are not {@link Comparable}. @since 3.13.0
java
src/main/java/org/apache/commons/lang3/Range.java
184
[ "fromInclusive", "toInclusive", "comparator" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
format
def format(self, name: str, roffset=True) -> str: """ Codegen a call to tl.make_tensor_descriptor() Args: name: variable name for pointer roffset: unused, but kept for compatibility with BlockPtrOptions.format() Returns: "tl.make_tensor_descriptor(...)" """ f = V.kernel.index_to_str args = [ ( f"{name} + ({f(self.constant_offset)})" if self.constant_offset != 0 else name ), f"shape={f(self.shape)}", f"strides={f(self.strides)}", f"block_shape={f(self.block_shape)}", ] return f"tl.make_tensor_descriptor({', '.join(args)})"
Codegen a call to tl.make_tensor_descriptor() Args: name: variable name for pointer roffset: unused, but kept for compatibility with BlockPtrOptions.format() Returns: "tl.make_tensor_descriptor(...)"
python
torch/_inductor/codegen/triton.py
650
[ "self", "name", "roffset" ]
str
true
2
7.12
pytorch/pytorch
96,034
google
false
identity
static <E extends Throwable> FailableIntUnaryOperator<E> identity() { return t -> t; }
Returns a unary operator that always returns its input argument. @param <E> The kind of thrown exception or error. @return a unary operator that always returns its input argument
java
src/main/java/org/apache/commons/lang3/function/FailableIntUnaryOperator.java
41
[]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
_should_be_skipped_or_marked
def _should_be_skipped_or_marked( estimator, check, expected_failed_checks: dict[str, str] | None = None ) -> tuple[bool, str]: """Check whether a check should be skipped or marked as xfail. Parameters ---------- estimator : estimator object Estimator instance for which to generate checks. check : partial or callable Check to be marked. expected_failed_checks : dict[str, str], default=None Dictionary of the form {check_name: reason} for checks that are expected to fail. Returns ------- should_be_marked : bool Whether the check should be marked as xfail or skipped. reason : str Reason for skipping the check. """ expected_failed_checks = expected_failed_checks or {} check_name = _check_name(check) if check_name in expected_failed_checks: return True, expected_failed_checks[check_name] return False, "Check is not expected to fail"
Check whether a check should be skipped or marked as xfail. Parameters ---------- estimator : estimator object Estimator instance for which to generate checks. check : partial or callable Check to be marked. expected_failed_checks : dict[str, str], default=None Dictionary of the form {check_name: reason} for checks that are expected to fail. Returns ------- should_be_marked : bool Whether the check should be marked as xfail or skipped. reason : str Reason for skipping the check.
python
sklearn/utils/estimator_checks.py
481
[ "estimator", "check", "expected_failed_checks" ]
tuple[bool, str]
true
3
6.88
scikit-learn/scikit-learn
64,340
numpy
false
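The lookup logic of `_should_be_skipped_or_marked` above can be exercised standalone with a minimal stand-in (a sketch: `check_fit` is a hypothetical check, and `functools.partial` mimics how scikit-learn wraps checks):

```python
from functools import partial

def _check_name(check):
    # partials carry the wrapped function; plain callables expose __name__ directly
    return check.func.__name__ if isinstance(check, partial) else check.__name__

def should_be_skipped_or_marked(check, expected_failed_checks=None):
    """Return (should_be_marked, reason) based on an expected-failures dict."""
    expected_failed_checks = expected_failed_checks or {}
    name = _check_name(check)
    if name in expected_failed_checks:
        return True, expected_failed_checks[name]
    return False, "Check is not expected to fail"

def check_fit(estimator):
    pass

print(should_be_skipped_or_marked(partial(check_fit), {"check_fit": "known bug"}))
# → (True, 'known bug')
```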
applyTo
public static void applyTo(ConfigurableEnvironment environment, @Nullable ResourceLoader resourceLoader, @Nullable ConfigurableBootstrapContext bootstrapContext, Collection<String> additionalProfiles) { DeferredLogFactory logFactory = Supplier::get; bootstrapContext = (bootstrapContext != null) ? bootstrapContext : new DefaultBootstrapContext(); ConfigDataEnvironmentPostProcessor postProcessor = new ConfigDataEnvironmentPostProcessor(logFactory, bootstrapContext); postProcessor.postProcessEnvironment(environment, resourceLoader, additionalProfiles); }
Apply {@link ConfigData} post-processing to an existing {@link Environment}. This method can be useful when working with an {@link Environment} that has been created directly and not necessarily as part of a {@link SpringApplication}. @param environment the environment to apply {@link ConfigData} to @param resourceLoader the resource loader to use @param bootstrapContext the bootstrap context to use or {@code null} to use a throw-away context @param additionalProfiles any additional profiles that should be applied
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataEnvironmentPostProcessor.java
141
[ "environment", "resourceLoader", "bootstrapContext", "additionalProfiles" ]
void
true
2
6.08
spring-projects/spring-boot
79,428
javadoc
false
getAdvice
@Override public Advice getAdvice() { Advice advice = this.advice; if (advice != null) { return advice; } Assert.state(this.adviceBeanName != null, "'adviceBeanName' must be specified"); Assert.state(this.beanFactory != null, "BeanFactory must be set to resolve 'adviceBeanName'"); if (this.beanFactory.isSingleton(this.adviceBeanName)) { // Rely on singleton semantics provided by the factory. advice = this.beanFactory.getBean(this.adviceBeanName, Advice.class); this.advice = advice; return advice; } else { // No singleton guarantees from the factory -> let's lock locally. synchronized (this.adviceMonitor) { advice = this.advice; if (advice == null) { advice = this.beanFactory.getBean(this.adviceBeanName, Advice.class); this.advice = advice; } return advice; } } }
Specify a particular instance of the target advice directly, avoiding lazy resolution in {@link #getAdvice()}. @since 3.1
java
spring-aop/src/main/java/org/springframework/aop/support/AbstractBeanFactoryPointcutAdvisor.java
89
[]
Advice
true
4
6.24
spring-projects/spring-framework
59,386
javadoc
false
argmin
def argmin(self, axis=None, fill_value=None, out=None, *, keepdims=np._NoValue): """ Return array of indices to the minimum values along the given axis. Parameters ---------- axis : {None, integer} If None, the index is into the flattened array, otherwise along the specified axis fill_value : scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. out : {None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns ------- ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. Examples -------- >>> import numpy as np >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1]) """ if fill_value is None: fill_value = minimum_fill_value(self) d = self.filled(fill_value).view(ndarray) keepdims = False if keepdims is np._NoValue else bool(keepdims) return d.argmin(axis, out=out, keepdims=keepdims)
Return array of indices to the minimum values along the given axis. Parameters ---------- axis : {None, integer} If None, the index is into the flattened array, otherwise along the specified axis fill_value : scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. out : {None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns ------- ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. Examples -------- >>> import numpy as np >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1])
python
numpy/ma/core.py
5,685
[ "self", "axis", "fill_value", "out", "keepdims" ]
false
3
7.52
numpy/numpy
31,054
numpy
false
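The fill-then-argmin strategy used by the masked `argmin` above can be sketched in pure Python, with `None` standing in for masked entries (an illustrative analogue, not the NumPy implementation):

```python
def masked_argmin(values, fill_value):
    """Replace masked (None) entries with fill_value, then take argmin."""
    filled = [fill_value if v is None else v for v in values]
    # min over indices keyed by value; ties resolve to the first index,
    # matching argmin semantics.
    return min(range(len(filled)), key=filled.__getitem__)

x = [None, None, 2, 3]
print(masked_argmin(x, fill_value=9))   # → 2
print(masked_argmin(x, fill_value=-1))  # → 0
```

As in the docstring example, the choice of `fill_value` decides whether masked positions can win.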
firstNonEmpty
@SafeVarargs public static <T extends CharSequence> T firstNonEmpty(final T... values) { if (values != null) { for (final T val : values) { if (isNotEmpty(val)) { return val; } } } return null; }
Returns the first value in the array which is not empty. <p> If all values are empty or the array is {@code null} or empty then {@code null} is returned. </p> <pre> StringUtils.firstNonEmpty(null, null, null) = null StringUtils.firstNonEmpty(null, null, "") = null StringUtils.firstNonEmpty(null, "", " ") = " " StringUtils.firstNonEmpty("abc") = "abc" StringUtils.firstNonEmpty(null, "xyz") = "xyz" StringUtils.firstNonEmpty("", "xyz") = "xyz" StringUtils.firstNonEmpty(null, "xyz", "abc") = "xyz" StringUtils.firstNonEmpty() = null </pre> @param <T> the specific kind of CharSequence. @param values the values to test, may be {@code null} or empty. @return the first value from {@code values} which is not empty, or {@code null} if there are no non-empty values. @since 3.8
java
src/main/java/org/apache/commons/lang3/StringUtils.java
1,930
[]
T
true
3
7.76
apache/commons-lang
2,896
javadoc
false
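A Python analogue of `StringUtils.firstNonEmpty` (a sketch, not from Commons Lang; note that a blank string like `" "` counts as non-empty, exactly as in the Javadoc examples):

```python
def first_non_empty(*values):
    """Return the first value that is neither None nor empty, else None."""
    for val in values:
        if val:  # non-None and non-empty
            return val
    return None

print(first_non_empty(None, "", " "))       # → " "
print(first_non_empty(None, "xyz", "abc"))  # → "xyz"
```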
numResidentSends
int numResidentSends() { int count = 0; if (current != null) count += 1; count += sendQueue.size(); return count; }
Construct a MultiRecordsSend from a queue of Send objects. The queue will be consumed as the MultiRecordsSend progresses (on completion, it will be empty).
java
clients/src/main/java/org/apache/kafka/common/record/MultiRecordsSend.java
73
[]
true
2
6.72
apache/kafka
31,560
javadoc
false
bind
@Nullable Object bind(ConfigurationPropertyName name, Bindable<?> target, @Nullable ConfigurationPropertySource source);
Bind the given name to a target bindable using optionally limited to a single source. @param name the name to bind @param target the target bindable @param source the source of the elements or {@code null} to use all sources @return a bound object or {@code null}
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/AggregateElementBinder.java
52
[ "name", "target", "source" ]
Object
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
get_autotune_deletion_call
def get_autotune_deletion_call() -> str: """After all the autotune kernel calls have been written (i.e. self.kernel_autotune_example_args is complete), returns a deletion call for all autotune example tensors that are unnecessary after kernel_name is called.""" tensors_to_delete = [ tensor for tensor, kn in self.kernel_autotune_example_args.values() if kn == kernel_name ] if tensors_to_delete: return f"del {', '.join(tensors_to_delete)}\n" return ""
After all the autotune kernel calls have been written (i.e. self.kernel_autotune_example_args is complete), returns a deletion call for all autotune example tensors that are unnecessary after kernel_name is called.
python
torch/_inductor/codegen/wrapper.py
2,956
[]
str
true
2
6.56
pytorch/pytorch
96,034
unknown
false
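The filter-and-`del` pattern in `get_autotune_deletion_call` above can be shown standalone (hypothetical names: `example_args` maps a buffer name to a `(tensor_name, kernel_name)` pair, as the docstring implies):

```python
def autotune_deletion_call(example_args, kernel_name):
    """Build a 'del a, b' statement for example tensors tied to kernel_name."""
    tensors_to_delete = [
        tensor for tensor, kn in example_args.values() if kn == kernel_name
    ]
    if tensors_to_delete:
        return f"del {', '.join(tensors_to_delete)}\n"
    return ""

args = {
    "buf0": ("t0", "triton_add"),
    "buf1": ("t1", "triton_mul"),
    "buf2": ("t2", "triton_add"),
}
print(autotune_deletion_call(args, "triton_add"))  # → "del t0, t2"
```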
combine
@CanIgnoreReturnValue @Override Builder<E> combine(ImmutableSet.Builder<E> builder) { super.combine(builder); return this; }
Adds each element of {@code elements} to the {@code ImmutableSortedSet}, ignoring duplicate elements (only the first duplicate element is added). @param elements the elements to add to the {@code ImmutableSortedSet} @return this {@code Builder} object @throws NullPointerException if {@code elements} contains a null element
java
android/guava/src/com/google/common/collect/ImmutableSortedSet.java
519
[ "builder" ]
true
1
6.08
google/guava
51,352
javadoc
false
shouldAssertFunction
function shouldAssertFunction<K extends AssertionKeys>(level: AssertionLevel, name: K): boolean { if (!shouldAssert(level)) { assertionCache[name] = { level, assertion: Debug[name] }; (Debug as any)[name] = noop; return false; } return true; }
Tests whether an assertion function should be executed. If it shouldn't, it is cached and replaced with `ts.noop`. Replaced assertion functions are restored when `Debug.setAssertionLevel` is set to a high enough level. @param level The minimum assertion level required. @param name The name of the current assertion function.
typescript
src/compiler/debug.ts
187
[ "level", "name" ]
true
2
6.72
microsoft/TypeScript
107,154
jsdoc
false
findNonEmptyMapStart
@Nullable private static String findNonEmptyMapStart(XContentParser parser) throws IOException { Token token = parser.currentToken(); if (token == null) { token = parser.nextToken(); } if (token == XContentParser.Token.START_OBJECT) { return parser.nextFieldName(); } return token == Token.FIELD_NAME ? parser.currentName() : null; }
Checks if the next current token in the supplied parser is a map start for a non-empty map. Skips to the next token if the parser does not yet have a current token (i.e. {@link #currentToken()} returns {@code null}) and then checks it. @return the first key in the map if a non-empty map start is found
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/support/AbstractXContentParser.java
368
[ "parser" ]
String
true
4
6.72
elastic/elasticsearch
75,680
javadoc
false
getEffectiveAnnotatedParameter
private static AnnotatedElement getEffectiveAnnotatedParameter(Parameter parameter, int index) { Executable executable = parameter.getDeclaringExecutable(); if (executable instanceof Constructor && ClassUtils.isInnerClass(executable.getDeclaringClass()) && executable.getParameterAnnotations().length == executable.getParameterCount() - 1) { // Bug in javac in JDK <9: annotation array excludes enclosing instance parameter // for inner classes, so access it with the actual parameter index lowered by 1 return (index == 0 ? EMPTY_ANNOTATED_ELEMENT : executable.getParameters()[index - 1]); } return parameter; }
Due to a bug in {@code javac} on JDK versions prior to JDK 9, looking up annotations directly on a {@link Parameter} will fail for inner class constructors. <p>Note: Since Spring 6 may still encounter user code compiled with {@code javac 8}, this workaround is kept in place for the time being. <h4>Bug in javac in JDK &lt; 9</h4> <p>The parameter annotations array in the compiled byte code excludes an entry for the implicit <em>enclosing instance</em> parameter for an inner class constructor. <h4>Workaround</h4> <p>This method provides a workaround for this off-by-one error by allowing the caller to access annotations on the preceding {@link Parameter} object (i.e., {@code index - 1}). If the supplied {@code index} is zero, this method returns an empty {@code AnnotatedElement}. <h4>WARNING</h4> <p>The {@code AnnotatedElement} returned by this method should never be cast and treated as a {@code Parameter} since the metadata (for example, {@link Parameter#getName()}, {@link Parameter#getType()}, etc.) will not match those for the declared parameter at the given index in an inner class constructor. @return the supplied {@code parameter} or the <em>effective</em> {@code Parameter} if the aforementioned bug is in effect
java
spring-beans/src/main/java/org/springframework/beans/factory/annotation/ParameterResolutionDelegate.java
163
[ "parameter", "index" ]
AnnotatedElement
true
5
6.88
spring-projects/spring-framework
59,386
javadoc
false
join
public static String join(String separator, byte... array) { checkNotNull(separator); if (array.length == 0) { return ""; } // For pre-sizing a builder, just get the right order of magnitude StringBuilder builder = new StringBuilder(array.length * (3 + separator.length())); builder.append(toUnsignedInt(array[0])); for (int i = 1; i < array.length; i++) { builder.append(separator).append(toString(array[i])); } return builder.toString(); }
Returns a string containing the supplied {@code byte} values separated by {@code separator}. For example, {@code join(":", (byte) 1, (byte) 2, (byte) 255)} returns the string {@code "1:2:255"}. @param separator the text that should appear between consecutive values in the resulting string (but not at the start or end) @param array an array of {@code byte} values, possibly empty
java
android/guava/src/com/google/common/primitives/UnsignedBytes.java
248
[ "separator" ]
String
true
3
6.72
google/guava
51,352
javadoc
false
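The unsigned-byte join above is needed in Java because `byte` is signed; in Python, iterating over a `bytes` object already yields unsigned values 0–255, so the equivalent is a plain join (sketch only):

```python
def join_unsigned(separator, data):
    """Render each byte of `data` as its unsigned decimal value, separated."""
    return separator.join(str(b) for b in data)

print(join_unsigned(":", bytes([1, 2, 255])))  # → "1:2:255"
```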
shouldProxyTargetClass
public static boolean shouldProxyTargetClass( ConfigurableListableBeanFactory beanFactory, @Nullable String beanName) { if (beanName != null && beanFactory.containsBeanDefinition(beanName)) { BeanDefinition bd = beanFactory.getBeanDefinition(beanName); return Boolean.TRUE.equals(bd.getAttribute(PRESERVE_TARGET_CLASS_ATTRIBUTE)); } return false; }
Determine whether the given bean should be proxied with its target class rather than its interfaces. Checks the {@link #PRESERVE_TARGET_CLASS_ATTRIBUTE "preserveTargetClass" attribute} of the corresponding bean definition. @param beanFactory the containing ConfigurableListableBeanFactory @param beanName the name of the bean @return whether the given bean should be proxied with its target class @see #PRESERVE_TARGET_CLASS_ATTRIBUTE
java
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/AutoProxyUtils.java
141
[ "beanFactory", "beanName" ]
true
3
7.28
spring-projects/spring-framework
59,386
javadoc
false
where
def where(condition, x=_NoValue, y=_NoValue): """ Return a masked array with elements from `x` or `y`, depending on condition. .. note:: When only `condition` is provided, this function is identical to `nonzero`. The rest of this documentation covers only the case where all three arguments are provided. Parameters ---------- condition : array_like, bool Where True, yield `x`, otherwise yield `y`. x, y : array_like, optional Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns ------- out : MaskedArray An masked array with `masked` elements where the condition is masked, elements from `x` where `condition` is True, and elements from `y` elsewhere. See Also -------- numpy.where : Equivalent function in the top-level NumPy module. nonzero : The function that is called when x and y are omitted Examples -------- >>> import numpy as np >>> x = np.ma.array(np.arange(9.).reshape(3, 3), mask=[[0, 1, 0], ... [1, 0, 1], ... [0, 1, 0]]) >>> x masked_array( data=[[0.0, --, 2.0], [--, 4.0, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) >>> np.ma.where(x > 5, x, -3.1416) masked_array( data=[[-3.1416, --, -3.1416], [--, -3.1416, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) """ # handle the single-argument case missing = (x is _NoValue, y is _NoValue).count(True) if missing == 1: raise ValueError("Must provide both 'x' and 'y' or neither.") if missing == 2: return nonzero(condition) # we only care if the condition is true - false or masked pick y cf = filled(condition, False) xd = getdata(x) yd = getdata(y) # we need the full arrays here for correct final dimensions cm = getmaskarray(condition) xm = getmaskarray(x) ym = getmaskarray(y) # deal with the fact that masked.dtype == float64, but we don't actually # want to treat it as that. if x is masked and y is not masked: xd = np.zeros((), dtype=yd.dtype) xm = np.ones((), dtype=ym.dtype) elif y is masked and x is not masked: yd = np.zeros((), dtype=xd.dtype) ym = np.ones((), dtype=xm.dtype) data = np.where(cf, xd, yd) mask = np.where(cf, xm, ym) mask = np.where(cm, np.ones((), dtype=mask.dtype), mask) # collapse the mask, for backwards compatibility mask = _shrink_mask(mask) return masked_array(data, mask=mask)
Return a masked array with elements from `x` or `y`, depending on condition. .. note:: When only `condition` is provided, this function is identical to `nonzero`. The rest of this documentation covers only the case where all three arguments are provided. Parameters ---------- condition : array_like, bool Where True, yield `x`, otherwise yield `y`. x, y : array_like, optional Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns ------- out : MaskedArray An masked array with `masked` elements where the condition is masked, elements from `x` where `condition` is True, and elements from `y` elsewhere. See Also -------- numpy.where : Equivalent function in the top-level NumPy module. nonzero : The function that is called when x and y are omitted Examples -------- >>> import numpy as np >>> x = np.ma.array(np.arange(9.).reshape(3, 3), mask=[[0, 1, 0], ... [1, 0, 1], ... [0, 1, 0]]) >>> x masked_array( data=[[0.0, --, 2.0], [--, 4.0, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) >>> np.ma.where(x > 5, x, -3.1416) masked_array( data=[[-3.1416, --, -3.1416], [--, -3.1416, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20)
python
numpy/ma/core.py
7,913
[ "condition", "x", "y" ]
false
7
7.76
numpy/numpy
31,054
numpy
false
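The three-way selection in `ma.where` above (condition True → `x`, False → `y`, masked → masked) can be sketched element-wise over flat Python lists, with `None` standing in for `masked` (an illustrative analogue, not the NumPy implementation, which also broadcasts and tracks masks on `x` and `y`):

```python
def masked_where(condition, x, y):
    """Pick x[i] where condition[i] is True, y[i] where False,
    and None (standing in for `masked`) where condition[i] is None."""
    out = []
    for c, xv, yv in zip(condition, x, y):
        if c is None:          # masked condition -> masked output
            out.append(None)
        else:
            out.append(xv if c else yv)
    return out

print(masked_where([True, None, False], [1, 2, 3], [10, 20, 30]))
# → [1, None, 30]
```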
execute_async
def execute_async(self, key: TaskInstanceKey, command: CommandType, queue=None, executor_config=None): """ Save the task to be executed in the next sync by inserting the commands into a queue. :param key: A unique task key (typically a tuple identifying the task instance). :param command: The shell command string to execute. :param executor_config: (Unused) to keep the same signature as the base. :param queue: (Unused) to keep the same signature as the base. """ if len(command) == 1: from airflow.executors.workloads import ExecuteTask if isinstance(command[0], ExecuteTask): workload = command[0] ser_input = workload.model_dump_json() command = [ "python", "-m", "airflow.sdk.execution_time.execute_workload", "--json-string", ser_input, ] else: raise RuntimeError( f"LambdaExecutor doesn't know how to handle workload of type: {type(command[0])}" ) self.pending_tasks.append( LambdaQueuedTask( key, command, queue if queue else "", executor_config or {}, 1, timezone.utcnow() ) )
Save the task to be executed in the next sync by inserting the commands into a queue. :param key: A unique task key (typically a tuple identifying the task instance). :param command: The shell command string to execute. :param executor_config: (Unused) to keep the same signature as the base. :param queue: (Unused) to keep the same signature as the base.
python
providers/amazon/src/airflow/providers/amazon/aws/executors/aws_lambda/lambda_executor.py
225
[ "self", "key", "command", "queue", "executor_config" ]
true
6
7.04
apache/airflow
43,597
sphinx
false
timeout
public static CloseOptions timeout(final Duration timeout) { return new CloseOptions().withTimeout(timeout); }
Static method to create a {@code CloseOptions} with a custom timeout. @param timeout the maximum time to wait for the consumer to close. @return a new {@code CloseOptions} instance with the specified timeout.
java
clients/src/main/java/org/apache/kafka/clients/consumer/CloseOptions.java
68
[ "timeout" ]
CloseOptions
true
1
6.8
apache/kafka
31,560
javadoc
false
introspectPlainAccessors
private void introspectPlainAccessors(Class<?> beanClass, Set<String> readMethodNames) throws IntrospectionException { for (Method method : beanClass.getMethods()) { if (!this.propertyDescriptors.containsKey(method.getName()) && !readMethodNames.contains(method.getName()) && isPlainAccessor(method)) { this.propertyDescriptors.put(method.getName(), new GenericTypeAwarePropertyDescriptor(beanClass, method.getName(), method, null, null)); readMethodNames.add(method.getName()); } } }
Create a new CachedIntrospectionResults instance for the given class. @param beanClass the bean class to analyze @throws BeansException in case of introspection failure
java
spring-beans/src/main/java/org/springframework/beans/CachedIntrospectionResults.java
332
[ "beanClass", "readMethodNames" ]
void
true
4
6.24
spring-projects/spring-framework
59,386
javadoc
false
_min_max
def _min_max(self, kind: Literal["min", "max"], skipna: bool) -> Scalar: """ Min/max of non-NA/null values Parameters ---------- kind : {"min", "max"} skipna : bool Returns ------- scalar """ valid_vals = self._valid_sp_values has_nonnull_fill_vals = not self._null_fill_value and self.sp_index.ngaps > 0 if len(valid_vals) > 0: sp_min_max = getattr(valid_vals, kind)() # If a non-null fill value is currently present, it might be the min/max if has_nonnull_fill_vals: func = max if kind == "max" else min return func(sp_min_max, self.fill_value) elif skipna: return sp_min_max elif self.sp_index.ngaps == 0: # No NAs present return sp_min_max else: return na_value_for_dtype(self.dtype.subtype, compat=False) elif has_nonnull_fill_vals: return self.fill_value else: return na_value_for_dtype(self.dtype.subtype, compat=False)
Min/max of non-NA/null values Parameters ---------- kind : {"min", "max"} skipna : bool Returns ------- scalar
python
pandas/core/arrays/sparse/array.py
1,636
[ "self", "kind", "skipna" ]
Scalar
true
10
6.4
pandas-dev/pandas
47,362
numpy
false
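The fill-value-aware extremum logic of `_min_max` above reads as follows in a standalone sketch (plain lists; `sp_values` are the stored values and `ngaps` counts unstored positions that implicitly hold `fill_value`; NA/skipna handling from the real method is deliberately omitted here):

```python
def sparse_min_max(kind, sp_values, fill_value, ngaps):
    """Min/max over a sparse array whose unstored positions hold fill_value."""
    func = max if kind == "max" else min
    if sp_values:
        sp_min_max = func(sp_values)
        # Unstored positions may supply the extremum when gaps exist.
        if ngaps > 0:
            return func(sp_min_max, fill_value)
        return sp_min_max
    # No stored values at all: only the fill value remains (if any gaps).
    return fill_value if ngaps > 0 else None

print(sparse_min_max("min", [5, 7], 0, 3))  # → 0
print(sparse_min_max("max", [5, 7], 0, 3))  # → 7
```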
_replace_columnwise
def _replace_columnwise( self, mapping: dict[Hashable, tuple[Any, Any]], inplace: bool, regex ) -> Self: """ Dispatch to Series.replace column-wise. Parameters ---------- mapping : dict of the form {col: (target, value)} inplace : bool regex : bool or same types as `to_replace` in DataFrame.replace Returns ------- DataFrame """ # Operate column-wise res = self if inplace else self.copy(deep=False) ax = self.columns for i, ax_value in enumerate(ax): if ax_value in mapping: ser = self.iloc[:, i] target, value = mapping[ax_value] newobj = ser.replace(target, value, regex=regex) res._iset_item(i, newobj, inplace=inplace) return res if inplace else res.__finalize__(self)
Dispatch to Series.replace column-wise. Parameters ---------- mapping : dict of the form {col: (target, value)} inplace : bool regex : bool or same types as `to_replace` in DataFrame.replace Returns ------- DataFrame
python
pandas/core/frame.py
6,342
[ "self", "mapping", "inplace", "regex" ]
Self
true
5
6.72
pandas-dev/pandas
47,362
numpy
false
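The column-wise dispatch of `_replace_columnwise` above can be sketched over a dict-of-lists "frame" (hypothetical stand-in, not pandas internals; regex and inplace handling are omitted):

```python
def replace_columnwise(frame, mapping):
    """frame: dict of column -> list. mapping: column -> (target, value).
    Replace target with value only in the named columns; copy the rest."""
    out = {}
    for col, values in frame.items():
        if col in mapping:
            target, value = mapping[col]
            out[col] = [value if v == target else v for v in values]
        else:
            out[col] = list(values)
    return out

frame = {"a": [1, 2, 1], "b": [3, 4, 5]}
print(replace_columnwise(frame, {"a": (1, 9)}))
# → {'a': [9, 2, 9], 'b': [3, 4, 5]}
```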
bindValue
private void bindValue(Bindable<?> target, Collection<Object> collection, ResolvableType aggregateType, ResolvableType elementType, @Nullable Object value) { if (value == null || (value instanceof CharSequence charSequence && charSequence.isEmpty())) { return; } Object aggregate = convert(value, aggregateType, target.getAnnotations()); ResolvableType collectionType = ResolvableType.forClassWithGenerics(collection.getClass(), elementType); Collection<Object> elements = convert(aggregate, collectionType); if (elements != null) { collection.addAll(elements); } }
Bind indexed elements to the supplied collection. @param name the name of the property to bind @param target the target bindable @param elementBinder the binder to use for elements @param aggregateType the aggregate type, may be a collection or an array @param elementType the element type @param result the destination for results
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/IndexedElementsBinder.java
96
[ "target", "collection", "aggregateType", "elementType", "value" ]
void
true
5
6.4
spring-projects/spring-boot
79,428
javadoc
false
validateNonPattern
private void validateNonPattern(String location) { Assert.state(!isPattern(location), () -> String.format("Location '%s' must not be a pattern", location)); }
Get a single resource from a non-pattern location. @param location the location @return the resource @see #isPattern(String)
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/LocationResourceLoader.java
83
[ "location" ]
void
true
1
6
spring-projects/spring-boot
79,428
javadoc
false
between
@Deprecated public static <T extends Comparable<? super T>> Range<T> between(final T fromInclusive, final T toInclusive) { return of(fromInclusive, toInclusive, null); }
Creates a range with the specified minimum and maximum values (both inclusive). <p>The range uses the natural ordering of the elements to determine where values lie in the range.</p> <p>The arguments may be passed in the order (min,max) or (max,min). The getMinimum and getMaximum methods will return the correct values.</p> @param <T> the type of the elements in this range. @param fromInclusive the first value that defines the edge of the range, inclusive. @param toInclusive the second value that defines the edge of the range, inclusive. @return the range object, not null. @throws NullPointerException when fromInclusive is null. @throws NullPointerException when toInclusive is null. @throws ClassCastException if the elements are not {@link Comparable}. @deprecated Use {@link #of(Comparable, Comparable)}.
java
src/main/java/org/apache/commons/lang3/Range.java
80
[ "fromInclusive", "toInclusive" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
create_default_config_parser
def create_default_config_parser(configuration_description: dict[str, dict[str, Any]]) -> ConfigParser: """ Create default config parser based on configuration description. It creates ConfigParser with all default values retrieved from the configuration description and expands all the variables from the global and local variables defined in this module. :param configuration_description: configuration description - retrieved from config.yaml files following the schema defined in "config.yml.schema.json" in the config_templates folder. :return: Default Config Parser that can be used to read configuration values from. """ parser = ConfigParser() all_vars = get_all_expansion_variables() for section, section_desc in configuration_description.items(): parser.add_section(section) options = section_desc["options"] for key in options: default_value = options[key]["default"] is_template = options[key].get("is_template", False) if default_value is not None: if is_template or not isinstance(default_value, str): parser.set(section, key, default_value) else: parser.set(section, key, default_value.format(**all_vars)) return parser
Create default config parser based on configuration description. It creates ConfigParser with all default values retrieved from the configuration description and expands all the variables from the global and local variables defined in this module. :param configuration_description: configuration description - retrieved from config.yaml files following the schema defined in "config.yml.schema.json" in the config_templates folder. :return: Default Config Parser that can be used to read configuration values from.
python
airflow-core/src/airflow/configuration.py
659
[ "configuration_description" ]
ConfigParser
true
7
7.44
apache/airflow
43,597
sphinx
false
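The defaults-with-variable-expansion pattern of `create_default_config_parser` above, reduced to a stdlib `configparser` sketch (the `{section: {option: {"default": ...}}}` schema shape is taken from the docstring; template/non-string handling is simplified away, and the paths are made-up examples):

```python
from configparser import ConfigParser

def default_parser(description, variables):
    """Seed a ConfigParser from a {section: {option: {'default': ...}}} schema,
    expanding {var} placeholders in string defaults."""
    parser = ConfigParser()
    for section, options in description.items():
        parser.add_section(section)
        for key, meta in options.items():
            default = meta["default"]
            if default is not None:  # None means "no default": leave unset
                parser.set(section, key, default.format(**variables))
    return parser

desc = {"core": {"home": {"default": "{base}/airflow"}, "unset": {"default": None}}}
p = default_parser(desc, {"base": "/opt"})
print(p.get("core", "home"))  # → "/opt/airflow"
```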
child
public MemberPath child(int index) { return new MemberPath(this, null, index); }
Create a new child from this path with the specified index. @param index the index of the child @return a new {@link MemberPath} instance
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
805
[ "index" ]
MemberPath
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
iterator
@Override public Iterator<Character> iterator() { return new CharacterIterator(this); }
Returns an iterator which can be used to walk through the characters described by this range. <p>#NotThreadSafe# the iterator is not thread-safe</p> @return an iterator to the chars represented by this range @since 2.5
java
src/main/java/org/apache/commons/lang3/CharRange.java
341
[]
true
1
6.48
apache/commons-lang
2,896
javadoc
false
is_platform_arm
def is_platform_arm() -> bool: """ Checking if the running platform use ARM architecture. Returns ------- bool True if the running platform uses ARM architecture. """ return platform.machine() in ("arm64", "aarch64") or platform.machine().startswith( "armv" )
Checking if the running platform use ARM architecture. Returns ------- bool True if the running platform uses ARM architecture.
python
pandas/compat/__init__.py
103
[]
bool
true
2
6.56
pandas-dev/pandas
47,362
unknown
false
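The `is_platform_arm` check above is a one-liner over `platform.machine()`; a parameterized version makes it easy to exercise against known machine strings (the actual result on your host depends on its CPU):

```python
import platform

def is_platform_arm(machine=None):
    """True when the (given or current) machine string denotes an ARM CPU."""
    machine = machine if machine is not None else platform.machine()
    return machine in ("arm64", "aarch64") or machine.startswith("armv")

print(is_platform_arm("aarch64"))  # → True
print(is_platform_arm("x86_64"))   # → False
```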
dropRightWhile
function dropRightWhile(array, predicate) { return (array && array.length) ? baseWhile(array, getIteratee(predicate, 3), true, true) : []; }
Creates a slice of `array` excluding elements dropped from the end. Elements are dropped until `predicate` returns falsey. The predicate is invoked with three arguments: (value, index, array). @static @memberOf _ @since 3.0.0 @category Array @param {Array} array The array to query. @param {Function} [predicate=_.identity] The function invoked per iteration. @returns {Array} Returns the slice of `array`. @example var users = [ { 'user': 'barney', 'active': true }, { 'user': 'fred', 'active': false }, { 'user': 'pebbles', 'active': false } ]; _.dropRightWhile(users, function(o) { return !o.active; }); // => objects for ['barney'] // The `_.matches` iteratee shorthand. _.dropRightWhile(users, { 'user': 'pebbles', 'active': false }); // => objects for ['barney', 'fred'] // The `_.matchesProperty` iteratee shorthand. _.dropRightWhile(users, ['active', false]); // => objects for ['barney'] // The `_.property` iteratee shorthand. _.dropRightWhile(users, 'active'); // => objects for ['barney', 'fred', 'pebbles']
javascript
lodash.js
7,229
[ "array", "predicate" ]
false
3
7.2
lodash/lodash
61,490
jsdoc
false
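The `_.dropRightWhile` behavior documented above can be sketched in Python (the function name and tuple-based records are illustrative, not lodash's API):

```python
def drop_right_while(array, predicate):
    """Drop elements from the end of `array` while `predicate` holds,
    mirroring lodash's _.dropRightWhile."""
    i = len(array)
    while i > 0 and predicate(array[i - 1]):
        i -= 1
    return array[:i]

users = [("barney", True), ("fred", False), ("pebbles", False)]
# Drop trailing inactive users, as in the lodash example.
result = drop_right_while(users, lambda u: not u[1])  # → [("barney", True)]
```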
parseBinaryExpressionRest
function parseBinaryExpressionRest(precedence: OperatorPrecedence, leftOperand: Expression, pos: number): Expression { while (true) { // We either have a binary operator here, or we're finished. We call // reScanGreaterToken so that we merge token sequences like > and = into >= reScanGreaterToken(); const newPrecedence = getBinaryOperatorPrecedence(token()); // Check the precedence to see if we should "take" this operator // - For left associative operator (all operator but **), consume the operator, // recursively call the function below, and parse binaryExpression as a rightOperand // of the caller if the new precedence of the operator is greater than or equal to the current precedence. // For example: // a - b - c; // ^token; leftOperand = b. Return b to the caller as a rightOperand // a * b - c // ^token; leftOperand = b. Return b to the caller as a rightOperand // a - b * c; // ^token; leftOperand = b. Return b * c to the caller as a rightOperand // - For right associative operator (**), consume the operator, recursively call the function // and parse binaryExpression as a rightOperand of the caller if the new precedence of // the operator is strictly greater than the current precedence // For example: // a ** b ** c; // ^^token; leftOperand = b. Return b ** c to the caller as a rightOperand // a - b ** c; // ^^token; leftOperand = b. Return b ** c to the caller as a rightOperand // a ** b - c // ^token; leftOperand = b. Return b to the caller as a rightOperand const consumeCurrentOperator = token() === SyntaxKind.AsteriskAsteriskToken ? newPrecedence >= precedence : newPrecedence > precedence; if (!consumeCurrentOperator) { break; } if (token() === SyntaxKind.InKeyword && inDisallowInContext()) { break; } if (token() === SyntaxKind.AsKeyword || token() === SyntaxKind.SatisfiesKeyword) { // Make sure we *do* perform ASI for constructs like this: // var x = foo // as (Bar) // This should be parsed as an initialized variable, followed // by a function call to 'as' with the argument 'Bar' if (scanner.hasPrecedingLineBreak()) { break; } else { const keywordKind = token(); nextToken(); leftOperand = keywordKind === SyntaxKind.SatisfiesKeyword ? makeSatisfiesExpression(leftOperand, parseType()) : makeAsExpression(leftOperand, parseType()); } } else { leftOperand = makeBinaryExpression(leftOperand, parseTokenNode(), parseBinaryExpressionOrHigher(newPrecedence), pos); } } return leftOperand; }
Parses the rest of a binary expression given the already-parsed left operand. Operators are consumed while their precedence allows it: left-associative operators when the new precedence is greater than or equal to the current one, the right-associative ** operator only when strictly greater. Also handles `as`/`satisfies` expressions (with ASI) and the disallow-in context for `in`. @param precedence The minimum operator precedence to consume. @param leftOperand The expression parsed so far. @param pos The start position of the expression. @return The resulting expression.
typescript
src/compiler/parser.ts
5,608
[ "precedence", "leftOperand", "pos" ]
true
12
6.8
microsoft/TypeScript
107,154
jsdoc
false
dedup_names
def dedup_names( names: Sequence[Hashable], is_potential_multiindex: bool ) -> Sequence[Hashable]: """ Rename column names if duplicates exist. Currently the renaming is done by appending a period and an autonumeric, but a custom pattern may be supported in the future. Examples -------- >>> dedup_names(["x", "y", "x", "x"], is_potential_multiindex=False) ['x', 'y', 'x.1', 'x.2'] """ names = list(names) # so we can index counts: DefaultDict[Hashable, int] = defaultdict(int) for i, col in enumerate(names): cur_count = counts[col] while cur_count > 0: counts[col] = cur_count + 1 if is_potential_multiindex: # for mypy assert isinstance(col, tuple) col = col[:-1] + (f"{col[-1]}.{cur_count}",) else: col = f"{col}.{cur_count}" cur_count = counts[col] names[i] = col counts[col] = cur_count + 1 return names
Rename column names if duplicates exist. Currently the renaming is done by appending a period and an autonumeric, but a custom pattern may be supported in the future. Examples -------- >>> dedup_names(["x", "y", "x", "x"], is_potential_multiindex=False) ['x', 'y', 'x.1', 'x.2']
python
pandas/io/common.py
1,248
[ "names", "is_potential_multiindex" ]
Sequence[Hashable]
true
5
7.44
pandas-dev/pandas
47,362
unknown
false
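The counting scheme in `dedup_names` above can be exercised with a simplified standalone sketch (multi-index handling dropped for brevity):

```python
from collections import defaultdict

def dedup_names(names):
    """Append '.<n>' suffixes to repeated names, following the counting
    logic of the pandas helper above (non-multiindex case only)."""
    names = list(names)
    counts = defaultdict(int)
    for i, col in enumerate(names):
        cur_count = counts[col]
        # Keep suffixing until the candidate name is unused so far.
        while cur_count > 0:
            counts[col] = cur_count + 1
            col = f"{col}.{cur_count}"
            cur_count = counts[col]
        names[i] = col
        counts[col] = cur_count + 1
    return names

# → ['x', 'y', 'x.1', 'x.2'], matching the docstring example
deduped = dedup_names(["x", "y", "x", "x"])
```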
optString
public String optString(int index) { return optString(index, ""); }
Returns the value at {@code index} if it exists, coercing it if necessary. Returns the empty string if no such value exists. @param index the index to get the value from @return the {@code value} or an empty string
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONArray.java
499
[ "index" ]
String
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
acceptParams
function acceptParams (str) { var length = str.length; var colonIndex = str.indexOf(';'); var index = colonIndex === -1 ? length : colonIndex; var ret = { value: str.slice(0, index).trim(), quality: 1, params: {} }; while (index < length) { var splitIndex = str.indexOf('=', index); if (splitIndex === -1) break; var colonIndex = str.indexOf(';', index); var endIndex = colonIndex === -1 ? length : colonIndex; if (splitIndex > endIndex) { index = str.lastIndexOf(';', splitIndex - 1) + 1; continue; } var key = str.slice(index, splitIndex).trim(); var value = str.slice(splitIndex + 1, endIndex).trim(); if (key === 'q') { ret.quality = parseFloat(value); } else { ret.params[key] = value; } index = endIndex + 1; } return ret; }
Parse accept params `str` returning an object with `.value`, `.quality` and `.params`. @param {String} str @return {Object} @api private
javascript
lib/utils.js
89
[ "str" ]
false
8
6.08
expressjs/express
68,358
jsdoc
false
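The Express `acceptParams` helper above can be approximated in Python; this sketch splits on `;` directly and so skips the edge case the original handles where an `=` belongs to a later segment (function name is illustrative):

```python
def accept_params(s: str) -> dict:
    """Parse an Accept-header entry like 'text/html;q=0.8;level=1'
    into value / quality / params, simplified from the Express helper."""
    parts = s.split(";")
    ret = {"value": parts[0].strip(), "quality": 1.0, "params": {}}
    for part in parts[1:]:
        if "=" not in part:
            continue
        key, _, value = part.partition("=")
        key, value = key.strip(), value.strip()
        if key == "q":
            ret["quality"] = float(value)
        else:
            ret["params"][key] = value
    return ret

parsed = accept_params("text/html;q=0.8;level=1")
```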
recordWritten
private void recordWritten(long offset, long timestamp, int size) { if (numRecords == Integer.MAX_VALUE) throw new IllegalArgumentException("Maximum number of records per batch exceeded, max records: " + Integer.MAX_VALUE); if (offset - baseOffset > Integer.MAX_VALUE) throw new IllegalArgumentException("Maximum offset delta exceeded, base offset: " + baseOffset + ", last offset: " + offset); numRecords += 1; uncompressedRecordsSizeInBytes += size; lastOffset = offset; if (magic > RecordBatch.MAGIC_VALUE_V0 && timestamp > maxTimestamp) { maxTimestamp = timestamp; offsetOfMaxTimestamp = offset; } }
Update the builder's bookkeeping after a record is written: increments the record count, accumulates the uncompressed size, tracks the last offset, and (for magic greater than v0) the max timestamp and its offset. Validates that the record count and offset delta stay within 32-bit bounds. @param offset The offset of the written record @param timestamp The timestamp of the written record @param size The uncompressed size of the written record in bytes
java
clients/src/main/java/org/apache/kafka/common/record/MemoryRecordsBuilder.java
787
[ "offset", "timestamp", "size" ]
void
true
5
7.04
apache/kafka
31,560
javadoc
false
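The offset-delta guard in `recordWritten` above (the delta between a record's offset and the batch base offset must fit in a signed 32-bit int) can be sketched in Python; the constant and function names are hypothetical:

```python
INT32_MAX = 2**31 - 1

def check_offset_delta(base_offset: int, offset: int) -> None:
    """Mirror the MemoryRecordsBuilder guard: reject offsets whose
    delta from the batch base offset overflows a signed 32-bit int."""
    if offset - base_offset > INT32_MAX:
        raise ValueError(
            f"Maximum offset delta exceeded, base offset: {base_offset}, "
            f"last offset: {offset}"
        )
```

The delta (not the absolute offset) is what is stored per record, which is why the bound applies to the difference.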
createEntrySet
@Override ImmutableSet<Entry<K, V>> createEntrySet() { final class EntrySet extends ImmutableMapEntrySet<K, V> { @Override public UnmodifiableIterator<Entry<K, V>> iterator() { return asList().iterator(); } @Override public Spliterator<Entry<K, V>> spliterator() { return asList().spliterator(); } @Override public void forEach(Consumer<? super Entry<K, V>> action) { asList().forEach(action); } @Override ImmutableList<Entry<K, V>> createAsList() { return new ImmutableAsList<Entry<K, V>>() { @Override public Entry<K, V> get(int index) { return new AbstractMap.SimpleImmutableEntry<>( keySet.asList().get(index), valueList.get(index)); } @Override public Spliterator<Entry<K, V>> spliterator() { return CollectSpliterators.indexed( size(), ImmutableSet.SPLITERATOR_CHARACTERISTICS, this::get); } @Override ImmutableCollection<Entry<K, V>> delegateCollection() { return EntrySet.this; } // redeclare to help optimizers with b/310253115 @SuppressWarnings("RedundantOverride") @Override @J2ktIncompatible @GwtIncompatible Object writeReplace() { return super.writeReplace(); } }; } @Override ImmutableMap<K, V> map() { return ImmutableSortedMap.this; } // redeclare to help optimizers with b/310253115 @SuppressWarnings("RedundantOverride") @Override @J2ktIncompatible @GwtIncompatible Object writeReplace() { return super.writeReplace(); } } return isEmpty() ? ImmutableSet.of() : new EntrySet(); }
Returns an immutable set of the mappings in this map, sorted by the key ordering.
java
guava/src/com/google/common/collect/ImmutableSortedMap.java
806
[]
true
2
7.12
google/guava
51,352
javadoc
false
getMatchingAccessibleConstructor
public static <T> Constructor<T> getMatchingAccessibleConstructor(final Class<T> cls, final Class<?>... parameterTypes) { Objects.requireNonNull(cls, "cls"); // see if we can find the constructor directly // most of the time this works and it's much faster try { return MemberUtils.setAccessibleWorkaround(cls.getConstructor(parameterTypes)); } catch (final NoSuchMethodException ignored) { // ignore } Constructor<T> result = null; /* * (1) Class.getConstructors() is documented to return Constructor<T> so as long as the array is not subsequently modified, everything's fine. */ final Constructor<?>[] ctors = cls.getConstructors(); // return best match: for (Constructor<?> ctor : ctors) { // compare parameters if (MemberUtils.isMatchingConstructor(ctor, parameterTypes)) { // get accessible version of constructor ctor = getAccessibleConstructor(ctor); if (ctor != null) { MemberUtils.setAccessibleWorkaround(ctor); if (result == null || MemberUtils.compareConstructorFit(ctor, result, parameterTypes) < 0) { // temporary variable for annotation, see comment above (1) @SuppressWarnings("unchecked") final Constructor<T> constructor = (Constructor<T>) ctor; result = constructor; } } } } return result; }
Finds an accessible constructor with compatible parameters. <p> This checks all the constructor and finds one with compatible parameters This requires that every parameter is assignable from the given parameter types. This is a more flexible search than the normal exact matching algorithm. </p> <p> First it checks if there is a constructor matching the exact signature. If not then all the constructors of the class are checked to see if their signatures are assignment-compatible with the parameter types. The first assignment-compatible matching constructor is returned. </p> @param <T> the constructor type. @param cls the class to find a constructor for, not {@code null}. @param parameterTypes find method with compatible parameters. @return the constructor, null if no matching accessible constructor found. @throws NullPointerException Thrown if {@code cls} is {@code null} @throws SecurityException Thrown if a security manager is present and the caller's class loader is not the same as or an ancestor of the class loader for the class and invocation of {@link SecurityManager#checkPackageAccess(String)} denies access to the package of the class. @see SecurityManager#checkPackageAccess(String)
java
src/main/java/org/apache/commons/lang3/reflect/ConstructorUtils.java
115
[ "cls", "parameterTypes" ]
true
6
8.08
apache/commons-lang
2,896
javadoc
false
stopAsync
@CanIgnoreReturnValue public ServiceManager stopAsync() { for (Service service : services) { service.stopAsync(); } return this; }
Initiates service {@linkplain Service#stopAsync shutdown} if necessary on all the services being managed. @return this
java
android/guava/src/com/google/common/util/concurrent/ServiceManager.java
341
[]
ServiceManager
true
1
6.08
google/guava
51,352
javadoc
false
getDescription
@Override protected String getDescription() { Assert.notNull(this.listener, "'listener' must not be null"); return "listener " + this.listener; }
Return a textual description of this registration, based on the listener to be registered. @return the description of the registration
java
core/spring-boot/src/main/java/org/springframework/boot/web/servlet/ServletListenerRegistrationBean.java
111
[]
String
true
1
6.88
spring-projects/spring-boot
79,428
javadoc
false
is_bool_dtype
def is_bool_dtype(arr_or_dtype) -> bool: """ Check whether the provided array or dtype is of a boolean dtype. This function verifies whether a given object is a boolean data type. The input can be an array or a dtype object. Accepted array types include instances of ``np.array``, ``pd.Series``, ``pd.Index``, and similar array-like structures. Parameters ---------- arr_or_dtype : array-like or dtype The array or dtype to check. Returns ------- boolean Whether or not the array or dtype is of a boolean dtype. See Also -------- api.types.is_bool : Check if an object is a boolean. Notes ----- An ExtensionArray is considered boolean when the ``_is_boolean`` attribute is set to True. Examples -------- >>> from pandas.api.types import is_bool_dtype >>> is_bool_dtype(str) False >>> is_bool_dtype(int) False >>> is_bool_dtype(bool) True >>> is_bool_dtype(np.bool_) True >>> is_bool_dtype(np.array(["a", "b"])) False >>> is_bool_dtype(pd.Series([1, 2])) False >>> is_bool_dtype(np.array([True, False])) True >>> is_bool_dtype(pd.Categorical([True, False])) True >>> is_bool_dtype(pd.arrays.SparseArray([True, False])) True """ if arr_or_dtype is None: return False try: dtype = _get_dtype(arr_or_dtype) except (TypeError, ValueError): return False if isinstance(dtype, CategoricalDtype): arr_or_dtype = dtype.categories # now we use the special definition for Index if isinstance(arr_or_dtype, ABCIndex): # Allow Index[object] that is all-bools or Index["boolean"] if arr_or_dtype.inferred_type == "boolean": if not is_bool_dtype(arr_or_dtype.dtype): # GH#52680 warnings.warn( "The behavior of is_bool_dtype with an object-dtype Index " "of bool objects is deprecated. In a future version, " "this will return False. Cast the Index to a bool dtype instead.", Pandas4Warning, stacklevel=2, ) return True return False elif isinstance(dtype, ExtensionDtype): return getattr(dtype, "_is_boolean", False) return issubclass(dtype.type, np.bool_)
Check whether the provided array or dtype is of a boolean dtype. This function verifies whether a given object is a boolean data type. The input can be an array or a dtype object. Accepted array types include instances of ``np.array``, ``pd.Series``, ``pd.Index``, and similar array-like structures. Parameters ---------- arr_or_dtype : array-like or dtype The array or dtype to check. Returns ------- boolean Whether or not the array or dtype is of a boolean dtype. See Also -------- api.types.is_bool : Check if an object is a boolean. Notes ----- An ExtensionArray is considered boolean when the ``_is_boolean`` attribute is set to True. Examples -------- >>> from pandas.api.types import is_bool_dtype >>> is_bool_dtype(str) False >>> is_bool_dtype(int) False >>> is_bool_dtype(bool) True >>> is_bool_dtype(np.bool_) True >>> is_bool_dtype(np.array(["a", "b"])) False >>> is_bool_dtype(pd.Series([1, 2])) False >>> is_bool_dtype(np.array([True, False])) True >>> is_bool_dtype(pd.Categorical([True, False])) True >>> is_bool_dtype(pd.arrays.SparseArray([True, False])) True
python
pandas/core/dtypes/common.py
1,393
[ "arr_or_dtype" ]
bool
true
7
7.92
pandas-dev/pandas
47,362
numpy
false
parseDateStrictly
public static Date parseDateStrictly(final String str, final Locale locale, final String... parsePatterns) throws ParseException { return parseDateWithLeniency(str, locale, parsePatterns, false); }
Parses a string representing a date by trying a variety of different parsers, using the default date format symbols for the given locale. <p>The parse will try each parse pattern in turn. A parse is only deemed successful if it parses the whole of the input string. If no parse patterns match, a ParseException is thrown.</p> The parser parses strictly - it does not allow for dates such as "February 942, 1996". @param str the date to parse, not null. @param locale the locale whose date format symbols should be used. If {@code null}, the system locale is used (as per {@link #parseDateStrictly(String, String...)}). @param parsePatterns the date format patterns to use, see SimpleDateFormat, not null. @return the parsed date. @throws NullPointerException if the date string or pattern array is null. @throws ParseException if none of the date patterns were suitable. @since 3.2
java
src/main/java/org/apache/commons/lang3/time/DateUtils.java
1,304
[ "str", "locale", "parsePatterns" ]
Date
true
1
6.8
apache/commons-lang
2,896
javadoc
false
filterPropertyDescriptorsForDependencyCheck
protected PropertyDescriptor[] filterPropertyDescriptorsForDependencyCheck(BeanWrapper bw, boolean cache) { PropertyDescriptor[] filtered = this.filteredPropertyDescriptorsCache.get(bw.getWrappedClass()); if (filtered == null) { filtered = filterPropertyDescriptorsForDependencyCheck(bw); if (cache) { PropertyDescriptor[] existing = this.filteredPropertyDescriptorsCache.putIfAbsent(bw.getWrappedClass(), filtered); if (existing != null) { filtered = existing; } } } return filtered; }
Extract a filtered set of PropertyDescriptors from the given BeanWrapper, excluding ignored dependency types or properties defined on ignored dependency interfaces. @param bw the BeanWrapper the bean was created with @param cache whether to cache filtered PropertyDescriptors for the given bean Class @return the filtered PropertyDescriptors @see #isExcludedFromDependencyCheck @see #filterPropertyDescriptorsForDependencyCheck(org.springframework.beans.BeanWrapper)
java
spring-beans/src/main/java/org/springframework/beans/factory/support/AbstractAutowireCapableBeanFactory.java
1,580
[ "bw", "cache" ]
true
4
7.28
spring-projects/spring-framework
59,386
javadoc
false
findAutowireCandidates
protected Map<String, Object> findAutowireCandidates( @Nullable String beanName, Class<?> requiredType, DependencyDescriptor descriptor) { String[] candidateNames = BeanFactoryUtils.beanNamesForTypeIncludingAncestors( this, requiredType, true, descriptor.isEager()); Map<String, Object> result = CollectionUtils.newLinkedHashMap(candidateNames.length); for (Map.Entry<Class<?>, Object> classObjectEntry : this.resolvableDependencies.entrySet()) { Class<?> autowiringType = classObjectEntry.getKey(); if (autowiringType.isAssignableFrom(requiredType)) { Object autowiringValue = classObjectEntry.getValue(); autowiringValue = AutowireUtils.resolveAutowiringValue(autowiringValue, requiredType); if (requiredType.isInstance(autowiringValue)) { result.put(ObjectUtils.identityToString(autowiringValue), autowiringValue); break; } } } for (String candidate : candidateNames) { if (!isSelfReference(beanName, candidate) && isAutowireCandidate(candidate, descriptor)) { addCandidateEntry(result, candidate, descriptor, requiredType); } } if (result.isEmpty()) { boolean multiple = indicatesArrayCollectionOrMap(requiredType); // Consider fallback matches if the first pass failed to find anything... DependencyDescriptor fallbackDescriptor = descriptor.forFallbackMatch(); for (String candidate : candidateNames) { if (!isSelfReference(beanName, candidate) && isAutowireCandidate(candidate, fallbackDescriptor) && (!multiple || matchesBeanName(candidate, descriptor.getDependencyName()) || getAutowireCandidateResolver().hasQualifier(descriptor))) { addCandidateEntry(result, candidate, descriptor, requiredType); } } if (result.isEmpty() && !multiple) { // Consider self references as a final pass... // but in the case of a dependency collection, not the very same bean itself. for (String candidate : candidateNames) { if (isSelfReference(beanName, candidate) && (!(descriptor instanceof MultiElementDescriptor) || !beanName.equals(candidate)) && isAutowireCandidate(candidate, fallbackDescriptor)) { addCandidateEntry(result, candidate, descriptor, requiredType); } } } } return result; }
Find bean instances that match the required type. Called during autowiring for the specified bean. @param beanName the name of the bean that is about to be wired @param requiredType the actual type of bean to look for (may be an array component type or collection element type) @param descriptor the descriptor of the dependency to resolve @return a Map of candidate names and candidate instances that match the required type (never {@code null}) @throws BeansException in case of errors @see #autowireByType @see #autowireConstructor
java
spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultListableBeanFactory.java
1,952
[ "beanName", "requiredType", "descriptor" ]
true
17
6.72
spring-projects/spring-framework
59,386
javadoc
false
isSetterDefinedInInterface
public static boolean isSetterDefinedInInterface(PropertyDescriptor pd, Set<Class<?>> interfaces) { Method setter = pd.getWriteMethod(); if (setter != null) { Class<?> targetClass = setter.getDeclaringClass(); for (Class<?> ifc : interfaces) { if (ifc.isAssignableFrom(targetClass) && ClassUtils.hasMethod(ifc, setter)) { return true; } } } return false; }
Return whether the setter method of the given bean property is defined in any of the given interfaces. @param pd the PropertyDescriptor of the bean property @param interfaces the Set of interfaces (Class objects) @return whether the setter method is defined by an interface
java
spring-beans/src/main/java/org/springframework/beans/factory/support/AutowireUtils.java
114
[ "pd", "interfaces" ]
true
4
7.6
spring-projects/spring-framework
59,386
javadoc
false
toString
@Override public String toString() { return this.out.isEmpty() ? null : this.out.toString(); }
Returns the encoded JSON string. <p> If invoked with unterminated arrays or unclosed objects, this method's return value is undefined. <p> <strong>Warning:</strong> although it contradicts the general contract of {@link Object#toString}, this method returns null if the stringer contains no data. @return the encoded JSON string.
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONStringer.java
424
[]
String
true
2
7.52
spring-projects/spring-boot
79,428
javadoc
false
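The `toString()` contract in the JSONStringer row above (return the encoded text, or null when nothing was written) can be sketched with a tiny Python class; the class and method names are illustrative:

```python
class Stringer:
    """Minimal sketch of JSONStringer.toString(): return the buffered
    output, or None when the buffer is empty."""

    def __init__(self):
        self._out = []

    def write(self, s: str) -> None:
        self._out.append(s)

    def to_string(self):
        # Empty buffer maps to None, mirroring the null return in Java.
        return "".join(self._out) or None
```

Returning None for an empty buffer deliberately breaks the usual "toString never returns null" convention, which is why the original javadoc calls it out as a warning.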