function_name: string (length 1–57)
function_code: string (length 20–4.99k)
documentation: string (length 50–2k)
language: string (5 classes)
file_path: string (length 8–166)
line_number: int32 (4–16.7k)
parameters: list (length 0–20)
return_type: string (length 0–131)
has_type_hints: bool (2 classes)
complexity: int32 (1–51)
quality_score: float32 (6–9.68)
repo_name: string (34 classes)
repo_stars: int32 (2.9k–242k)
docstring_style: string (7 classes)
is_async: bool (2 classes)
ofIfValid
public static @Nullable ConfigurationPropertyName ofIfValid(@Nullable CharSequence name) { return of(name, true); }
Return a {@link ConfigurationPropertyName} for the specified string or {@code null} if the name is not valid. @param name the source name @return a {@link ConfigurationPropertyName} instance @since 2.3.1
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java
650
[ "name" ]
ConfigurationPropertyName
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
nextBytes
@Deprecated public static byte[] nextBytes(final int count) { return secure().randomBytes(count); }
Generates an array of random bytes. @param count the size of the returned array. @return the random byte array. @throws IllegalArgumentException if {@code count} is negative. @deprecated Use {@link #secure()}, {@link #secureStrong()}, or {@link #insecure()}.
java
src/main/java/org/apache/commons/lang3/RandomUtils.java
126
[ "count" ]
true
1
6.32
apache/commons-lang
2,896
javadoc
false
has_wrong_whitespace
def has_wrong_whitespace(first_line: str, second_line: str) -> bool:
    """
    Checking if the two lines are mattching the unwanted pattern.

    Parameters
    ----------
    first_line : str
        First line to check.
    second_line : str
        Second line to check.

    Returns
    -------
    bool
        True if the two received string match, an unwanted pattern.

    Notes
    -----
    The unwanted pattern that we are trying to catch is if the spaces in a
    string that is concatenated over multiple lines are placed at the end of
    each string, unless this string is ending with a newline character (\n).

    For example, this is bad:

    >>> rule = "We want the space at the end of the line, not at the beginning"

    And what we want is:

    >>> rule = "We want the space at the end of the line, not at the beginning"

    And if the string is ending with a new line character (\n) we do not want
    any trailing whitespaces after it.

    For example, this is bad:

    >>> rule = (
    ...     "We want the space at the begging of "
    ...     "the line if the previous line is ending with a \n "
    ...     "not at the end, like always"
    ... )

    And what we do want is:

    >>> rule = (
    ...     "We want the space at the begging of "
    ...     "the line if the previous line is ending with a \n"
    ...     " not at the end, like always"
    ... )
    """
    if first_line.endswith(r"\n"):
        return False
    elif first_line.startswith(" ") or second_line.startswith(" "):
        return False
    elif first_line.endswith(" ") or second_line.endswith(" "):
        return False
    elif (not first_line.endswith(" ")) and second_line.startswith(" "):
        return True
    return False
Checking if the two lines are mattching the unwanted pattern. Parameters ---------- first_line : str First line to check. second_line : str Second line to check. Returns ------- bool True if the two received string match, an unwanted pattern. Notes ----- The unwanted pattern that we are trying to catch is if the spaces in a string that is concatenated over multiple lines are placed at the end of each string, unless this string is ending with a newline character (\n). For example, this is bad: >>> rule = "We want the space at the end of the line, not at the beginning" And what we want is: >>> rule = "We want the space at the end of the line, not at the beginning" And if the string is ending with a new line character (\n) we do not want any trailing whitespaces after it. For example, this is bad: >>> rule = ( ... "We want the space at the begging of " ... "the line if the previous line is ending with a \n " ... "not at the end, like always" ... ) And what we do want is: >>> rule = ( ... "We want the space at the begging of " ... "the line if the previous line is ending with a \n" ... " not at the end, like always" ... )
python
scripts/validate_unwanted_patterns.py
207
[ "first_line", "second_line" ]
bool
true
8
8.32
pandas-dev/pandas
47,362
numpy
false
maybeBindThisJoinPointStaticPart
private void maybeBindThisJoinPointStaticPart() { if (this.argumentTypes[0] == JoinPoint.StaticPart.class) { bindParameterName(0, THIS_JOIN_POINT_STATIC_PART); } }
If the first parameter is of type JoinPoint or ProceedingJoinPoint, bind "thisJoinPoint" as parameter name and return true, else return false.
java
spring-aop/src/main/java/org/springframework/aop/aspectj/AspectJAdviceParameterNameDiscoverer.java
319
[]
void
true
2
6.56
spring-projects/spring-framework
59,386
javadoc
false
on
@GwtIncompatible // java.util.regex
public static Splitter on(Pattern separatorPattern) {
  return onPatternInternal(new JdkPattern(separatorPattern));
}
Returns a splitter that considers any subsequence matching {@code pattern} to be a separator. For example, {@code Splitter.on(Pattern.compile("\r?\n")).split(entireFile)} splits a string into lines whether it uses DOS-style or UNIX-style line terminators. @param separatorPattern the pattern that determines whether a subsequence is a separator. This pattern may not match the empty string. @return a splitter, with default settings, that uses this pattern @throws IllegalArgumentException if {@code separatorPattern} matches the empty string
java
android/guava/src/com/google/common/base/Splitter.java
207
[ "separatorPattern" ]
Splitter
true
1
6.16
google/guava
51,352
javadoc
false
fnv64_FIXED
constexpr uint64_t fnv64_FIXED( const char* buf, uint64_t hash = fnv64_hash_start) noexcept { for (; *buf; ++buf) { hash = fnv64_append_byte_FIXED(hash, static_cast<uint8_t>(*buf)); } return hash; }
FNV hash of a c-str. Continues hashing until a null byte is reached. @param hash The initial hash seed. @see fnv32 @methodset fnv
cpp
folly/hash/FnvHash.h
300
[]
true
2
7.04
facebook/folly
30,157
doxygen
false
constant
public static <E extends @Nullable Object> Function<@Nullable Object, E> constant( @ParametricNullness E value) { return new ConstantFunction<>(value); }
Returns a function that ignores its input and always returns {@code value}. <p>Prefer to use the lambda expression {@code o -> value} instead. Note that it is not serializable unless you explicitly make it {@link Serializable}, typically by writing {@code (Function<Object, E> & Serializable) o -> value}. @param value the constant value for the function to return @return a function that always returns {@code value}
java
android/guava/src/com/google/common/base/Functions.java
357
[ "value" ]
true
1
6.96
google/guava
51,352
javadoc
false
doubleQuoteMatcher
public static StrMatcher doubleQuoteMatcher() { return DOUBLE_QUOTE_MATCHER; }
Gets the matcher for the double quote character. @return the matcher for a double quote.
java
src/main/java/org/apache/commons/lang3/text/StrMatcher.java
295
[]
StrMatcher
true
1
6.96
apache/commons-lang
2,896
javadoc
false
drawWeb
function drawWeb(nodeToData: Map<HostInstance, Data>) {
  if (canvas === null) {
    initialize();
  }

  const dpr = window.devicePixelRatio || 1;
  const canvasFlow: HTMLCanvasElement = ((canvas: any): HTMLCanvasElement);
  canvasFlow.width = window.innerWidth * dpr;
  canvasFlow.height = window.innerHeight * dpr;
  canvasFlow.style.width = `${window.innerWidth}px`;
  canvasFlow.style.height = `${window.innerHeight}px`;

  const context = canvasFlow.getContext('2d');
  context.scale(dpr, dpr);
  context.clearRect(0, 0, canvasFlow.width / dpr, canvasFlow.height / dpr);

  const mergedNodes = groupAndSortNodes(nodeToData);

  mergedNodes.forEach(group => {
    drawGroupBorders(context, group);
    drawGroupLabel(context, group);
  });

  if (canvas !== null) {
    if (nodeToData.size === 0 && canvas.matches(':popover-open')) {
      // $FlowFixMe[prop-missing]: Flow doesn't recognize Popover API
      // $FlowFixMe[incompatible-use]: Flow doesn't recognize Popover API
      canvas.hidePopover();
      return;
    }

    // $FlowFixMe[incompatible-use]: Flow doesn't recognize Popover API
    if (canvas.matches(':popover-open')) {
      // $FlowFixMe[prop-missing]: Flow doesn't recognize Popover API
      // $FlowFixMe[incompatible-use]: Flow doesn't recognize Popover API
      canvas.hidePopover();
    }

    // $FlowFixMe[prop-missing]: Flow doesn't recognize Popover API
    // $FlowFixMe[incompatible-use]: Flow doesn't recognize Popover API
    canvas.showPopover();
  }
}
Copyright (c) Meta Platforms, Inc. and affiliates. This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree. @flow
javascript
packages/react-devtools-shared/src/backend/views/TraceUpdates/canvas.js
45
[]
false
7
6.4
facebook/react
241,750
jsdoc
false
maybeWrapAsKafkaException
public static KafkaException maybeWrapAsKafkaException(Throwable t) { if (t instanceof KafkaException) return (KafkaException) t; else return new KafkaException(t); }
Update subscription state and metadata using the provided committed offsets: <li>Update partition offsets with the committed offsets</li> <li>Update the metadata with any newer leader epoch discovered in the committed offsets metadata</li> </p> This will ignore any partition included in the <code>offsetsAndMetadata</code> parameter that may no longer be assigned. @param offsetsAndMetadata Committed offsets and metadata to be used for updating the subscription state and metadata object. @param metadata Metadata object to update with a new leader epoch if discovered in the committed offsets' metadata. @param subscriptions Subscription state to update, setting partitions' offsets to the committed offsets.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerUtils.java
249
[ "t" ]
KafkaException
true
2
6.24
apache/kafka
31,560
javadoc
false
ensureCoordinatorReady
private synchronized boolean ensureCoordinatorReady(final Timer timer, boolean disableWakeup) {
    if (!coordinatorUnknown())
        return true;

    long attempts = 0L;
    do {
        if (fatalFindCoordinatorException != null) {
            final RuntimeException fatalException = fatalFindCoordinatorException;
            fatalFindCoordinatorException = null;
            throw fatalException;
        }
        final RequestFuture<Void> future = lookupCoordinator();
        client.poll(future, timer, disableWakeup);

        if (!future.isDone()) {
            // ran out of time
            break;
        }

        RuntimeException fatalException = null;

        if (future.failed()) {
            if (future.isRetriable()) {
                log.debug("Coordinator discovery failed, refreshing metadata", future.exception());
                timer.sleep(retryBackoff.backoff(attempts++));
                client.awaitMetadataUpdate(timer);
            } else {
                fatalException = future.exception();
                log.info("FindCoordinator request hit fatal exception", fatalException);
            }
        } else if (coordinator != null && client.isUnavailable(coordinator)) {
            // we found the coordinator, but the connection has failed, so mark
            // it dead and backoff before retrying discovery
            markCoordinatorUnknown("coordinator unavailable");
            timer.sleep(retryBackoff.backoff(attempts++));
        }

        clearFindCoordinatorFuture();
        if (fatalException != null)
            throw fatalException;
    } while (coordinatorUnknown() && timer.notExpired());

    return !coordinatorUnknown();
}
Ensure that the coordinator is ready to receive requests. This will return immediately without blocking. It is intended to be called in an asynchronous context when wakeups are not expected. @return true If coordinator discovery and initial connection succeeded, false otherwise
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
284
[ "timer", "disableWakeup" ]
true
10
7.04
apache/kafka
31,560
javadoc
false
minutesFrac
public double minutesFrac() { return ((double) nanos()) / C4; }
@return the number of {@link #timeUnit()} units this value contains
java
libs/core/src/main/java/org/elasticsearch/core/TimeValue.java
194
[]
true
1
6
elastic/elasticsearch
75,680
javadoc
false
newConcurrentHashSet
public static <E> Set<E> newConcurrentHashSet() { return Platform.newConcurrentHashSet(); }
Creates a thread-safe set backed by a hash map. The set is backed by a {@link ConcurrentHashMap} instance, and thus carries the same concurrency guarantees. <p>Unlike {@code HashSet}, this class does NOT allow {@code null} to be used as an element. The set is serializable. @return a new, empty thread-safe {@code Set} @since 15.0
java
guava/src/com/google/common/collect/Sets.java
280
[]
true
1
6.8
google/guava
51,352
javadoc
false
checkedCast
public static char checkedCast(long value) { char result = (char) value; checkArgument(result == value, "Out of range: %s", value); return result; }
Returns the {@code char} value that is equal to {@code value}, if possible. @param value any value in the range of the {@code char} type @return the {@code char} value that equals {@code value} @throws IllegalArgumentException if {@code value} is greater than {@link Character#MAX_VALUE} or less than {@link Character#MIN_VALUE}
java
android/guava/src/com/google/common/primitives/Chars.java
85
[ "value" ]
true
1
6.56
google/guava
51,352
javadoc
false
andThen
default FailableIntUnaryOperator<E> andThen(final FailableIntUnaryOperator<E> after) { Objects.requireNonNull(after); return (final int t) -> after.applyAsInt(applyAsInt(t)); }
Returns a composed {@link FailableDoubleUnaryOperator} like {@link IntUnaryOperator#andThen(IntUnaryOperator)}. @param after the operator to apply after this one. @return a composed {@link FailableIntUnaryOperator} like {@link IntUnaryOperator#andThen(IntUnaryOperator)}. @throws NullPointerException if after is null. @see #compose(FailableIntUnaryOperator)
java
src/main/java/org/apache/commons/lang3/function/FailableIntUnaryOperator.java
64
[ "after" ]
true
1
6
apache/commons-lang
2,896
javadoc
false
_explain_graph_detail
def _explain_graph_detail(
    gm: torch.fx.GraphModule,
    graphs: list[torch.fx.GraphModule],
    op_count: int,
    ops_per_graph: list[list["Target"]],
    break_reasons: list[GraphCompileReason],
) -> tuple[
    torch.fx.GraphModule,
    list[torch.fx.GraphModule],
    int,
    list[list["Target"]],
    list[GraphCompileReason],
]:
    """
    This function is a utility which processes a torch.fx.GraphModule and
    accumulates information about its ops, graph breaks, and other details. It
    is intended to be used by the ExplainWithBackend class and
    `torch._dynamo.explain()` to provide details from Dynamo's graph capture.

    Parameters:
        gm (torch.fx.GraphModule): The GraphModule to be processed.
        graphs (list): A list that accumulates all the GraphModules processed.
        op_count (int): The total count of operations in all GraphModules processed so far.
        ops_per_graph (list): A list that accumulates the operations of each GraphModule.
        break_reasons (list): A list that accumulates the reasons for breaks in each GraphModule.

    Returns:
        tuple: A tuple containing the processed GraphModule, the updated lists
            of graphs, operations per graph, and break reasons, and the updated
            operation count.
    """
    graphs.append(gm)
    ops = [node.target for node in gm.graph.nodes if node.op == "call_function"]
    op_count += len(ops)
    ops_per_graph.append(ops)
    if gm.compile_subgraph_reason.graph_break:  # type: ignore[union-attr]
        break_reasons.append(gm.compile_subgraph_reason)  # type: ignore[arg-type]
    return gm, graphs, op_count, ops_per_graph, break_reasons
This function is a utility which processes a torch.fx.GraphModule and accumulates information about its ops, graph breaks, and other details. It is intended to be used by the ExplainWithBackend class and `torch._dynamo.explain()` to provide details from Dynamo's graph capture. Parameters: gm (torch.fx.GraphModule): The GraphModule to be processed. graphs (list): A list that accumulates all the GraphModules processed. op_count (int): The total count of operations in all GraphModules processed so far. ops_per_graph (list): A list that accumulates the operations of each GraphModule. break_reasons (list): A list that accumulates the reasons for breaks in each GraphModule. Returns: tuple: A tuple containing the processed GraphModule, the updated lists of graphs, operations per graph, and break reasons, and the updated operation count.
python
torch/_dynamo/backends/debugging.py
472
[ "gm", "graphs", "op_count", "ops_per_graph", "break_reasons" ]
tuple[ torch.fx.GraphModule, list[torch.fx.GraphModule], int, list[list["Target"]], list[GraphCompileReason], ]
true
2
7.6
pytorch/pytorch
96,034
google
false
toString
@Override public String toString() { return getClass().getName() + ": " + this.mappedNamePatterns; }
Determine if the given method name matches the mapped name pattern. <p>The default implementation checks for {@code xxx*}, {@code *xxx}, {@code *xxx*}, and {@code xxx*yyy} matches, as well as direct equality. <p>Can be overridden in subclasses. @param methodName the method name to check @param mappedNamePattern the method name pattern @return {@code true} if the method name matches the pattern @see PatternMatchUtils#simpleMatch(String, String)
java
spring-aop/src/main/java/org/springframework/aop/support/NameMatchMethodPointcut.java
124
[]
String
true
1
6.32
spring-projects/spring-framework
59,386
javadoc
false
convert
def convert(self) -> list[Block]:
    """
    Attempt to coerce any object types to better types. Return a copy
    of the block (if copy = True).
    """
    if not self.is_object:
        return [self.copy(deep=False)]

    if self.ndim != 1 and self.shape[0] != 1:
        blocks = self.split_and_operate(Block.convert)
        if all(blk.dtype.kind == "O" for blk in blocks):
            # Avoid fragmenting the block if convert is a no-op
            return [self.copy(deep=False)]
        return blocks

    values = self.values
    if values.ndim == 2:
        # the check above ensures we only get here with values.shape[0] == 1,
        # avoid doing .ravel as that might make a copy
        values = values[0]

    res_values = lib.maybe_convert_objects(
        values,  # type: ignore[arg-type]
        convert_non_numeric=True,
    )
    refs = None
    if res_values is values or (
        isinstance(res_values, NumpyExtensionArray)
        and res_values._ndarray is values
    ):
        refs = self.refs

    res_values = ensure_block_shape(res_values, self.ndim)
    res_values = maybe_coerce_values(res_values)
    return [self.make_block(res_values, refs=refs)]
Attempt to coerce any object types to better types. Return a copy of the block (if copy = True).
python
pandas/core/internals/blocks.py
488
[ "self" ]
list[Block]
true
9
6
pandas-dev/pandas
47,362
unknown
false
render_template
def render_template(
    template_name: str,
    context: dict[str, Any],
    extension: str,
    autoescape: bool = True,
    lstrip_blocks: bool = False,
    trim_blocks: bool = False,
    keep_trailing_newline: bool = False,
) -> str:
    """
    Renders template based on its name. Reads the template from <name>_TEMPLATE.md.jinja2 in current dir.

    :param template_name: name of the template to use
    :param context: Jinja2 context
    :param extension: Target file extension
    :param autoescape: Whether to autoescape HTML
    :param lstrip_blocks: Whether to strip leading blocks
    :param trim_blocks: Whether to trim blocks
    :param keep_trailing_newline: Whether to keep the newline in rendered output
    :return: rendered template
    """
    import jinja2

    template_loader = jinja2.FileSystemLoader(
        searchpath=BREEZE_SOURCES_PATH / "airflow_breeze" / "templates"
    )
    template_env = jinja2.Environment(
        loader=template_loader,
        undefined=jinja2.StrictUndefined,
        autoescape=autoescape,
        lstrip_blocks=lstrip_blocks,
        trim_blocks=trim_blocks,
        keep_trailing_newline=keep_trailing_newline,
    )
    template = template_env.get_template(f"{template_name}_TEMPLATE{extension}.jinja2")
    content: str = template.render(context)
    return content
Renders template based on its name. Reads the template from <name>_TEMPLATE.md.jinja2 in current dir. :param template_name: name of the template to use :param context: Jinja2 context :param extension: Target file extension :param autoescape: Whether to autoescape HTML :param lstrip_blocks: Whether to strip leading blocks :param trim_blocks: Whether to trim blocks :param keep_trailing_newline: Whether to keep the newline in rendered output :return: rendered template
python
dev/breeze/src/airflow_breeze/utils/packages.py
730
[ "template_name", "context", "extension", "autoescape", "lstrip_blocks", "trim_blocks", "keep_trailing_newline" ]
str
true
1
6.72
apache/airflow
43,597
sphinx
false
getValueParameter
public CacheInvocationParameter getValueParameter(@Nullable Object... values) { int parameterPosition = this.valueParameterDetail.getParameterPosition(); if (parameterPosition >= values.length) { throw new IllegalStateException("Values mismatch, value parameter at position " + parameterPosition + " cannot be matched against " + values.length + " value(s)"); } return this.valueParameterDetail.toCacheInvocationParameter(values[parameterPosition]); }
Return the {@link CacheInvocationParameter} for the parameter holding the value to cache. <p>The method arguments must match the signature of the related method invocation @param values the parameters value for a particular invocation @return the {@link CacheInvocationParameter} instance for the value parameter
java
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/CachePutOperation.java
85
[]
CacheInvocationParameter
true
2
7.28
spring-projects/spring-framework
59,386
javadoc
false
recordsWrite
boolean recordsWrite() { return expiresAfterWrite() || refreshes(); }
Creates a new, empty map with the specified strategy, initial capacity and concurrency level.
java
android/guava/src/com/google/common/cache/LocalCache.java
352
[]
true
2
6.64
google/guava
51,352
javadoc
false
equals
@Override
public boolean equals(final Object obj) {
    if (!(obj instanceof FastDateFormat)) {
        return false;
    }
    final FastDateFormat other = (FastDateFormat) obj;
    // no need to check parser, as it has same invariants as printer
    return printer.equals(other.printer);
}
Compares two objects for equality. @param obj the object to compare to. @return {@code true} if equal.
java
src/main/java/org/apache/commons/lang3/time/FastDateFormat.java
390
[ "obj" ]
true
2
8.4
apache/commons-lang
2,896
javadoc
false
lastIndexOf
public static int lastIndexOf(final boolean[] array, final boolean valueToFind) { return lastIndexOf(array, valueToFind, Integer.MAX_VALUE); }
Finds the last index of the given value within the array. <p> This method returns {@link #INDEX_NOT_FOUND} ({@code -1}) if {@code null} array input. </p> @param array the array to traverse backwards looking for the object, may be {@code null}. @param valueToFind the object to find. @return the last index of the value within the array, {@link #INDEX_NOT_FOUND} ({@code -1}) if not found or {@code null} array input.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
3,772
[ "array", "valueToFind" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
unstack
def unstack(x, /, *, axis=0):
    """
    Split an array into a sequence of arrays along the given axis.

    The ``axis`` parameter specifies the dimension along which the array will
    be split. For example, if ``axis=0`` (the default) it will be the first
    dimension and if ``axis=-1`` it will be the last dimension.

    The result is a tuple of arrays split along ``axis``.

    .. versionadded:: 2.1.0

    Parameters
    ----------
    x : ndarray
        The array to be unstacked.
    axis : int, optional
        Axis along which the array will be split. Default: ``0``.

    Returns
    -------
    unstacked : tuple of ndarrays
        The unstacked arrays.

    See Also
    --------
    stack : Join a sequence of arrays along a new axis.
    concatenate : Join a sequence of arrays along an existing axis.
    block : Assemble an nd-array from nested lists of blocks.
    split : Split array into a list of multiple sub-arrays of equal size.

    Notes
    -----
    ``unstack`` serves as the reverse operation of :py:func:`stack`, i.e.,
    ``stack(unstack(x, axis=axis), axis=axis) == x``.

    This function is equivalent to ``tuple(np.moveaxis(x, axis, 0))``, since
    iterating on an array iterates along the first axis.

    Examples
    --------
    >>> arr = np.arange(24).reshape((2, 3, 4))
    >>> np.unstack(arr)
    (array([[ 0,  1,  2,  3],
            [ 4,  5,  6,  7],
            [ 8,  9, 10, 11]]),
     array([[12, 13, 14, 15],
            [16, 17, 18, 19],
            [20, 21, 22, 23]]))
    >>> np.unstack(arr, axis=1)
    (array([[ 0,  1,  2,  3],
            [12, 13, 14, 15]]),
     array([[ 4,  5,  6,  7],
            [16, 17, 18, 19]]),
     array([[ 8,  9, 10, 11],
            [20, 21, 22, 23]]))
    >>> arr2 = np.stack(np.unstack(arr, axis=1), axis=1)
    >>> arr2.shape
    (2, 3, 4)
    >>> np.all(arr == arr2)
    np.True_
    """
    if x.ndim == 0:
        raise ValueError("Input array must be at least 1-d.")
    return tuple(_nx.moveaxis(x, axis, 0))
Split an array into a sequence of arrays along the given axis. The ``axis`` parameter specifies the dimension along which the array will be split. For example, if ``axis=0`` (the default) it will be the first dimension and if ``axis=-1`` it will be the last dimension. The result is a tuple of arrays split along ``axis``. .. versionadded:: 2.1.0 Parameters ---------- x : ndarray The array to be unstacked. axis : int, optional Axis along which the array will be split. Default: ``0``. Returns ------- unstacked : tuple of ndarrays The unstacked arrays. See Also -------- stack : Join a sequence of arrays along a new axis. concatenate : Join a sequence of arrays along an existing axis. block : Assemble an nd-array from nested lists of blocks. split : Split array into a list of multiple sub-arrays of equal size. Notes ----- ``unstack`` serves as the reverse operation of :py:func:`stack`, i.e., ``stack(unstack(x, axis=axis), axis=axis) == x``. This function is equivalent to ``tuple(np.moveaxis(x, axis, 0))``, since iterating on an array iterates along the first axis. Examples -------- >>> arr = np.arange(24).reshape((2, 3, 4)) >>> np.unstack(arr) (array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]), array([[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]])) >>> np.unstack(arr, axis=1) (array([[ 0, 1, 2, 3], [12, 13, 14, 15]]), array([[ 4, 5, 6, 7], [16, 17, 18, 19]]), array([[ 8, 9, 10, 11], [20, 21, 22, 23]])) >>> arr2 = np.stack(np.unstack(arr, axis=1), axis=1) >>> arr2.shape (2, 3, 4) >>> np.all(arr == arr2) np.True_
python
numpy/_core/shape_base.py
472
[ "x", "axis" ]
false
2
7.6
numpy/numpy
31,054
numpy
false
doubleValue
@Override public double doubleValue() { return value; }
Returns the value of this MutableByte as a double. @return the numeric value represented by this object after conversion to type double.
java
src/main/java/org/apache/commons/lang3/mutable/MutableByte.java
178
[]
true
1
6.48
apache/commons-lang
2,896
javadoc
false
findKey
function findKey(object, predicate) { return baseFindKey(object, getIteratee(predicate, 3), baseForOwn); }
This method is like `_.find` except that it returns the key of the first element `predicate` returns truthy for instead of the element itself. @static @memberOf _ @since 1.1.0 @category Object @param {Object} object The object to inspect. @param {Function} [predicate=_.identity] The function invoked per iteration. @returns {string|undefined} Returns the key of the matched element, else `undefined`. @example var users = { 'barney': { 'age': 36, 'active': true }, 'fred': { 'age': 40, 'active': false }, 'pebbles': { 'age': 1, 'active': true } }; _.findKey(users, function(o) { return o.age < 40; }); // => 'barney' (iteration order is not guaranteed) // The `_.matches` iteratee shorthand. _.findKey(users, { 'age': 1, 'active': true }); // => 'pebbles' // The `_.matchesProperty` iteratee shorthand. _.findKey(users, ['active', false]); // => 'fred' // The `_.property` iteratee shorthand. _.findKey(users, 'active'); // => 'barney'
javascript
lodash.js
12,983
[ "object", "predicate" ]
false
1
6.24
lodash/lodash
61,490
jsdoc
false
masked_inside
def masked_inside(x, v1, v2, copy=True):
    """
    Mask an array inside a given interval.

    Shortcut to ``masked_where``, where `condition` is True for `x` inside
    the interval [v1,v2] (v1 <= x <= v2). The boundaries `v1` and `v2`
    can be given in either order.

    See Also
    --------
    masked_where : Mask where a condition is met.

    Notes
    -----
    The array `x` is prefilled with its filling value.

    Examples
    --------
    >>> import numpy as np
    >>> import numpy.ma as ma
    >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1]
    >>> ma.masked_inside(x, -0.3, 0.3)
    masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1],
                 mask=[False, False, True, True, False, False],
           fill_value=1e+20)

    The order of `v1` and `v2` doesn't matter.

    >>> ma.masked_inside(x, 0.3, -0.3)
    masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1],
                 mask=[False, False, True, True, False, False],
           fill_value=1e+20)
    """
    if v2 < v1:
        (v1, v2) = (v2, v1)
    xf = filled(x)
    condition = (xf >= v1) & (xf <= v2)
    return masked_where(condition, x, copy=copy)
Mask an array inside a given interval. Shortcut to ``masked_where``, where `condition` is True for `x` inside the interval [v1,v2] (v1 <= x <= v2). The boundaries `v1` and `v2` can be given in either order. See Also -------- masked_where : Mask where a condition is met. Notes ----- The array `x` is prefilled with its filling value. Examples -------- >>> import numpy as np >>> import numpy.ma as ma >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] >>> ma.masked_inside(x, -0.3, 0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20) The order of `v1` and `v2` doesn't matter. >>> ma.masked_inside(x, 0.3, -0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20)
python
numpy/ma/core.py
2,165
[ "x", "v1", "v2", "copy" ]
false
2
7.44
numpy/numpy
31,054
unknown
false
beforeKey
private void beforeKey() throws JSONException {
    Scope context = peek();
    if (context == Scope.NONEMPTY_OBJECT) {
        // first in object
        this.out.append(',');
    } else if (context != Scope.EMPTY_OBJECT) {
        // not in an object!
        throw new JSONException("Nesting problem");
    }
    newline();
    replaceTop(Scope.DANGLING_KEY);
}
Inserts any necessary separators and whitespace before a name. Also adjusts the stack to expect the key's value. @throws JSONException if processing of json failed
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONStringer.java
373
[]
void
true
3
6.88
spring-projects/spring-boot
79,428
javadoc
false
sem
def sem(
    self,
    ddof: int = 1,
    numeric_only: bool = False,
):
    """
    Compute standard error of the mean of groups, excluding missing values.

    For multiple groupings, the result index will be a MultiIndex.

    Parameters
    ----------
    ddof : int, default 1
        Degrees of freedom.

    numeric_only : bool, default False
        Include only `float`, `int` or `boolean` data.

        .. versionchanged:: 2.0.0

            numeric_only now defaults to ``False``.

    Returns
    -------
    Series or DataFrame
        Standard error of the mean of values within each group.

    See Also
    --------
    DataFrame.sem : Return unbiased standard error of the mean over requested axis.
    Series.sem : Return unbiased standard error of the mean over requested axis.

    Examples
    --------
    >>> ser = pd.Series(
    ...     [1, 3, 2, 4, 3, 8],
    ...     index=pd.DatetimeIndex(
    ...         [
    ...             "2023-01-01",
    ...             "2023-01-10",
    ...             "2023-01-15",
    ...             "2023-02-01",
    ...             "2023-02-10",
    ...             "2023-02-15",
    ...         ]
    ...     ),
    ... )
    >>> ser.resample("MS").sem()
    2023-01-01    0.577350
    2023-02-01    1.527525
    Freq: MS, dtype: float64
    """
    return self._downsample("sem", ddof=ddof, numeric_only=numeric_only)
Compute standard error of the mean of groups, excluding missing values. For multiple groupings, the result index will be a MultiIndex. Parameters ---------- ddof : int, default 1 Degrees of freedom. numeric_only : bool, default False Include only `float`, `int` or `boolean` data. .. versionchanged:: 2.0.0 numeric_only now defaults to ``False``. Returns ------- Series or DataFrame Standard error of the mean of values within each group. See Also -------- DataFrame.sem : Return unbiased standard error of the mean over requested axis. Series.sem : Return unbiased standard error of the mean over requested axis. Examples -------- >>> ser = pd.Series( ... [1, 3, 2, 4, 3, 8], ... index=pd.DatetimeIndex( ... [ ... "2023-01-01", ... "2023-01-10", ... "2023-01-15", ... "2023-02-01", ... "2023-02-10", ... "2023-02-15", ... ] ... ), ... ) >>> ser.resample("MS").sem() 2023-01-01 0.577350 2023-02-01 1.527525 Freq: MS, dtype: float64
python
pandas/core/resample.py
1,651
[ "self", "ddof", "numeric_only" ]
true
1
7.12
pandas-dev/pandas
47,362
numpy
false
resolveVariable
protected String resolveVariable(final String variableName, final StrBuilder buf, final int startPos, final int endPos) { final StrLookup<?> resolver = getVariableResolver(); if (resolver == null) { return null; } return resolver.lookup(variableName); }
Internal method that resolves the value of a variable. <p> Most users of this class do not need to call this method. This method is called automatically by the substitution process. </p> <p> Writers of subclasses can override this method if they need to alter how each substitution occurs. The method is passed the variable's name and must return the corresponding value. This implementation uses the {@link #getVariableResolver()} with the variable's name as the key. </p> @param variableName the name of the variable, not null. @param buf the buffer where the substitution is occurring, not null. @param startPos the start position of the variable including the prefix, valid. @param endPos the end position of the variable including the suffix, valid. @return the variable's value or <strong>null</strong> if the variable is unknown.
java
src/main/java/org/apache/commons/lang3/text/StrSubstitutor.java
859
[ "variableName", "buf", "startPos", "endPos" ]
String
true
2
7.92
apache/commons-lang
2,896
javadoc
false
forId
static @Nullable CommonStructuredLogFormat forId(String id) { for (CommonStructuredLogFormat candidate : values()) { if (candidate.getId().equalsIgnoreCase(id)) { return candidate; } } return null; }
Find the {@link CommonStructuredLogFormat} for the given ID. @param id the format identifier @return the associated {@link CommonStructuredLogFormat} or {@code null}
java
core/spring-boot/src/main/java/org/springframework/boot/logging/structured/CommonStructuredLogFormat.java
68
[ "id" ]
CommonStructuredLogFormat
true
2
7.28
spring-projects/spring-boot
79,428
javadoc
false
text
String text() throws IOException;
Returns an instance of {@link Map} holding parsed map. Serves as a replacement for the "map", "mapOrdered" and "mapStrings" methods above. @param mapFactory factory for creating new {@link Map} objects @param mapValueParser parser for parsing a single map value @param <T> map value type @return {@link Map} object
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/XContentParser.java
108
[]
String
true
1
6.32
elastic/elasticsearch
75,680
javadoc
false
getLayoutFactory
private LayoutFactory getLayoutFactory() { if (this.layoutFactory != null) { return this.layoutFactory; } List<LayoutFactory> factories = SpringFactoriesLoader.loadFactories(LayoutFactory.class, null); if (factories.isEmpty()) { return new DefaultLayoutFactory(); } Assert.state(factories.size() == 1, "No unique LayoutFactory found"); return factories.get(0); }
Return the {@link File} to use to back up the original source. @return the file to use to back up the original source
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/Packager.java
384
[]
LayoutFactory
true
3
8.08
spring-projects/spring-boot
79,428
javadoc
false
check_for_write_conflict
def check_for_write_conflict(key: str) -> None: """ Log a warning if a variable exists outside the metastore. If we try to write a variable to the metastore while the same key exists in an environment variable or custom secrets backend, then subsequent reads will not read the set value. :param key: Variable Key """ for secrets_backend in ensure_secrets_loaded(): if not isinstance(secrets_backend, MetastoreBackend): try: var_val = secrets_backend.get_variable(key=key) if var_val is not None: _backend_name = type(secrets_backend).__name__ log.warning( "The variable %s is defined in the %s secrets backend, which takes " "precedence over reading from the database. The value in the database will be " "updated, but to read it you have to delete the conflicting variable " "from %s", key, _backend_name, _backend_name, ) return except Exception: log.exception( "Unable to retrieve variable from secrets backend (%s). " "Checking subsequent secrets backend.", type(secrets_backend).__name__, ) return None
Log a warning if a variable exists outside the metastore. If we try to write a variable to the metastore while the same key exists in an environment variable or custom secrets backend, then subsequent reads will not read the set value. :param key: Variable Key
python
airflow-core/src/airflow/models/variable.py
454
[ "key" ]
None
true
4
6.88
apache/airflow
43,597
sphinx
false
maybeAdd
public void maybeAdd(Object candidate) { if (candidate instanceof ClusterResourceListener) { clusterResourceListeners.add((ClusterResourceListener) candidate); } }
Add only if the candidate implements {@link ClusterResourceListener}. @param candidate Object which might implement {@link ClusterResourceListener}
java
clients/src/main/java/org/apache/kafka/common/internals/ClusterResourceListeners.java
37
[ "candidate" ]
void
true
2
6.08
apache/kafka
31,560
javadoc
false
fullyQualifiedMethodName
public String fullyQualifiedMethodName() { return fullyQualifiedMethodName; }
Provides the fully-qualified method name, e.g. {@code ConsumerRebalanceListener.onPartitionsRevoked}. This is used for log messages. @return Fully-qualified method name
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerRebalanceListenerMethodName.java
43
[]
String
true
1
6.16
apache/kafka
31,560
javadoc
false
parseObject
@Override public Object parseObject(final String source, final ParsePosition pos) { return parse(source, pos); }
Parses a formatted date string according to the format. Updates the Calendar with parsed fields. Upon success, the ParsePosition index is updated to indicate how much of the source text was consumed. Not all source text needs to be consumed. Upon parse failure, ParsePosition error index is updated to the offset of the source text which does not match the supplied format. @param source The text to parse. @param pos On input, the position in the source to start parsing, on output, updated position. @param calendar The calendar into which to set parsed fields. @return true, if source has been parsed (pos parsePosition is updated); otherwise false (and pos errorIndex is updated) @throws IllegalArgumentException when Calendar has been set to be not lenient, and a parsed field is out of range.
java
src/main/java/org/apache/commons/lang3/time/FastDateParser.java
1,090
[ "source", "pos" ]
Object
true
1
6.8
apache/commons-lang
2,896
javadoc
false
toObject
public static Float[] toObject(final float[] array) { if (array == null) { return null; } if (array.length == 0) { return EMPTY_FLOAT_OBJECT_ARRAY; } return setAll(new Float[array.length], i -> Float.valueOf(array[i])); }
Converts an array of primitive floats to objects. <p>This method returns {@code null} for a {@code null} input array.</p> @param array a {@code float} array. @return a {@link Float} array, {@code null} if null array input.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
8,744
[ "array" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
isEqualWith
function isEqualWith(value, other, customizer) { customizer = typeof customizer == 'function' ? customizer : undefined; var result = customizer ? customizer(value, other) : undefined; return result === undefined ? baseIsEqual(value, other, undefined, customizer) : !!result; }
This method is like `_.isEqual` except that it accepts `customizer` which is invoked to compare values. If `customizer` returns `undefined`, comparisons are handled by the method instead. The `customizer` is invoked with up to six arguments: (objValue, othValue [, index|key, object, other, stack]). @static @memberOf _ @since 4.0.0 @category Lang @param {*} value The value to compare. @param {*} other The other value to compare. @param {Function} [customizer] The function to customize comparisons. @returns {boolean} Returns `true` if the values are equivalent, else `false`. @example function isGreeting(value) { return /^h(?:i|ello)$/.test(value); } function customizer(objValue, othValue) { if (isGreeting(objValue) && isGreeting(othValue)) { return true; } } var array = ['hello', 'goodbye']; var other = ['hi', 'goodbye']; _.isEqualWith(array, other, customizer); // => true
javascript
lodash.js
11,674
[ "value", "other", "customizer" ]
false
4
7.04
lodash/lodash
61,490
jsdoc
false
lag_plot
def lag_plot(series: Series, lag: int = 1, ax: Axes | None = None, **kwds) -> Axes: """ Lag plot for time series. A lag plot is a scatter plot of a time series against a lag of itself. It helps in visualizing the temporal dependence between observations by plotting the values at time `t` on the x-axis and the values at time `t + lag` on the y-axis. Parameters ---------- series : Series The time series to visualize. lag : int, default 1 Lag length of the scatter plot. ax : Matplotlib axis object, optional The matplotlib axis object to use. **kwds Matplotlib scatter method keyword arguments. Returns ------- matplotlib.axes.Axes The matplotlib Axes object containing the lag plot. See Also -------- plotting.autocorrelation_plot : Autocorrelation plot for time series. matplotlib.pyplot.scatter : A scatter plot of y vs. x with varying marker size and/or color in Matplotlib. Examples -------- Lag plots are most commonly used to look for patterns in time series data. Given the following time series .. plot:: :context: close-figs >>> np.random.seed(5) >>> x = np.cumsum(np.random.normal(loc=1, scale=5, size=50)) >>> s = pd.Series(x) >>> s.plot() # doctest: +SKIP A lag plot with ``lag=1`` returns .. plot:: :context: close-figs >>> _ = pd.plotting.lag_plot(s, lag=1) """ plot_backend = _get_plot_backend("matplotlib") return plot_backend.lag_plot(series=series, lag=lag, ax=ax, **kwds)
Lag plot for time series. A lag plot is a scatter plot of a time series against a lag of itself. It helps in visualizing the temporal dependence between observations by plotting the values at time `t` on the x-axis and the values at time `t + lag` on the y-axis. Parameters ---------- series : Series The time series to visualize. lag : int, default 1 Lag length of the scatter plot. ax : Matplotlib axis object, optional The matplotlib axis object to use. **kwds Matplotlib scatter method keyword arguments. Returns ------- matplotlib.axes.Axes The matplotlib Axes object containing the lag plot. See Also -------- plotting.autocorrelation_plot : Autocorrelation plot for time series. matplotlib.pyplot.scatter : A scatter plot of y vs. x with varying marker size and/or color in Matplotlib. Examples -------- Lag plots are most commonly used to look for patterns in time series data. Given the following time series .. plot:: :context: close-figs >>> np.random.seed(5) >>> x = np.cumsum(np.random.normal(loc=1, scale=5, size=50)) >>> s = pd.Series(x) >>> s.plot() # doctest: +SKIP A lag plot with ``lag=1`` returns .. plot:: :context: close-figs >>> _ = pd.plotting.lag_plot(s, lag=1)
python
pandas/plotting/_misc.py
587
[ "series", "lag", "ax" ]
Axes
true
1
7.12
pandas-dev/pandas
47,362
numpy
false
maybeMarkPartitionsPendingRevocation
private void maybeMarkPartitionsPendingRevocation() { if (protocol != RebalanceProtocol.EAGER) { return; } // When asynchronously committing offsets prior to the revocation of a set of partitions, there will be a // window of time between when the offset commit is sent and when it returns and revocation completes. It is // possible for pending fetches for these partitions to return during this time, which means the application's // position may get ahead of the committed position prior to revocation. This can cause duplicate consumption. // To prevent this, we mark the partitions as "pending revocation," which stops the Fetcher from sending new // fetches or returning data from previous fetches to the user. Set<TopicPartition> partitions = subscriptions.assignedPartitions(); log.debug("Marking assigned partitions pending for revocation: {}", partitions); subscriptions.markPendingRevocation(partitions); }
Used by COOPERATIVE rebalance protocol only. Validate the assignments returned by the assignor such that no owned partitions are going to be reassigned to a different consumer directly: if the assignor wants to reassign an owned partition, it must first remove it from the new assignment of the current owner so that it is not assigned to any member, and then in the next rebalance it can finally reassign those partitions not owned by anyone to consumers.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerCoordinator.java
866
[]
void
true
2
6.88
apache/kafka
31,560
javadoc
false
removeAllOccurences
@Deprecated public static int[] removeAllOccurences(final int[] array, final int element) { return (int[]) removeAt(array, indexesOf(array, element)); }
Removes the occurrences of the specified element from the specified int array. <p> All subsequent elements are shifted to the left (subtracts one from their indices). If the array doesn't contain such an element, no elements are removed from the array. {@code null} will be returned if the input array is {@code null}. </p> @param array the input array, will not be modified, and may be {@code null}. @param element the element to remove. @return A new array containing the existing elements except the occurrences of the specified element. @since 3.5 @deprecated Use {@link #removeAllOccurrences(int[], int)}.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
5,360
[ "array", "element" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
matches
@Override public boolean matches(Method method, Class<?> targetClass) { if (matchesMethod(method)) { return true; } // Proxy classes never have annotations on their redeclared methods. if (Proxy.isProxyClass(targetClass)) { return false; } // The method may be on an interface, so let's check on the target class as well. Method specificMethod = AopUtils.getMostSpecificMethod(method, targetClass); return (specificMethod != method && matchesMethod(specificMethod)); }
Create a new AnnotationClassFilter for the given annotation type. @param annotationType the annotation type to look for @param checkInherited whether to also check the superclasses and interfaces as well as meta-annotations for the annotation type (i.e. whether to use {@link AnnotatedElementUtils#hasAnnotation} semantics instead of standard Java {@link Method#isAnnotationPresent}) @since 5.0
java
spring-aop/src/main/java/org/springframework/aop/support/annotation/AnnotationMethodMatcher.java
72
[ "method", "targetClass" ]
true
4
6.4
spring-projects/spring-framework
59,386
javadoc
false
checkConfigMembers
public void checkConfigMembers(RootBeanDefinition beanDefinition) { if (this.injectedElements.isEmpty()) { this.checkedElements = Collections.emptySet(); } else { Set<InjectedElement> checkedElements = CollectionUtils.newLinkedHashSet(this.injectedElements.size()); for (InjectedElement element : this.injectedElements) { Member member = element.getMember(); if (!beanDefinition.isExternallyManagedConfigMember(member)) { beanDefinition.registerExternallyManagedConfigMember(member); checkedElements.add(element); } } this.checkedElements = checkedElements; } }
Determine whether this metadata instance needs to be refreshed. @param clazz the current target class @return {@code true} indicating a refresh, {@code false} otherwise @since 5.2.4
java
spring-beans/src/main/java/org/springframework/beans/factory/annotation/InjectionMetadata.java
123
[ "beanDefinition" ]
void
true
3
7.92
spring-projects/spring-framework
59,386
javadoc
false
parseSimpleUnaryExpression
function parseSimpleUnaryExpression(): UnaryExpression { switch (token()) { case SyntaxKind.PlusToken: case SyntaxKind.MinusToken: case SyntaxKind.TildeToken: case SyntaxKind.ExclamationToken: return parsePrefixUnaryExpression(); case SyntaxKind.DeleteKeyword: return parseDeleteExpression(); case SyntaxKind.TypeOfKeyword: return parseTypeOfExpression(); case SyntaxKind.VoidKeyword: return parseVoidExpression(); case SyntaxKind.LessThanToken: // Just like in parseUpdateExpression, we need to avoid parsing type assertions when // in JSX and we see an expression like "+ <foo> bar". if (languageVariant === LanguageVariant.JSX) { return parseJsxElementOrSelfClosingElementOrFragment(/*inExpressionContext*/ true, /*topInvalidNodePosition*/ undefined, /*openingTag*/ undefined, /*mustBeUnary*/ true); } // This is modified UnaryExpression grammar in TypeScript // UnaryExpression (modified): // < type > UnaryExpression return parseTypeAssertion(); case SyntaxKind.AwaitKeyword: if (isAwaitExpression()) { return parseAwaitExpression(); } // falls through default: return parseUpdateExpression(); } }
Parse ES7 simple-unary expression or higher: ES7 UnaryExpression: 1) UpdateExpression[?yield] 2) delete UnaryExpression[?yield] 3) void UnaryExpression[?yield] 4) typeof UnaryExpression[?yield] 5) + UnaryExpression[?yield] 6) - UnaryExpression[?yield] 7) ~ UnaryExpression[?yield] 8) ! UnaryExpression[?yield] 9) [+Await] await UnaryExpression[?yield]
typescript
src/compiler/parser.ts
5,796
[]
true
3
6.08
microsoft/TypeScript
107,154
jsdoc
false
set_state
def set_state(self, state: str | None, session: Session = NEW_SESSION) -> bool: """ Set TaskInstance state. :param state: State to set for the TI :param session: SQLAlchemy ORM Session :return: Was the state changed """ if self.state == state: return False current_time = timezone.utcnow() self.log.debug("Setting task state for %s to %s", self, state) if self not in session: self.refresh_from_db(session) self.state = state self.start_date = self.start_date or current_time if self.state in State.finished or self.state == TaskInstanceState.UP_FOR_RETRY: self.end_date = self.end_date or current_time self.duration = (self.end_date - self.start_date).total_seconds() session.merge(self) session.flush() return True
Set TaskInstance state. :param state: State to set for the TI :param session: SQLAlchemy ORM Session :return: Was the state changed
python
airflow-core/src/airflow/models/taskinstance.py
758
[ "self", "state", "session" ]
bool
true
7
7.76
apache/airflow
43,597
sphinx
false
_predict_recursive
def _predict_recursive(self, X, sample_weight, cluster_node): """Predict recursively by going down the hierarchical tree. Parameters ---------- X : {ndarray, csr_matrix} of shape (n_samples, n_features) The data points, currently assigned to `cluster_node`, to predict between the subclusters of this node. sample_weight : ndarray of shape (n_samples,) The weights for each observation in X. cluster_node : _BisectingTree node object The cluster node of the hierarchical tree. Returns ------- labels : ndarray of shape (n_samples,) Index of the cluster each sample belongs to. """ if cluster_node.left is None: # This cluster has no subcluster. Labels are just the label of the cluster. return np.full(X.shape[0], cluster_node.label, dtype=np.int32) # Determine if data points belong to the left or right subcluster centers = np.vstack((cluster_node.left.center, cluster_node.right.center)) if hasattr(self, "_X_mean"): centers += self._X_mean cluster_labels = _labels_inertia_threadpool_limit( X, sample_weight, centers, self._n_threads, return_inertia=False, ) mask = cluster_labels == 0 # Compute the labels for each subset of the data points. labels = np.full(X.shape[0], -1, dtype=np.int32) labels[mask] = self._predict_recursive( X[mask], sample_weight[mask], cluster_node.left ) labels[~mask] = self._predict_recursive( X[~mask], sample_weight[~mask], cluster_node.right ) return labels
Predict recursively by going down the hierarchical tree. Parameters ---------- X : {ndarray, csr_matrix} of shape (n_samples, n_features) The data points, currently assigned to `cluster_node`, to predict between the subclusters of this node. sample_weight : ndarray of shape (n_samples,) The weights for each observation in X. cluster_node : _BisectingTree node object The cluster node of the hierarchical tree. Returns ------- labels : ndarray of shape (n_samples,) Index of the cluster each sample belongs to.
python
sklearn/cluster/_bisect_k_means.py
488
[ "self", "X", "sample_weight", "cluster_node" ]
false
3
6.08
scikit-learn/scikit-learn
64,340
numpy
false
_is_single_string_color
def _is_single_string_color(color: Color) -> bool: """Check if `color` is a single string color. Examples of single string colors: - 'r' - 'g' - 'red' - 'green' - 'C3' - 'firebrick' Parameters ---------- color : Color Color string or sequence of floats. Returns ------- bool True if `color` looks like a valid color. False otherwise. """ conv = matplotlib.colors.ColorConverter() try: # error: Argument 1 to "to_rgba" of "ColorConverter" has incompatible type # "str | Sequence[float]"; expected "tuple[float, float, float] | ..." conv.to_rgba(color) # type: ignore[arg-type] except ValueError: return False else: return True
Check if `color` is a single string color. Examples of single string colors: - 'r' - 'g' - 'red' - 'green' - 'C3' - 'firebrick' Parameters ---------- color : Color Color string or sequence of floats. Returns ------- bool True if `color` looks like a valid color. False otherwise.
python
pandas/plotting/_matplotlib/style.py
263
[ "color" ]
bool
true
2
7.04
pandas-dev/pandas
47,362
numpy
false
_sanitize_str_dtypes
def _sanitize_str_dtypes( result: np.ndarray, data, dtype: np.dtype | None, copy: bool ) -> np.ndarray: """ Ensure we have a dtype that is supported by pandas. """ # This is to prevent mixed-type Series getting all casted to # NumPy string type, e.g. NaN --> '-1#IND'. if issubclass(result.dtype.type, str): # GH#16605 # If not empty convert the data to dtype # GH#19853: If data is a scalar, result has already the result if not lib.is_scalar(data): if not np.all(isna(data)): data = np.asarray(data, dtype=dtype) if not copy: result = np.asarray(data, dtype=object) else: result = np.array(data, dtype=object, copy=copy) return result
Ensure we have a dtype that is supported by pandas.
python
pandas/core/construction.py
757
[ "result", "data", "dtype", "copy" ]
np.ndarray
true
6
6
pandas-dev/pandas
47,362
unknown
false
forBindables
public static BindableRuntimeHintsRegistrar forBindables(Iterable<Bindable<?>> bindables) { Assert.notNull(bindables, "'bindables' must not be null"); return forBindables(StreamSupport.stream(bindables.spliterator(), false).toArray(Bindable[]::new)); }
Create a new {@link BindableRuntimeHintsRegistrar} for the specified bindables. @param bindables the bindables to process @return a new {@link BindableRuntimeHintsRegistrar} instance @since 3.0.8
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/BindableRuntimeHintsRegistrar.java
131
[ "bindables" ]
BindableRuntimeHintsRegistrar
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
markCoordinatorUnknown
protected synchronized void markCoordinatorUnknown(boolean isDisconnected, String cause) { if (this.coordinator != null) { log.info("Group coordinator {} is unavailable or invalid due to cause: {}. " + "isDisconnected: {}. Rediscovery will be attempted.", this.coordinator, cause, isDisconnected); Node oldCoordinator = this.coordinator; // Mark the coordinator dead before disconnecting requests since the callbacks for any pending // requests may attempt to do likewise. This also prevents new requests from being sent to the // coordinator while the disconnect is in progress. this.coordinator = null; // Disconnect from the coordinator to ensure that there are no in-flight requests remaining. // Pending callbacks will be invoked with a DisconnectException on the next call to poll. if (!isDisconnected) { log.info("Requesting disconnect from last known coordinator {}", oldCoordinator); client.disconnectAsync(oldCoordinator); } lastTimeOfConnectionMs = time.milliseconds(); } else { long durationOfOngoingDisconnect = time.milliseconds() - lastTimeOfConnectionMs; if (durationOfOngoingDisconnect > rebalanceConfig.rebalanceTimeoutMs) log.warn("Consumer has been disconnected from the group coordinator for {}ms", durationOfOngoingDisconnect); } }
Get the coordinator if its connection is still active. Otherwise mark it unknown and return null. @return the current coordinator or null if it is unknown
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
1,004
[ "isDisconnected", "cause" ]
void
true
4
7.04
apache/kafka
31,560
javadoc
false
isIdentifier
function isIdentifier(): boolean { if (token() === SyntaxKind.Identifier) { return true; } // If we have a 'yield' keyword, and we're in the [yield] context, then 'yield' is // considered a keyword and is not an identifier. if (token() === SyntaxKind.YieldKeyword && inYieldContext()) { return false; } // If we have a 'await' keyword, and we're in the [Await] context, then 'await' is // considered a keyword and is not an identifier. if (token() === SyntaxKind.AwaitKeyword && inAwaitContext()) { return false; } return token() > SyntaxKind.LastReservedWord; }
Invokes the provided callback. If the callback returns something falsy, then it restores the parser to the state it was in immediately prior to invoking the callback. If the callback returns something truthy, then the parser state is not rolled back. The result of invoking the callback is returned from this function.
typescript
src/compiler/parser.ts
2,318
[]
true
6
7.2
microsoft/TypeScript
107,154
jsdoc
false
publicSuffix
public @Nullable InternetDomainName publicSuffix() { return hasPublicSuffix() ? ancestor(publicSuffixIndex()) : null; }
Returns the {@linkplain #isPublicSuffix() public suffix} portion of the domain name, or {@code null} if no public suffix is present. @since 6.0
java
android/guava/src/com/google/common/net/InternetDomainName.java
408
[]
InternetDomainName
true
2
6.48
google/guava
51,352
javadoc
false
_configure_async_session
def _configure_async_session() -> None: """ Configure async SQLAlchemy session. This exists so tests can reconfigure the session. How SQLAlchemy configures this does not work well with Pytest and you can end up with issues when the session runs in a different event loop from the test itself. """ global AsyncSession, async_engine if not SQL_ALCHEMY_CONN_ASYNC: async_engine = None AsyncSession = None return async_engine = create_async_engine( SQL_ALCHEMY_CONN_ASYNC, connect_args=_get_connect_args("async"), future=True, ) AsyncSession = async_sessionmaker( bind=async_engine, class_=SAAsyncSession, autoflush=False, expire_on_commit=False, )
Configure async SQLAlchemy session. This exists so tests can reconfigure the session. How SQLAlchemy configures this does not work well with Pytest and you can end up with issues when the session runs in a different event loop from the test itself.
python
airflow-core/src/airflow/settings.py
378
[]
None
true
2
7.04
apache/airflow
43,597
unknown
false
bindCaseBlock
function bindCaseBlock(node: CaseBlock): void { const clauses = node.clauses; const isNarrowingSwitch = node.parent.expression.kind === SyntaxKind.TrueKeyword || isNarrowingExpression(node.parent.expression); let fallthroughFlow: FlowNode = unreachableFlow; for (let i = 0; i < clauses.length; i++) { const clauseStart = i; while (!clauses[i].statements.length && i + 1 < clauses.length) { if (fallthroughFlow === unreachableFlow) { currentFlow = preSwitchCaseFlow!; } bind(clauses[i]); i++; } const preCaseLabel = createBranchLabel(); addAntecedent(preCaseLabel, isNarrowingSwitch ? createFlowSwitchClause(preSwitchCaseFlow!, node.parent, clauseStart, i + 1) : preSwitchCaseFlow!); addAntecedent(preCaseLabel, fallthroughFlow); currentFlow = finishFlowLabel(preCaseLabel); const clause = clauses[i]; bind(clause); fallthroughFlow = currentFlow; if (!(currentFlow.flags & FlowFlags.Unreachable) && i !== clauses.length - 1 && options.noFallthroughCasesInSwitch) { clause.fallthroughFlowNode = currentFlow; } } }
Declares a Symbol for the node and adds it to symbols. Reports errors for conflicting identifier names. @param symbolTable - The symbol table which node will be added to. @param parent - node's parent declaration. @param node - The declaration to be added to the symbol table @param includes - The SymbolFlags that node has in addition to its declaration type (eg: export, ambient, etc.) @param excludes - The flags which node cannot be declared alongside in a symbol table. Used to report forbidden declarations.
typescript
src/compiler/binder.ts
1,738
[ "node" ]
true
10
6.72
microsoft/TypeScript
107,154
jsdoc
false
_values_for_argsort
def _values_for_argsort(self) -> np.ndarray: """ Return values for sorting. Returns ------- ndarray The transformed values should maintain the ordering between values within the array. See Also -------- ExtensionArray.argsort : Return the indices that would sort this array. Notes ----- The caller is responsible for *not* modifying these values in-place, so it is safe for implementers to give views on ``self``. Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore entries with missing values in the original array (according to ``self.isna()``). This means that the corresponding entries in the returned array don't need to be modified to sort correctly. Examples -------- In most cases, this is the underlying Numpy array of the ``ExtensionArray``: >>> arr = pd.array([1, 2, 3]) >>> arr._values_for_argsort() array([1, 2, 3]) """ # Note: this is used in `ExtensionArray.argsort/argmin/argmax`. return np.array(self)
Return values for sorting. Returns ------- ndarray The transformed values should maintain the ordering between values within the array. See Also -------- ExtensionArray.argsort : Return the indices that would sort this array. Notes ----- The caller is responsible for *not* modifying these values in-place, so it is safe for implementers to give views on ``self``. Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore entries with missing values in the original array (according to ``self.isna()``). This means that the corresponding entries in the returned array don't need to be modified to sort correctly. Examples -------- In most cases, this is the underlying Numpy array of the ``ExtensionArray``: >>> arr = pd.array([1, 2, 3]) >>> arr._values_for_argsort() array([1, 2, 3])
python
pandas/core/arrays/base.py
877
[ "self" ]
np.ndarray
true
1
6.08
pandas-dev/pandas
47,362
unknown
false
open_slots
def open_slots(self, session: Session = NEW_SESSION) -> float: """ Get the number of slots open at the moment. :param session: SQLAlchemy ORM Session :return: the number of slots """ if self.slots == -1: return float("inf") return self.slots - self.occupied_slots(session)
Get the number of slots open at the moment. :param session: SQLAlchemy ORM Session :return: the number of slots
python
airflow-core/src/airflow/models/pool.py
348
[ "self", "session" ]
float
true
2
8.24
apache/airflow
43,597
sphinx
false
addTo
public void addTo(@Nullable AttributeAccessor attributes) { if (attributes != null) { attributes.setAttribute(NAME, this); } }
Add this container image metadata to the given attributes. @param attributes the attributes to add the metadata to
java
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/container/ContainerImageMetadata.java
42
[ "attributes" ]
void
true
2
6.88
spring-projects/spring-boot
79,428
javadoc
false
post
public void post(Object event) { Iterator<Subscriber> eventSubscribers = subscribers.getSubscribers(event); if (eventSubscribers.hasNext()) { dispatcher.dispatch(event, eventSubscribers); } else if (!(event instanceof DeadEvent)) { // the event had no subscribers and was not itself a DeadEvent post(new DeadEvent(this, event)); } }
Posts an event to all registered subscribers. This method will return successfully after the event has been posted to all subscribers, and regardless of any exceptions thrown by subscribers. <p>If no subscribers have been subscribed for {@code event}'s class, and {@code event} is not already a {@link DeadEvent}, it will be wrapped in a DeadEvent and reposted. @param event event to post.
java
android/guava/src/com/google/common/eventbus/EventBus.java
256
[ "event" ]
void
true
3
6.88
google/guava
51,352
javadoc
false
resetCaches
@Override public void resetCaches() { CacheManager cacheManager = getCacheManager(); if (cacheManager != null && !cacheManager.isClosed()) { for (String cacheName : cacheManager.getCacheNames()) { javax.cache.Cache<Object, Object> jcache = cacheManager.getCache(cacheName); if (jcache != null && !jcache.isClosed()) { jcache.clear(); } } } }
Clear all caches of the underlying JCache {@code CacheManager}, provided that the cache manager is available and not closed, skipping any individual cache that is already closed.
java
spring-context-support/src/main/java/org/springframework/cache/jcache/JCacheCacheManager.java
133
[]
void
true
5
7.04
spring-projects/spring-framework
59,386
javadoc
false
prepare_base_build_command
def prepare_base_build_command(image_params: CommonBuildParams) -> list[str]: """ Prepare build command for docker build. Depending on whether we have buildx plugin installed or not, and whether we run cache preparation, there might be different results: * if buildx plugin is installed - `docker buildx` command is returned - using regular or cache builder depending on whether we build regular image or cache * if no buildx plugin is installed, and we do not prepare cache, regular docker `build` command is used. * if no buildx plugin is installed, and we prepare cache - we fail. Cache can only be done with buildx :param image_params: parameters of the image :return: command to use as docker build command """ build_command_param = [] is_buildx_available = check_if_buildx_plugin_installed() if is_buildx_available: build_command_param.extend( [ "buildx", "build", "--push" if image_params.push else "--load", ] ) if not image_params.docker_host: builder = get_and_use_docker_context(image_params.builder) build_command_param.extend( [ "--builder", builder, ] ) if builder != "default": build_command_param.append("--load") else: build_command_param.append("build") return build_command_param
Prepare build command for docker build. Depending on whether we have buildx plugin installed or not, and whether we run cache preparation, there might be different results: * if buildx plugin is installed - `docker buildx` command is returned - using regular or cache builder depending on whether we build regular image or cache * if no buildx plugin is installed, and we do not prepare cache, regular docker `build` command is used. * if no buildx plugin is installed, and we prepare cache - we fail. Cache can only be done with buildx :param image_params: parameters of the image :return: command to use as docker build command
python
dev/breeze/src/airflow_breeze/utils/docker_command_utils.py
393
[ "image_params" ]
list[str]
true
6
7.92
apache/airflow
43,597
sphinx
false
getFileAttributes
private FileAttribute<?>[] getFileAttributes(FileSystem fileSystem, EnumSet<PosixFilePermission> ownerReadWrite) { if (!fileSystem.supportedFileAttributeViews().contains("posix")) { return NO_FILE_ATTRIBUTES; } return new FileAttribute<?>[] { PosixFilePermissions.asFileAttribute(ownerReadWrite) }; }
Return the file attributes to apply, using POSIX owner read/write permissions if the file system supports the {@code posix} file attribute view. @param fileSystem the file system in use @param ownerReadWrite the owner read/write permissions @return the file attributes to apply, or an empty array if POSIX attributes are not supported
java
core/spring-boot/src/main/java/org/springframework/boot/system/ApplicationTemp.java
128
[ "fileSystem", "ownerReadWrite" ]
true
2
7.28
spring-projects/spring-boot
79,428
javadoc
false
removeEndIgnoreCase
@Deprecated public static String removeEndIgnoreCase(final String str, final String remove) { return Strings.CI.removeEnd(str, remove); }
Case-insensitive removal of a substring if it is at the end of a source string, otherwise returns the source string. <p> A {@code null} source string will return {@code null}. An empty ("") source string will return the empty string. A {@code null} search string will return the source string. </p> <pre> StringUtils.removeEndIgnoreCase(null, *) = null StringUtils.removeEndIgnoreCase("", *) = "" StringUtils.removeEndIgnoreCase(*, null) = * StringUtils.removeEndIgnoreCase("www.domain.com", ".com.") = "www.domain.com" StringUtils.removeEndIgnoreCase("www.domain.com", ".com") = "www.domain" StringUtils.removeEndIgnoreCase("www.domain.com", "domain") = "www.domain.com" StringUtils.removeEndIgnoreCase("abc", "") = "abc" StringUtils.removeEndIgnoreCase("www.domain.com", ".COM") = "www.domain" StringUtils.removeEndIgnoreCase("www.domain.COM", ".com") = "www.domain" </pre> @param str the source String to search, may be null. @param remove the String to search for (case-insensitive) and remove, may be null. @return the substring with the string removed if found, {@code null} if null String input. @since 2.4 @deprecated Use {@link Strings#removeEnd(String, CharSequence) Strings.CI.removeEnd(String, CharSequence)}.
java
src/main/java/org/apache/commons/lang3/StringUtils.java
5,816
[ "str", "remove" ]
String
true
1
6.32
apache/commons-lang
2,896
javadoc
false
currentJoinPoint
public static JoinPoint currentJoinPoint() { MethodInvocation mi = ExposeInvocationInterceptor.currentInvocation(); if (!(mi instanceof ProxyMethodInvocation pmi)) { throw new IllegalStateException("MethodInvocation is not a Spring ProxyMethodInvocation: " + mi); } JoinPoint jp = (JoinPoint) pmi.getUserAttribute(JOIN_POINT_KEY); if (jp == null) { jp = new MethodInvocationProceedingJoinPoint(pmi); pmi.setUserAttribute(JOIN_POINT_KEY, jp); } return jp; }
Lazily instantiate joinpoint for the current invocation. Requires MethodInvocation to be bound with ExposeInvocationInterceptor. <p>Do not use if access is available to the current ReflectiveMethodInvocation (in an around advice). @return current AspectJ joinpoint, or through an exception if we're not in a Spring AOP invocation.
java
spring-aop/src/main/java/org/springframework/aop/aspectj/AbstractAspectJAdvice.java
80
[]
JoinPoint
true
3
7.44
spring-projects/spring-framework
59,386
javadoc
false
hasMetaAnnotation
private boolean hasMetaAnnotation(Element annotationElement, String type, Set<Element> seen) { if (seen.add(annotationElement)) { for (AnnotationMirror annotation : annotationElement.getAnnotationMirrors()) { DeclaredType annotationType = annotation.getAnnotationType(); if (type.equals(annotationType.toString()) || hasMetaAnnotation(annotationType.asElement(), type, seen)) { return true; } } } return false; }
Return whether the given annotation element is meta-annotated (directly or transitively) with the specified annotation type, tracking visited elements to guard against cycles. @param annotationElement the annotation element to check @param type the fully qualified name of the annotation type to look for @param seen the elements already visited @return {@code true} if the meta-annotation is present
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/MetadataGenerationEnvironment.java
245
[ "annotationElement", "type", "seen" ]
true
4
7.76
spring-projects/spring-boot
79,428
javadoc
false
append
public static Formatter append(final CharSequence seq, final Formatter formatter, final int flags, final int width, final int precision) { return append(seq, formatter, flags, width, precision, ' ', null); }
Handles the common {@link Formattable} operations of truncate-pad-append, with no ellipsis on precision overflow, and padding width underflow with spaces. @param seq the string to handle, not null. @param formatter the destination formatter, not null. @param flags the flags for formatting, see {@link Formattable}. @param width the width of the output, see {@link Formattable}. @param precision the precision of the output, see {@link Formattable}. @return the {@code formatter} instance, not null.
java
src/main/java/org/apache/commons/lang3/text/FormattableUtils.java
59
[ "seq", "formatter", "flags", "width", "precision" ]
Formatter
true
1
6.64
apache/commons-lang
2,896
javadoc
false
of
static SslOptions of(String @Nullable [] ciphers, String @Nullable [] enabledProtocols) { return new SslOptions() { @Override public String @Nullable [] getCiphers() { return ciphers; } @Override public String @Nullable [] getEnabledProtocols() { return enabledProtocols; } @Override public String toString() { ToStringCreator creator = new ToStringCreator(this); creator.append("ciphers", ciphers); creator.append("enabledProtocols", enabledProtocols); return creator.toString(); } }; }
Factory method to create a new {@link SslOptions} instance. @param ciphers the ciphers @param enabledProtocols the enabled protocols @return a new {@link SslOptions} instance
java
core/spring-boot/src/main/java/org/springframework/boot/ssl/SslOptions.java
75
[ "ciphers", "enabledProtocols" ]
SslOptions
true
1
6.08
spring-projects/spring-boot
79,428
javadoc
false
totalSize
public int totalSize() { return totalSize; }
Get the total size of the message. @return total size in bytes
java
clients/src/main/java/org/apache/kafka/common/protocol/MessageSizeAccumulator.java
31
[]
true
1
6.8
apache/kafka
31,560
javadoc
false
findIndefiniteField
public String findIndefiniteField() { String indefinite = patternFilter.findIndefiniteField(); if (indefinite != null) return indefinite; return entryFilter.findIndefiniteField(); }
Return a string describing an ANY or UNKNOWN field, or null if there is no such field.
java
clients/src/main/java/org/apache/kafka/common/acl/AclBindingFilter.java
93
[]
String
true
2
6.88
apache/kafka
31,560
javadoc
false
sort_graph_by_row_values
def sort_graph_by_row_values(graph, copy=False, warn_when_not_sorted=True): """Sort a sparse graph such that each row is stored with increasing values. .. versionadded:: 1.2 Parameters ---------- graph : sparse matrix of shape (n_samples, n_samples) Distance matrix to other samples, where only non-zero elements are considered neighbors. Matrix is converted to CSR format if not already. copy : bool, default=False If True, the graph is copied before sorting. If False, the sorting is performed inplace. If the graph is not of CSR format, `copy` must be True to allow the conversion to CSR format, otherwise an error is raised. warn_when_not_sorted : bool, default=True If True, a :class:`~sklearn.exceptions.EfficiencyWarning` is raised when the input graph is not sorted by row values. Returns ------- graph : sparse matrix of shape (n_samples, n_samples) Distance matrix to other samples, where only non-zero elements are considered neighbors. Matrix is in CSR format. Examples -------- >>> from scipy.sparse import csr_matrix >>> from sklearn.neighbors import sort_graph_by_row_values >>> X = csr_matrix( ... [[0., 3., 1.], ... [3., 0., 2.], ... [1., 2., 0.]]) >>> X.data array([3., 1., 3., 2., 1., 2.]) >>> X_ = sort_graph_by_row_values(X) >>> X_.data array([1., 3., 2., 3., 1., 2.]) """ if graph.format == "csr" and _is_sorted_by_data(graph): return graph if warn_when_not_sorted: warnings.warn( ( "Precomputed sparse input was not sorted by row values. Use the" " function sklearn.neighbors.sort_graph_by_row_values to sort the input" " by row values, with warn_when_not_sorted=False to remove this" " warning." ), EfficiencyWarning, ) if graph.format not in ("csr", "csc", "coo", "lil"): raise TypeError( f"Sparse matrix in {graph.format!r} format is not supported due to " "its handling of explicit zeros" ) elif graph.format != "csr": if not copy: raise ValueError( "The input graph is not in CSR format. Use copy=True to allow " "the conversion to CSR format." 
) graph = graph.asformat("csr") elif copy: # csr format with copy=True graph = graph.copy() row_nnz = np.diff(graph.indptr) if row_nnz.max() == row_nnz.min(): # if each sample has the same number of provided neighbors n_samples = graph.shape[0] distances = graph.data.reshape(n_samples, -1) order = np.argsort(distances, kind="mergesort") order += np.arange(n_samples)[:, None] * row_nnz[0] order = order.ravel() graph.data = graph.data[order] graph.indices = graph.indices[order] else: for start, stop in zip(graph.indptr, graph.indptr[1:]): order = np.argsort(graph.data[start:stop], kind="mergesort") graph.data[start:stop] = graph.data[start:stop][order] graph.indices[start:stop] = graph.indices[start:stop][order] return graph
Sort a sparse graph such that each row is stored with increasing values. .. versionadded:: 1.2 Parameters ---------- graph : sparse matrix of shape (n_samples, n_samples) Distance matrix to other samples, where only non-zero elements are considered neighbors. Matrix is converted to CSR format if not already. copy : bool, default=False If True, the graph is copied before sorting. If False, the sorting is performed inplace. If the graph is not of CSR format, `copy` must be True to allow the conversion to CSR format, otherwise an error is raised. warn_when_not_sorted : bool, default=True If True, a :class:`~sklearn.exceptions.EfficiencyWarning` is raised when the input graph is not sorted by row values. Returns ------- graph : sparse matrix of shape (n_samples, n_samples) Distance matrix to other samples, where only non-zero elements are considered neighbors. Matrix is in CSR format. Examples -------- >>> from scipy.sparse import csr_matrix >>> from sklearn.neighbors import sort_graph_by_row_values >>> X = csr_matrix( ... [[0., 3., 1.], ... [3., 0., 2.], ... [1., 2., 0.]]) >>> X.data array([3., 1., 3., 2., 1., 2.]) >>> X_ = sort_graph_by_row_values(X) >>> X_.data array([1., 3., 2., 3., 1., 2.])
python
sklearn/neighbors/_base.py
196
[ "graph", "copy", "warn_when_not_sorted" ]
false
11
7.6
scikit-learn/scikit-learn
64,340
numpy
false
postProcessAfterInstantiation
default boolean postProcessAfterInstantiation(Object bean, String beanName) throws BeansException { return true; }
Perform operations after the bean has been instantiated, via a constructor or factory method, but before Spring property population (from explicit properties or autowiring) occurs. <p>This is the ideal callback for performing custom field injection on the given bean instance, right before Spring's autowiring kicks in. <p>The default implementation returns {@code true}. @param bean the bean instance created, with properties not having been set yet @param beanName the name of the bean @return {@code true} if properties should be set on the bean; {@code false} if property population should be skipped. Normal implementations should return {@code true}. Returning {@code false} will also prevent any subsequent InstantiationAwareBeanPostProcessor instances being invoked on this bean instance. @throws org.springframework.beans.BeansException in case of errors @see #postProcessBeforeInstantiation
java
spring-beans/src/main/java/org/springframework/beans/factory/config/InstantiationAwareBeanPostProcessor.java
89
[ "bean", "beanName" ]
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
dot
def dot(a, b, strict=False, out=None): """ Return the dot product of two arrays. This function is the equivalent of `numpy.dot` that takes masked values into account. Note that `strict` and `out` are in different position than in the method version. In order to maintain compatibility with the corresponding method, it is recommended that the optional arguments be treated as keyword only. At some point that may be mandatory. Parameters ---------- a, b : masked_array_like Inputs arrays. strict : bool, optional Whether masked data are propagated (True) or set to 0 (False) for the computation. Default is False. Propagating the mask means that if a masked value appears in a row or column, the whole row or column is considered masked. out : masked_array, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. See Also -------- numpy.dot : Equivalent function for ndarrays. 
Examples -------- >>> import numpy as np >>> a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) >>> b = np.ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) >>> np.ma.dot(a, b) masked_array( data=[[21, 26], [45, 64]], mask=[[False, False], [False, False]], fill_value=999999) >>> np.ma.dot(a, b, strict=True) masked_array( data=[[--, --], [--, 64]], mask=[[ True, True], [ True, False]], fill_value=999999) """ if strict is True: if np.ndim(a) == 0 or np.ndim(b) == 0: pass elif b.ndim == 1: a = _mask_propagate(a, a.ndim - 1) b = _mask_propagate(b, b.ndim - 1) else: a = _mask_propagate(a, a.ndim - 1) b = _mask_propagate(b, b.ndim - 2) am = ~getmaskarray(a) bm = ~getmaskarray(b) if out is None: d = np.dot(filled(a, 0), filled(b, 0)) m = ~np.dot(am, bm) if np.ndim(d) == 0: d = np.asarray(d) r = d.view(get_masked_subclass(a, b)) r.__setmask__(m) return r else: d = np.dot(filled(a, 0), filled(b, 0), out._data) if out.mask.shape != d.shape: out._mask = np.empty(d.shape, MaskType) np.dot(am, bm, out._mask) np.logical_not(out._mask, out._mask) return out
Return the dot product of two arrays. This function is the equivalent of `numpy.dot` that takes masked values into account. Note that `strict` and `out` are in different position than in the method version. In order to maintain compatibility with the corresponding method, it is recommended that the optional arguments be treated as keyword only. At some point that may be mandatory. Parameters ---------- a, b : masked_array_like Inputs arrays. strict : bool, optional Whether masked data are propagated (True) or set to 0 (False) for the computation. Default is False. Propagating the mask means that if a masked value appears in a row or column, the whole row or column is considered masked. out : masked_array, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. See Also -------- numpy.dot : Equivalent function for ndarrays. Examples -------- >>> import numpy as np >>> a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) >>> b = np.ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) >>> np.ma.dot(a, b) masked_array( data=[[21, 26], [45, 64]], mask=[[False, False], [False, False]], fill_value=999999) >>> np.ma.dot(a, b, strict=True) masked_array( data=[[--, --], [--, 64]], mask=[[ True, True], [ True, False]], fill_value=999999)
python
numpy/ma/core.py
8,158
[ "a", "b", "strict", "out" ]
false
10
7.6
numpy/numpy
31,054
numpy
false
get_overlapping_candidate
def get_overlapping_candidate(): """ Return the next node in the ready queue that's neither a collective nor a wait. """ candidates = [ x for x in ready if not contains_collective(x.snode) and not contains_wait(x.snode) ] if len(candidates) == 0: return None return min(candidates, key=lambda x: x.score)
Return the next node in the ready queue that's neither a collective nor a wait.
python
torch/_inductor/comms.py
1,307
[]
false
3
6.24
pytorch/pytorch
96,034
unknown
false
create_token
def create_token( body: LoginBody, expiration_time_in_seconds: int = conf.getint("api_auth", "jwt_expiration_time") ) -> str: """ Authenticate user with given configuration. :param body: LoginBody should include username and password :param expiration_time_in_seconds: int expiration time in seconds """ is_simple_auth_manager_all_admins = conf.getboolean("core", "simple_auth_manager_all_admins") if is_simple_auth_manager_all_admins: return SimpleAuthManagerLogin._create_anonymous_admin_user( expiration_time_in_seconds=expiration_time_in_seconds ) if not body.username or not body.password: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail="Username and password must be provided", ) users = SimpleAuthManager.get_users() passwords = SimpleAuthManager.get_passwords() found_users = [ user for user in users if user["username"] == body.username and passwords[user["username"]] == body.password ] if len(found_users) == 0: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid credentials", ) user = SimpleAuthManagerUser( username=body.username, role=found_users[0]["role"], ) return get_auth_manager().generate_jwt( user=user, expiration_time_in_seconds=expiration_time_in_seconds )
Authenticate user with given configuration. :param body: LoginBody should include username and password :param expiration_time_in_seconds: int expiration time in seconds
python
airflow-core/src/airflow/api_fastapi/auth/managers/simple/services/login.py
33
[ "body", "expiration_time_in_seconds" ]
str
true
6
6.08
apache/airflow
43,597
sphinx
false
rest
function rest(func, start) { if (typeof func != 'function') { throw new TypeError(FUNC_ERROR_TEXT); } start = start === undefined ? start : toInteger(start); return baseRest(func, start); }
Creates a function that invokes `func` with the `this` binding of the created function and arguments from `start` and beyond provided as an array. **Note:** This method is based on the [rest parameter](https://mdn.io/rest_parameters). @static @memberOf _ @since 4.0.0 @category Function @param {Function} func The function to apply a rest parameter to. @param {number} [start=func.length-1] The start position of the rest parameter. @returns {Function} Returns the new function. @example var say = _.rest(function(what, names) { return what + ' ' + _.initial(names).join(', ') + (_.size(names) > 1 ? ', & ' : '') + _.last(names); }); say('hello', 'fred', 'barney', 'pebbles'); // => 'hello fred, barney, & pebbles'
javascript
lodash.js
10,902
[ "func", "start" ]
false
3
7.52
lodash/lodash
61,490
jsdoc
false
SuspenseTimelineInput
function SuspenseTimelineInput() { const bridge = useContext(BridgeContext); const treeDispatch = useContext(TreeDispatcherContext); const suspenseTreeDispatch = useContext(SuspenseTreeDispatcherContext); const scrollToHostInstance = useScrollToHostInstance(); const {timeline, timelineIndex, hoveredTimelineIndex, playing, autoScroll} = useContext(SuspenseTreeStateContext); const min = 0; const max = timeline.length > 0 ? timeline.length - 1 : 0; function switchSuspenseNode(nextTimelineIndex: number) { const nextSelectedSuspenseID = timeline[nextTimelineIndex].id; treeDispatch({ type: 'SELECT_ELEMENT_BY_ID', payload: nextSelectedSuspenseID, }); suspenseTreeDispatch({ type: 'SUSPENSE_SET_TIMELINE_INDEX', payload: nextTimelineIndex, }); } function handleChange(pendingTimelineIndex: number) { switchSuspenseNode(pendingTimelineIndex); } function handleFocus() { switchSuspenseNode(timelineIndex); } function handleHoverSegment(hoveredIndex: number) { const nextSelectedSuspenseID = timeline[hoveredIndex].id; suspenseTreeDispatch({ type: 'HOVER_TIMELINE_FOR_ID', payload: nextSelectedSuspenseID, }); } function handleUnhoverSegment() { suspenseTreeDispatch({ type: 'HOVER_TIMELINE_FOR_ID', payload: -1, }); } function skipPrevious() { const nextSelectedSuspenseID = timeline[timelineIndex - 1].id; treeDispatch({ type: 'SELECT_ELEMENT_BY_ID', payload: nextSelectedSuspenseID, }); suspenseTreeDispatch({ type: 'SUSPENSE_SKIP_TIMELINE_INDEX', payload: false, }); } function skipForward() { const nextSelectedSuspenseID = timeline[timelineIndex + 1].id; treeDispatch({ type: 'SELECT_ELEMENT_BY_ID', payload: nextSelectedSuspenseID, }); suspenseTreeDispatch({ type: 'SUSPENSE_SKIP_TIMELINE_INDEX', payload: true, }); } function togglePlaying() { suspenseTreeDispatch({ type: 'SUSPENSE_PLAY_PAUSE', payload: 'toggle', }); } // TODO: useEffectEvent here once it's supported in all versions DevTools supports. // For now we just exclude it from deps since we don't lint those anyway. 
function changeTimelineIndex(newIndex: number) { // Synchronize timeline index with what is resuspended. // We suspend everything after the current selection. The root isn't showing // anything suspended in the root. The step after that should have one less // thing suspended. I.e. the first suspense boundary should be unsuspended // when it's selected. This also lets you show everything in the last step. const suspendedSet = timeline.slice(timelineIndex + 1).map(step => step.id); bridge.send('overrideSuspenseMilestone', { suspendedSet, }); } useEffect(() => { changeTimelineIndex(timelineIndex); }, [timelineIndex]); useEffect(() => { if (autoScroll.id > 0) { const scrollToId = autoScroll.id; // Consume the scroll ref so that we only trigger this scroll once. autoScroll.id = 0; scrollToHostInstance(scrollToId); } }, [autoScroll]); useEffect(() => { if (!playing) { return undefined; } // While playing, advance one step every second. const PLAY_SPEED_INTERVAL = 1000; const timer = setInterval(() => { suspenseTreeDispatch({ type: 'SUSPENSE_PLAY_TICK', }); }, PLAY_SPEED_INTERVAL); return () => { clearInterval(timer); }; }, [playing]); if (timeline.length === 0) { return ( <div className={styles.SuspenseTimelineInput}> Root contains no Suspense nodes. </div> ); } return ( <> <Button disabled={timelineIndex === 0} title={'Previous'} onClick={skipPrevious}> <ButtonIcon type={'skip-previous'} /> </Button> <Button disabled={max === 0 && !playing} title={playing ? 'Pause' : 'Play'} onClick={togglePlaying}> <ButtonIcon type={playing ? 'pause' : 'play'} /> </Button> <Button disabled={timelineIndex === max} title={'Next'} onClick={skipForward}> <ButtonIcon type={'skip-next'} /> </Button> <div className={styles.SuspenseTimelineInput}> <SuspenseScrubber min={min} max={max} timeline={timeline} value={timelineIndex} highlight={hoveredTimelineIndex} onChange={handleChange} onFocus={handleFocus} onHoverSegment={handleHoverSegment} onHoverLeave={handleUnhoverSegment} /> </div> </> ); }
Copyright (c) Meta Platforms, Inc. and affiliates. This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree. @flow
javascript
packages/react-devtools-shared/src/devtools/views/SuspenseTab/SuspenseTimeline.js
24
[]
false
8
6.16
facebook/react
241,750
jsdoc
false
newConfiguration
protected Configuration newConfiguration() throws IOException, TemplateException { return new Configuration(Configuration.DEFAULT_INCOMPATIBLE_IMPROVEMENTS); }
Return a new {@link Configuration} object. <p>Subclasses can override this for custom initialization &mdash; for example, to specify a FreeMarker compatibility level (which is a new feature in FreeMarker 2.3.21), or to use a mock object for testing. <p>Called by {@link #createConfiguration()}. @return the {@code Configuration} object @throws IOException if a config file wasn't found @throws TemplateException on FreeMarker initialization failure @see #createConfiguration()
java
spring-context-support/src/main/java/org/springframework/ui/freemarker/FreeMarkerConfigurationFactory.java
350
[]
Configuration
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
gen_batches
def gen_batches(n, batch_size, *, min_batch_size=0): """Generator to create slices containing `batch_size` elements from 0 to `n`. The last slice may contain less than `batch_size` elements, when `batch_size` does not divide `n`. Parameters ---------- n : int Size of the sequence. batch_size : int Number of elements in each batch. min_batch_size : int, default=0 Minimum number of elements in each batch. Yields ------ slice of `batch_size` elements See Also -------- gen_even_slices: Generator to create n_packs slices going up to n. Examples -------- >>> from sklearn.utils import gen_batches >>> list(gen_batches(7, 3)) [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)] >>> list(gen_batches(6, 3)) [slice(0, 3, None), slice(3, 6, None)] >>> list(gen_batches(2, 3)) [slice(0, 2, None)] >>> list(gen_batches(7, 3, min_batch_size=0)) [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)] >>> list(gen_batches(7, 3, min_batch_size=2)) [slice(0, 3, None), slice(3, 7, None)] """ start = 0 for _ in range(int(n // batch_size)): end = start + batch_size if end + min_batch_size > n: continue yield slice(start, end) start = end if start < n: yield slice(start, n)
Generator to create slices containing `batch_size` elements from 0 to `n`. The last slice may contain less than `batch_size` elements, when `batch_size` does not divide `n`. Parameters ---------- n : int Size of the sequence. batch_size : int Number of elements in each batch. min_batch_size : int, default=0 Minimum number of elements in each batch. Yields ------ slice of `batch_size` elements See Also -------- gen_even_slices: Generator to create n_packs slices going up to n. Examples -------- >>> from sklearn.utils import gen_batches >>> list(gen_batches(7, 3)) [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)] >>> list(gen_batches(6, 3)) [slice(0, 3, None), slice(3, 6, None)] >>> list(gen_batches(2, 3)) [slice(0, 2, None)] >>> list(gen_batches(7, 3, min_batch_size=0)) [slice(0, 3, None), slice(3, 6, None), slice(6, 7, None)] >>> list(gen_batches(7, 3, min_batch_size=2)) [slice(0, 3, None), slice(3, 7, None)]
python
sklearn/utils/_chunking.py
33
[ "n", "batch_size", "min_batch_size" ]
false
4
7.2
scikit-learn/scikit-learn
64,340
numpy
false
onHeartbeatFailure
private void onHeartbeatFailure() { // The leave group request is sent out once (not retried), so we should complete the leave // operation once the request completes, regardless of the response. if (state == MemberState.UNSUBSCRIBED && maybeCompleteLeaveInProgress()) { log.warn("Member {} with epoch {} received a failed response to the heartbeat to " + "leave the group and completed the leave operation. ", memberId, memberEpoch); } }
Notify the member that a fatal error heartbeat response was received.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsMembershipManager.java
761
[]
void
true
3
7.04
apache/kafka
31,560
javadoc
false
remove
@CanIgnoreReturnValue int remove(@CompatibleWith("E") @Nullable Object element, int occurrences);
Removes a number of occurrences of the specified element from this multiset. If the multiset contains fewer than this number of occurrences to begin with, all occurrences will be removed. Note that if {@code occurrences == 1}, this is functionally equivalent to the call {@code remove(element)}. @param element the element to conditionally remove occurrences of @param occurrences the number of occurrences of the element to remove. May be zero, in which case no change will be made. @return the count of the element before the operation; possibly zero @throws IllegalArgumentException if {@code occurrences} is negative
java
android/guava/src/com/google/common/collect/Multiset.java
175
[ "element", "occurrences" ]
true
1
6.32
google/guava
51,352
javadoc
false
get
@Override public @Nullable V get(@Nullable Object key) { int index = keySet.indexOf(key); return (index == -1) ? null : valueList.get(index); }
Returns the value to which the specified key is mapped, or {@code null} if this map contains no mapping for the key, by looking up the key's index in the sorted key set.
java
android/guava/src/com/google/common/collect/ImmutableSortedMap.java
833
[ "key" ]
V
true
2
6.64
google/guava
51,352
javadoc
false
resolveClass
@Override protected Class<?> resolveClass(final ObjectStreamClass desc) throws IOException, ClassNotFoundException { final String name = desc.getName(); try { return Class.forName(name, false, classLoader); } catch (final ClassNotFoundException ex) { try { return Class.forName(name, false, Thread.currentThread().getContextClassLoader()); } catch (final ClassNotFoundException cnfe) { final Class<?> cls = ClassUtils.getPrimitiveClass(name); if (cls != null) { return cls; } throw cnfe; } } }
Overridden version that uses the parameterized {@link ClassLoader} or the {@link ClassLoader} of the current {@link Thread} to resolve the class. @param desc An instance of class {@link ObjectStreamClass}. @return A {@link Class} object corresponding to {@code desc}. @throws IOException Any of the usual Input/Output exceptions. @throws ClassNotFoundException If class of a serialized object cannot be found.
java
src/main/java/org/apache/commons/lang3/SerializationUtils.java
92
[ "desc" ]
true
4
7.6
apache/commons-lang
2,896
javadoc
false
rewriteCallStack
private static CacheOperationInvoker.ThrowableWrapper rewriteCallStack( Throwable exception, String className, String methodName) { Throwable clone = cloneException(exception); if (clone == null) { return new CacheOperationInvoker.ThrowableWrapper(exception); } StackTraceElement[] callStack = new Exception().getStackTrace(); StackTraceElement[] cachedCallStack = exception.getStackTrace(); int index = findCommonAncestorIndex(callStack, className, methodName); int cachedIndex = findCommonAncestorIndex(cachedCallStack, className, methodName); if (index == -1 || cachedIndex == -1) { return new CacheOperationInvoker.ThrowableWrapper(exception); // Cannot find common ancestor } StackTraceElement[] result = new StackTraceElement[cachedIndex + callStack.length - index]; System.arraycopy(cachedCallStack, 0, result, 0, cachedIndex); System.arraycopy(callStack, index, result, cachedIndex, callStack.length - index); clone.setStackTrace(result); return new CacheOperationInvoker.ThrowableWrapper(clone); }
Rewrite the call stack of the specified {@code exception} so that it matches the current call stack up to (included) the specified method invocation. <p>Clone the specified exception. If the exception is not {@code serializable}, the original exception is returned. If no common ancestor can be found, returns the original exception. <p>Used to make sure that a cached exception has a valid invocation context. @param exception the exception to merge with the current call stack @param className the class name of the common ancestor @param methodName the method name of the common ancestor @return a clone exception with a rewritten call stack composed of the current call stack up to (included) the common ancestor specified by the {@code className} and {@code methodName} arguments, followed by stack trace elements of the specified {@code exception} after the common ancestor.
java
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/CacheResultInterceptor.java
124
[ "exception", "className", "methodName" ]
true
4
7.92
spring-projects/spring-framework
59,386
javadoc
false
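The stack-splicing step in `rewriteCallStack` can be sketched without JVM types: find the common ancestor frame in both stacks, then keep the cached frames above it and the current frames from it down. The frame representation (`(class, method)` tuples) and helper names here are illustrative, not Spring's:

```python
def find_common_ancestor_index(stack, class_name, method_name):
    """Index of the first frame matching the given class/method, or -1."""
    for i, (cls, method) in enumerate(stack):
        if cls == class_name and method == method_name:
            return i
    return -1

def rewrite_call_stack(cached_stack, current_stack, class_name, method_name):
    """Splice the current call stack onto the cached one at the common ancestor.

    Returns the cached frames above the ancestor followed by the current
    frames from the ancestor down, or None if no common ancestor exists
    (the original falls back to the unmodified exception in that case).
    """
    idx = find_common_ancestor_index(current_stack, class_name, method_name)
    cached_idx = find_common_ancestor_index(cached_stack, class_name, method_name)
    if idx == -1 or cached_idx == -1:
        return None
    return cached_stack[:cached_idx] + current_stack[idx:]
```

This mirrors the two `System.arraycopy` calls: `cachedIndex` frames from the cached stack, then everything from `index` onward in the current one.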
deleteStreamsGroupOffsets
DeleteStreamsGroupOffsetsResult deleteStreamsGroupOffsets(String groupId, Set<TopicPartition> partitions, DeleteStreamsGroupOffsetsOptions options);
Delete committed offsets for a set of partitions in a streams group. This will succeed at the partition level only if the group is not actively subscribed to the corresponding topic. <em>Note</em>: this method effectively does the same as the corresponding consumer group method {@link Admin#deleteConsumerGroupOffsets} does. @param groupId The id of the streams group whose committed offsets will be deleted. @param partitions The set of topic partitions to delete committed offsets for. @param options The options to use when deleting offsets in a streams group. @return The DeleteStreamsGroupOffsetsResult.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
1,047
[ "groupId", "partitions", "options" ]
DeleteStreamsGroupOffsetsResult
true
1
6.16
apache/kafka
31,560
javadoc
false
hashCode
@Override public int hashCode() { int result = this.beanName.hashCode(); result = 29 * result + (this.toParent ? 1 : 0); return result; }
Computes a hash code from the target bean name and the parent-factory flag, consistent with {@link #equals}.
java
spring-beans/src/main/java/org/springframework/beans/factory/config/RuntimeBeanReference.java
164
[]
true
2
6.88
spring-projects/spring-framework
59,386
javadoc
false
matches
@Override public boolean matches(Class<?> clazz) { Assert.state(this.aspectJTypePatternMatcher != null, "No type pattern has been set"); return this.aspectJTypePatternMatcher.matches(clazz); }
Should the pointcut apply to the given interface or target class? @param clazz candidate target class @return whether the advice should apply to this candidate target class @throws IllegalStateException if no {@link #setTypePattern(String)} has been set
java
spring-aop/src/main/java/org/springframework/aop/aspectj/TypePatternClassFilter.java
102
[ "clazz" ]
true
1
6.24
spring-projects/spring-framework
59,386
javadoc
false
_categorize_task_instances
def _categorize_task_instances( self, task_keys: set[tuple[str, str, str, int]] ) -> tuple[ dict[tuple[str, str, str, int], TI], set[tuple[str, str, str, int]], set[tuple[str, str, str, int]] ]: """ Categorize the given task_keys into matched and not_found based on existing task instances. :param task_keys: set of task_keys (tuple of dag_id, dag_run_id, task_id, and map_index) :return: tuple of (task_instances_map, matched_task_keys, not_found_task_keys) """ # Filter at database level using exact tuple matching instead of fetching all combinations # and filtering in Python task_keys_list = list(task_keys) query = select(TI).where(tuple_(TI.dag_id, TI.run_id, TI.task_id, TI.map_index).in_(task_keys_list)) task_instances = self.session.scalars(query).all() task_instances_map = { (ti.dag_id, ti.run_id, ti.task_id, ti.map_index if ti.map_index is not None else -1): ti for ti in task_instances } matched_task_keys = set(task_instances_map.keys()) not_found_task_keys = task_keys - matched_task_keys return task_instances_map, matched_task_keys, not_found_task_keys
Categorize the given task_keys into matched and not_found based on existing task instances. :param task_keys: set of task_keys (tuple of dag_id, dag_run_id, task_id, and map_index) :return: tuple of (task_instances_map, matched_task_keys, not_found_task_keys)
python
airflow-core/src/airflow/api_fastapi/core_api/services/public/task_instances.py
234
[ "self", "task_keys" ]
tuple[ dict[tuple[str, str, str, int], TI], set[tuple[str, str, str, int]], set[tuple[str, str, str, int]] ]
true
2
7.92
apache/airflow
43,597
sphinx
false
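The categorization step above is a straightforward set partition once the matching instances are known. A minimal sketch with the database query replaced by an in-memory `existing` mapping (the helper name and shapes are illustrative, not Airflow's API):

```python
def categorize_task_keys(task_keys, existing):
    """Split task_keys into (instances_map, matched, not_found).

    `existing` maps task keys -- (dag_id, run_id, task_id, map_index) tuples --
    to task-instance objects, standing in for the SELECT in the original.
    """
    instances_map = {key: ti for key, ti in existing.items() if key in task_keys}
    matched = set(instances_map)
    # Whatever was requested but not found in storage.
    not_found = task_keys - matched
    return instances_map, matched, not_found
```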
set_locale
def set_locale( new_locale: str | tuple[str, str], lc_var: int = locale.LC_ALL ) -> Generator[str | tuple[str, str]]: """ Context manager for temporarily setting a locale. Parameters ---------- new_locale : str or tuple A string of the form <language_country>.<encoding>. For example to set the current locale to US English with a UTF8 encoding, you would pass "en_US.UTF-8". lc_var : int, default `locale.LC_ALL` The category of the locale being set. Notes ----- This is useful when you want to run a particular block of code under a particular locale, without globally setting the locale. This probably isn't thread-safe. """ # getlocale is not always compliant with setlocale, use setlocale. GH#46595 current_locale = locale.setlocale(lc_var) try: locale.setlocale(lc_var, new_locale) normalized_code, normalized_encoding = locale.getlocale() if normalized_code is not None and normalized_encoding is not None: yield f"{normalized_code}.{normalized_encoding}" else: yield new_locale finally: locale.setlocale(lc_var, current_locale)
Context manager for temporarily setting a locale. Parameters ---------- new_locale : str or tuple A string of the form <language_country>.<encoding>. For example to set the current locale to US English with a UTF8 encoding, you would pass "en_US.UTF-8". lc_var : int, default `locale.LC_ALL` The category of the locale being set. Notes ----- This is useful when you want to run a particular block of code under a particular locale, without globally setting the locale. This probably isn't thread-safe.
python
pandas/_config/localization.py
26
[ "new_locale", "lc_var" ]
Generator[str | tuple[str, str]]
true
4
6.88
pandas-dev/pandas
47,362
numpy
false
updateNodeLatencyStats
public void updateNodeLatencyStats(Integer nodeId, long nowMs, boolean canDrain) { // Don't bother with updating stats if the feature is turned off. if (partitionAvailabilityTimeoutMs <= 0) return; // When the sender gets a node (returned by the ready() function) that has data to send // but the node is not ready (and so we cannot drain the data), we only update the // ready time, then the difference would reflect for how long a node wasn't ready // to send the data. Then we can temporarily remove partitions that are handled by the // node from the list of available partitions so that the partitioner wouldn't pick // this partition. // NOTE: there is no synchronization for metric updates, so drainTimeMs is updated // first to avoid accidentally marking a partition unavailable if the reader gets // values between updates. NodeLatencyStats nodeLatencyStats = nodeStats.computeIfAbsent(nodeId, id -> new NodeLatencyStats(nowMs)); if (canDrain) nodeLatencyStats.drainTimeMs = nowMs; nodeLatencyStats.readyTimeMs = nowMs; }
Update the latency statistics for the given node. When the sender gets a node (returned by {@code ready()}) that has data to send but cannot be drained, only the ready time is updated, so the gap between ready time and drain time reflects how long the node has been unable to send; partitions handled by such a node can then be temporarily excluded from partitioner choices. Does nothing when {@code partitionAvailabilityTimeoutMs} is non-positive (feature disabled). @param nodeId the id of the node @param nowMs the current time in milliseconds @param canDrain whether data can currently be drained to the node
java
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
971
[ "nodeId", "nowMs", "canDrain" ]
void
true
3
8.08
apache/kafka
31,560
javadoc
false
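The bookkeeping in `updateNodeLatencyStats` boils down to two timestamps per node. A sketch with plain dicts (names and the default timeout are illustrative, not Kafka's):

```python
def update_node_latency_stats(node_stats, node_id, now_ms, can_drain, timeout_ms=1000):
    """Record when a node was last ready and last drainable.

    A growing gap between ready_ms and drain_ms means the node has had
    data pending without being able to send it.
    """
    if timeout_ms <= 0:
        return  # feature disabled, mirroring the original guard
    stats = node_stats.setdefault(node_id, {"ready_ms": now_ms, "drain_ms": now_ms})
    if can_drain:
        # The original updates drain time first to avoid a reader briefly
        # seeing a stale drain time against a fresh ready time.
        stats["drain_ms"] = now_ms
    stats["ready_ms"] = now_ms
```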
getAnnotationElementValues
Map<String, Object> getAnnotationElementValues(AnnotationMirror annotation) { Map<String, Object> values = new LinkedHashMap<>(); annotation.getElementValues() .forEach((name, value) -> values.put(name.getSimpleName().toString(), getAnnotationValue(value))); return values; }
Extract the element values declared on the specified {@link AnnotationMirror annotation}, keyed by element name. @param annotation the annotation to inspect @return the annotation's element values, keyed by each element's simple name
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/MetadataGenerationEnvironment.java
312
[ "annotation" ]
true
1
6.08
spring-projects/spring-boot
79,428
javadoc
false
_get_index_expr
def _get_index_expr(self, index: sympy.Expr) -> tuple[str, bool]: """ Get the index expression string and whether it needs flattening. Returns: Tuple of (index_str, needs_flatten) where needs_flatten indicates if the buffer should be flattened before indexing (for mixed indexing). """ has_indirect = self._has_indirect_vars(index) has_iter_vars = self._has_iteration_vars(index) if has_indirect and has_iter_vars: return self._handle_mixed_indexing(index), True elif has_indirect: return self.kexpr(index), False else: index_str = self._get_index_str(index) # Check if index contains ModularIndexing - this requires flattened access # ModularIndexing is used for roll/wrap-around operations needs_flatten = index.has(ModularIndexing) and index_str != "..." # If index_str is an actual expression (not "..." or a slice pattern), # we need flattened access because it uses block variables if not needs_flatten and index_str != "...": # Check if it's a simple slice pattern (::N or M::N) if not ("::" in index_str or index_str.lstrip("-").isdigit()): needs_flatten = True return index_str, needs_flatten
Get the index expression string and whether it needs flattening. Returns: Tuple of (index_str, needs_flatten) where needs_flatten indicates if the buffer should be flattened before indexing (for mixed indexing).
python
torch/_inductor/codegen/pallas.py
1,200
[ "self", "index" ]
tuple[str, bool]
true
10
7.6
pytorch/pytorch
96,034
unknown
false
calculate_tflops
def calculate_tflops( config: ExperimentConfig, time_us: float, is_backward: bool = False, sparsity: float = 0.0, ) -> float: """ Calculate TFLOPS for scaled dot product attention. Parameters: - config: The experiment configuration - time_us: The execution time in microseconds - is_backward: Whether to calculate for backward pass (includes gradient computation) - sparsity: Sparsity factor between 0.0 and 1.0, where 0.0 means no sparsity and 1.0 means fully sparse Returns: - TFLOPS value """ B = config.batch_size H = config.num_heads M = config.q_seq_len N = config.kv_seq_len D = config.head_dim # Calculate density factor (1.0 - sparsity) density = 1.0 - sparsity # Forward pass FLOPs qk_flops = ( M * N * D * 2 ) # Q*K^T matmul: (M,D) @ (D,N) with 2 FLOPs per multiply-add softmax_flops = M * N * 2 # Softmax operations (exp and div) av_flops = ( M * N * D * 2 ) # Attention @ V: (M,N) @ (N,D) with 2 FLOPs per multiply-add total_flops = B * H * (qk_flops + softmax_flops + av_flops) # Apply density factor to account for sparsity total_flops *= density # For backward pass flash uses 2.5x more flops will use this if is_backward: total_flops *= 2.5 # Convert to TFLOPS: flops / (time_us * 1e-6) / 1e12 tflops = total_flops / (time_us * 1e-6) / 1e12 return tflops
Calculate TFLOPS for scaled dot product attention. Parameters: - config: The experiment configuration - time_us: The execution time in microseconds - is_backward: Whether to calculate for backward pass (includes gradient computation) - sparsity: Sparsity factor between 0.0 and 1.0, where 0.0 means no sparsity and 1.0 means fully sparse Returns: - TFLOPS value
python
benchmarks/transformer/sdpa.py
82
[ "config", "time_us", "is_backward", "sparsity" ]
float
true
2
8.24
pytorch/pytorch
96,034
google
false
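The FLOP accounting in `calculate_tflops` is easy to check numerically. A direct restatement of the same formula (function name and signature are illustrative):

```python
def attention_tflops(batch, heads, q_len, kv_len, head_dim, time_us,
                     is_backward=False, sparsity=0.0):
    """Approximate TFLOPS for scaled dot-product attention."""
    qk = q_len * kv_len * head_dim * 2   # Q @ K^T: 2 FLOPs per multiply-add
    softmax = q_len * kv_len * 2         # exp and normalize
    av = q_len * kv_len * head_dim * 2   # attention @ V
    flops = batch * heads * (qk + softmax + av) * (1.0 - sparsity)
    if is_backward:
        flops *= 2.5                     # rule of thumb the original uses for flash backward
    # flops / seconds / 1e12
    return flops / (time_us * 1e-6) / 1e12
```

For B=H=1, M=N=128, D=64 the forward count is 2·(128·128·64·2) + 128·128·2 = 4,227,072 FLOPs, so a 1 µs run is about 4.23 TFLOPS.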
_ensure_nanosecond_dtype
def _ensure_nanosecond_dtype(dtype: DtypeObj) -> None: """ Convert dtypes with granularity less than nanosecond to nanosecond >>> _ensure_nanosecond_dtype(np.dtype("M8[us]")) >>> _ensure_nanosecond_dtype(np.dtype("M8[D]")) Traceback (most recent call last): ... TypeError: dtype=datetime64[D] is not supported. Supported resolutions are 's', 'ms', 'us', and 'ns' >>> _ensure_nanosecond_dtype(np.dtype("m8[ps]")) Traceback (most recent call last): ... TypeError: dtype=timedelta64[ps] is not supported. Supported resolutions are 's', 'ms', 'us', and 'ns' """ # noqa: E501 msg = ( f"The '{dtype.name}' dtype has no unit. " f"Please pass in '{dtype.name}[ns]' instead." ) # unpack e.g. SparseDtype dtype = getattr(dtype, "subtype", dtype) if not isinstance(dtype, np.dtype): # i.e. datetime64tz pass elif dtype.kind in "mM": if not is_supported_dtype(dtype): # pre-2.0 we would silently swap in nanos for lower-resolutions, # raise for above-nano resolutions if dtype.name in ["datetime64", "timedelta64"]: raise ValueError(msg) # TODO: ValueError or TypeError? existing test # test_constructor_generic_timestamp_bad_frequency expects TypeError raise TypeError( f"dtype={dtype} is not supported. Supported resolutions are 's', " "'ms', 'us', and 'ns'" )
Convert dtypes with granularity less than nanosecond to nanosecond >>> _ensure_nanosecond_dtype(np.dtype("M8[us]")) >>> _ensure_nanosecond_dtype(np.dtype("M8[D]")) Traceback (most recent call last): ... TypeError: dtype=datetime64[D] is not supported. Supported resolutions are 's', 'ms', 'us', and 'ns' >>> _ensure_nanosecond_dtype(np.dtype("m8[ps]")) Traceback (most recent call last): ... TypeError: dtype=timedelta64[ps] is not supported. Supported resolutions are 's', 'ms', 'us', and 'ns'
python
pandas/core/dtypes/cast.py
1,117
[ "dtype" ]
None
true
5
8
pandas-dev/pandas
47,362
unknown
false
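The validation logic in `_ensure_nanosecond_dtype` (unit-less names raise one error, unsupported units another, everything else passes) can be sketched on dtype name strings alone, avoiding a NumPy dependency. The regex-based helper below is illustrative, not pandas' implementation:

```python
import re

_SUPPORTED = {"s", "ms", "us", "ns"}
_PATTERN = re.compile(r"^(datetime64|timedelta64)(?:\[(\w+)\])?$")

def ensure_supported_unit(dtype_name: str) -> None:
    """Reject datetime64/timedelta64 names whose unit cannot be stored."""
    match = _PATTERN.match(dtype_name)
    if match is None:
        return  # not a datetime-like dtype name; nothing to check
    unit = match.group(2)
    if unit is None:
        # Mirrors the "has no unit" ValueError in the original.
        raise ValueError(f"The '{match.group(1)}' dtype has no unit.")
    if unit not in _SUPPORTED:
        raise TypeError(
            f"dtype={dtype_name} is not supported. Supported resolutions "
            "are 's', 'ms', 'us', and 'ns'"
        )
```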
contains
public boolean contains(final T element) { if (element == null) { return false; } return comparator.compare(element, minimum) > -1 && comparator.compare(element, maximum) < 1; }
Checks whether the specified element occurs within this range. @param element the element to check for, null returns false. @return true if the specified element occurs within this range.
java
src/main/java/org/apache/commons/lang3/Range.java
247
[ "element" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
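`Range.contains` is a null-safe closed-interval test driven by a comparator. The same idea in Python, with a key function standing in for the `Comparator` (the `make_range` factory is illustrative, not Commons Lang's API):

```python
def make_range(minimum, maximum, key=lambda x: x):
    """Closed-range membership test using a comparison key; None is never contained."""
    def contains(element):
        if element is None:
            return False
        # Both bounds inclusive, matching comparator.compare(...) > -1 / < 1.
        return key(minimum) <= key(element) <= key(maximum)
    return contains

in_range = make_range(3, 7)
case_insensitive = make_range("b", "d", key=str.lower)
```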
rpartition
def rpartition(a, sep): """ Partition (split) each element around the right-most separator. Calls :meth:`str.rpartition` element-wise. For each element in `a`, split the element as the last occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing the string itself, followed by two empty strings. Parameters ---------- a : array-like, with ``StringDType``, ``bytes_``, or ``str_`` dtype Input array sep : str or unicode Right-most separator to split each element in array. Returns ------- out : ndarray Output array of ``StringDType``, ``bytes_`` or ``str_`` dtype, depending on input types. The output array will have an extra dimension with 3 elements per input element. See Also -------- str.rpartition Examples -------- >>> import numpy as np >>> a = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> np.char.rpartition(a, 'A') array([['aAaAa', 'A', ''], [' a', 'A', ' '], ['abB', 'A', 'Bba']], dtype='<U5') """ return np.stack(strings_rpartition(a, sep), axis=-1)
Partition (split) each element around the right-most separator. Calls :meth:`str.rpartition` element-wise. For each element in `a`, split the element as the last occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing the string itself, followed by two empty strings. Parameters ---------- a : array-like, with ``StringDType``, ``bytes_``, or ``str_`` dtype Input array sep : str or unicode Right-most separator to split each element in array. Returns ------- out : ndarray Output array of ``StringDType``, ``bytes_`` or ``str_`` dtype, depending on input types. The output array will have an extra dimension with 3 elements per input element. See Also -------- str.rpartition Examples -------- >>> import numpy as np >>> a = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> np.char.rpartition(a, 'A') array([['aAaAa', 'A', ''], [' a', 'A', ' '], ['abB', 'A', 'Bba']], dtype='<U5')
python
numpy/_core/defchararray.py
361
[ "a", "sep" ]
false
1
6.32
numpy/numpy
31,054
numpy
false
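The element-wise behavior that `np.char.rpartition` documents comes straight from `str.rpartition`. A pure-Python sketch that returns rows instead of an extra array dimension (helper name is illustrative):

```python
def rpartition_all(strings, sep):
    """Apply str.rpartition element-wise, returning rows of (head, sep, tail).

    When sep is absent, str.rpartition puts the whole string in the tail
    slot: ('', '', s).
    """
    return [list(s.rpartition(sep)) for s in strings]
```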
calcDeadlineMs
private long calcDeadlineMs(long now, Integer optionTimeoutMs) { if (optionTimeoutMs != null) return now + Math.max(0, optionTimeoutMs); return now + defaultApiTimeoutMs; }
Get the deadline for a particular call. @param now The current time in milliseconds. @param optionTimeoutMs The timeout option given by the user. @return The deadline in milliseconds.
java
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
494
[ "now", "optionTimeoutMs" ]
true
2
7.92
apache/kafka
31,560
javadoc
false
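`calcDeadlineMs` is a small clamp-or-default computation. The same logic in Python (the default value is illustrative, not Kafka's configured default):

```python
DEFAULT_API_TIMEOUT_MS = 30_000  # illustrative default

def calc_deadline_ms(now_ms, option_timeout_ms=None, default_ms=DEFAULT_API_TIMEOUT_MS):
    """Deadline = now + user timeout (clamped at 0), or now + default if unset."""
    if option_timeout_ms is not None:
        # Negative user timeouts are treated as an immediate deadline.
        return now_ms + max(0, option_timeout_ms)
    return now_ms + default_ms
```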
determineHighestPriorityCandidate
protected @Nullable String determineHighestPriorityCandidate(Map<String, Object> candidates, Class<?> requiredType) { String highestPriorityBeanName = null; Integer highestPriority = null; boolean highestPriorityConflictDetected = false; for (Map.Entry<String, Object> entry : candidates.entrySet()) { String candidateBeanName = entry.getKey(); Object beanInstance = entry.getValue(); if (beanInstance != null) { Integer candidatePriority = getPriority(beanInstance); if (candidatePriority != null) { if (highestPriority != null) { if (candidatePriority.equals(highestPriority)) { highestPriorityConflictDetected = true; } else if (candidatePriority < highestPriority) { highestPriorityBeanName = candidateBeanName; highestPriority = candidatePriority; highestPriorityConflictDetected = false; } } else { highestPriorityBeanName = candidateBeanName; highestPriority = candidatePriority; } } } } if (highestPriorityConflictDetected) { throw new NoUniqueBeanDefinitionException(requiredType, candidates.size(), "Multiple beans found with the same highest priority (" + highestPriority + ") among candidates: " + candidates.keySet()); } return highestPriorityBeanName; }
Determine the candidate with the highest priority in the given set of beans. <p>Based on {@code @jakarta.annotation.Priority}. As defined by the related {@link org.springframework.core.Ordered} interface, the lowest value has the highest priority. @param candidates a Map of candidate names and candidate instances (or candidate classes if not created yet) that match the required type @param requiredType the target dependency type to match against @return the name of the candidate with the highest priority, or {@code null} if none found @throws NoUniqueBeanDefinitionException if multiple beans are detected with the same highest priority value @see #getPriority(Object)
java
spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultListableBeanFactory.java
2,137
[ "candidates", "requiredType" ]
String
true
7
7.44
spring-projects/spring-framework
59,386
javadoc
false
_check_ne_builtin_clash
def _check_ne_builtin_clash(expr: Expr) -> None: """ Attempt to prevent foot-shooting in a helpful way. Parameters ---------- expr : Expr Terms can contain """ names = expr.names overlap = names & _ne_builtins if overlap: s = ", ".join([repr(x) for x in overlap]) raise NumExprClobberingError( f'Variables in expression "{expr}" overlap with builtins: ({s})' )
Attempt to prevent foot-shooting in a helpful way. Parameters ---------- expr : Expr Terms can contain
python
pandas/core/computation/engines.py
29
[ "expr" ]
None
true
2
6.56
pandas-dev/pandas
47,362
numpy
false
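The clash check is a set intersection against the engine's builtin names plus a formatted error. A self-contained sketch (the builtin set here is an illustrative subset, not numexpr's actual list, and `NameError` stands in for pandas' `NumExprClobberingError`):

```python
_NE_BUILTINS = frozenset({"sin", "cos", "log", "exp", "abs", "where"})  # illustrative subset

def check_builtin_clash(expr_names, expr_text=""):
    """Raise if any variable name in the expression shadows an engine builtin."""
    overlap = set(expr_names) & _NE_BUILTINS
    if overlap:
        listed = ", ".join(repr(name) for name in sorted(overlap))
        raise NameError(
            f'Variables in expression "{expr_text}" overlap with builtins: ({listed})'
        )
```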
evaluate_knapsack_output
def evaluate_knapsack_output( self, saved_nodes_idxs: list[int], recomputable_node_idxs: list[int], account_for_backward_pass: bool = False, ) -> dict[str, float]: """ Evaluate the theoretical runtime and peak memory usage of a given checkpointing strategy. Args: - saved_nodes_idxs (List[int]): The indices of nodes that are saved. - recomputable_node_idxs (List[int]): The indices of nodes that need to be recomputed. """ self._validate_all_indexes_accounted_for_in_provided_output( saved_nodes_idxs, recomputable_node_idxs ) recomputation_runtime = sum( self._graph_info_provider.all_node_runtimes[ self._graph_info_provider.all_recomputable_banned_nodes[node] ] for node in recomputable_node_idxs ) if account_for_backward_pass: memory_list = self._get_backward_memory_from_topologically_sorted_graph( node_graph=self._graph_info_provider.recomputable_node_only_graph_with_larger_graph_context, saved_nodes_set={ self._graph_info_provider.all_recomputable_banned_nodes[i] for i in saved_nodes_idxs }, node_memories=self._graph_info_provider.all_node_memories, peak_memory_after_forward_pass=sum( self._graph_info_provider.all_node_memories[ self._graph_info_provider.all_recomputable_banned_nodes[i] ] for i in saved_nodes_idxs ), ) peak_memory = max(memory_list, key=operator.itemgetter(0))[0] else: peak_memory = sum( self._graph_info_provider.all_node_memories[ self._graph_info_provider.all_recomputable_banned_nodes[node] ] for node in saved_nodes_idxs ) return { "peak_memory": peak_memory, "recomputation_runtime": recomputation_runtime, "non_ac_peak_memory": self._graph_info_provider.get_non_ac_peak_memory(), "theoretical_max_runtime": self._graph_info_provider.get_theoretical_max_runtime(), "percentage_of_theoretical_peak_memory": peak_memory / self._graph_info_provider.get_non_ac_peak_memory(), "percentage_of_theoretical_peak_runtime": recomputation_runtime / self._graph_info_provider.get_theoretical_max_runtime(), }
Evaluate the theoretical runtime and peak memory usage of a given checkpointing strategy. Args: - saved_nodes_idxs (List[int]): The indices of nodes that are saved. - recomputable_node_idxs (List[int]): The indices of nodes that need to be recomputed.
python
torch/_functorch/_activation_checkpointing/knapsack_evaluator.py
133
[ "self", "saved_nodes_idxs", "recomputable_node_idxs", "account_for_backward_pass" ]
dict[str, float]
true
3
6.64
pytorch/pytorch
96,034
google
false
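Stripped of the graph plumbing, the evaluation above sums memory over saved nodes and runtime over recomputed nodes. A minimal sketch of that core trade-off, ignoring the backward-pass accounting path (names and shapes are illustrative):

```python
def evaluate_checkpointing(node_memories, node_runtimes, saved_idxs, recompute_idxs):
    """Trade-off of a checkpointing choice: memory kept vs. runtime re-spent.

    Every node index must appear in exactly one of the two groups,
    mirroring the validation in the original.
    """
    all_idxs = set(range(len(node_memories)))
    if set(saved_idxs) | set(recompute_idxs) != all_idxs:
        raise ValueError("every node must be saved or recomputed")
    if set(saved_idxs) & set(recompute_idxs):
        raise ValueError("a node cannot be both saved and recomputed")
    peak_memory = sum(node_memories[i] for i in saved_idxs)
    recompute_runtime = sum(node_runtimes[i] for i in recompute_idxs)
    return {"peak_memory": peak_memory, "recomputation_runtime": recompute_runtime}
```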
writeMetadata
private void writeMetadata(ConfigurationMetadata metadata, FileObjectSupplier fileObjectProvider) throws IOException { if (!metadata.getItems().isEmpty()) { try (OutputStream outputStream = fileObjectProvider.get().openOutputStream()) { new JsonMarshaller().write(metadata, outputStream); } } }
Write the metadata to the {@link FileObject} provided by the given supplier, skipping the write entirely when the metadata has no items. @param metadata the metadata to write @param fileObjectProvider a supplier for the {@link FileObject} to use @throws IOException if the metadata cannot be written
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/MetadataStore.java
122
[ "metadata", "fileObjectProvider" ]
void
true
2
6.56
spring-projects/spring-boot
79,428
javadoc
false