Dataset columns (name: dtype, observed range): function_name: string (1–57 chars), function_code: string (20–4.99k chars), documentation: string (50–2k chars), language: 5 classes, file_path: string (8–166 chars), line_number: int32 (4–16.7k), parameters: list (0–20 items), return_type: string (0–131 chars), has_type_hints: bool, complexity: int32 (1–51), quality_score: float32 (6–9.68), repo_name: 34 classes, repo_stars: int32 (2.9k–242k), docstring_style: 7 classes, is_async: bool

| function_name | function_code | documentation | language | file_path | line_number | parameters | return_type | has_type_hints | complexity | quality_score | repo_name | repo_stars | docstring_style | is_async |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
removeElement
|
public static boolean[] removeElement(final boolean[] array, final boolean element) {
final int index = indexOf(array, element);
return index == INDEX_NOT_FOUND ? clone(array) : remove(array, index);
}
|
Removes the first occurrence of the specified element from the
specified array. All subsequent elements are shifted to the left
(subtracts one from their indices). If the array doesn't contain
such an element, no elements are removed from the array.
<p>
This method returns a new array with the same elements of the input
array except the first occurrence of the specified element. The component
type of the returned array is always the same as that of the input
array.
</p>
<pre>
ArrayUtils.removeElement(null, true) = null
ArrayUtils.removeElement([], true) = []
ArrayUtils.removeElement([true], false) = [true]
ArrayUtils.removeElement([true, false], false) = [true]
ArrayUtils.removeElement([true, false, true], true) = [false, true]
</pre>
@param array the input array, may be {@code null}.
@param element the element to be removed.
@return A new array containing the existing elements except the first
occurrence of the specified element.
@since 2.1
|
java
|
src/main/java/org/apache/commons/lang3/ArrayUtils.java
| 5,642
|
[
"array",
"element"
] | true
| 2
| 7.84
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
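The remove-first-occurrence contract documented above (return a new array, shift later elements left, and return an unchanged copy when the element is absent) can be sketched in Python. `remove_element` is a hypothetical stand-in for the Java method, working on lists instead of primitive arrays:

```python
def remove_element(array, element):
    """Return a new list without the first occurrence of element.

    Mirrors the ArrayUtils.removeElement contract: the input is never
    mutated, and a copy is returned even when the element is absent.
    """
    if array is None:
        return None
    try:
        index = array.index(element)
    except ValueError:
        return list(array)  # element not found: return an unchanged copy
    # Elements after index shift left by one in the result
    return array[:index] + array[index + 1:]
```

This reproduces the examples from the Javadoc, e.g. `remove_element([True, False, True], True)` yields `[False, True]`.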
peekLast
|
public @Nullable E peekLast() {
return isEmpty() ? null : elementData(getMaxElementIndex());
}
|
Retrieves, but does not remove, the greatest element of this queue, or returns {@code null} if
the queue is empty.
|
java
|
android/guava/src/com/google/common/collect/MinMaxPriorityQueue.java
| 389
|
[] |
E
| true
| 2
| 6.96
|
google/guava
| 51,352
|
javadoc
| false
|
whenHasUnescapedPath
|
default ValueProcessor<T> whenHasUnescapedPath(String path) {
return whenHasPath((candidate) -> candidate.toString(false).equals(path));
}
|
Return a new processor from this one that only applies to members with the
given path (ignoring escape characters).
@param path the path to match
@return a new {@link ValueProcessor} that only applies when the path matches
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
| 990
|
[
"path"
] | true
| 1
| 6.48
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
delete_bucket_tagging
|
def delete_bucket_tagging(self, bucket_name: str | None = None) -> None:
"""
Delete all tags from a bucket.
.. seealso::
- :external+boto3:py:meth:`S3.Client.delete_bucket_tagging`
:param bucket_name: The name of the bucket.
:return: None
"""
s3_client = self.get_conn()
s3_client.delete_bucket_tagging(Bucket=bucket_name)
|
Delete all tags from a bucket.
.. seealso::
- :external+boto3:py:meth:`S3.Client.delete_bucket_tagging`
:param bucket_name: The name of the bucket.
:return: None
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/s3.py
| 1,708
|
[
"self",
"bucket_name"
] |
None
| true
| 1
| 6.24
|
apache/airflow
| 43,597
|
sphinx
| false
|
generate_inverse_formula
|
def generate_inverse_formula(
expr: sympy.Expr, var: sympy.Symbol
) -> Optional[sympy.Expr]:
"""
Analyze an expression to see if it matches a specific invertible pattern that we
know how to reverse.
We're looking for expressions that are sums of terms where each term extracts a
distinct bounded range from the input variable, like:
y = c₀*a₀ + c₁*a₁ + c₂*a₂ + ... + cₙ*aₙ
where each aᵢ must be one of these specific patterns:
- ModularIndexing(var, divisor, modulo)
- FloorDiv(ModularIndexing(var, 1, modulo), divisor)
- FloorDiv(var, divisor)
- var (the variable itself)
The key pattern we need is:
- Coefficients are strictly decreasing: c₀ > c₁ > c₂ > ... > cₙ
- Each coefficient matches the product of ranges of later terms (mixed-radix property)
- Each term extracts a bounded range, creating non-overlapping "slots"
If we find this pattern, we can generate the reconstruction transformation that
decomposes the variable and rebuilds it using the correct multipliers.
EXAMPLE:
Input: 100*((p//100)) + 10*((p%100)//10) + (p%10)
Returns the reconstruction expression:
remainder₀ = p
component₀ = remainder₀ // 100 # hundreds digit
remainder₁ = remainder₀ % 100
component₁ = remainder₁ // 10 # tens digit
remainder₂ = remainder₁ % 10
component₂ = remainder₂ # ones digit
result = component₀*100 + component₁*10 + component₂*1
This decomposes p into its components and rebuilds it using the original
multipliers, which should equal the input expression.
Args:
expr: Expression to analyze (sum of terms with ModularIndexing, FloorDiv, etc.)
var: The variable being decomposed
Returns:
None if not invertible, or the reconstruction expression
References:
Mixed-radix systems: https://en.wikipedia.org/wiki/Mixed_radix
"""
# Step 1: Parse all terms
terms = parse_terms(expr, var)
if not terms:
return None
# Step 2: Sort by coefficient (descending)
coeffs = [t.coefficient for t in terms]
idxs = reversed(argsort_sym(V.graph.sizevars.shape_env, coeffs))
terms = [terms[i] for i in idxs]
# Step 3: Check invertibility conditions
if not check_invertibility(terms):
return None
return generate_reconstruction_expr(terms, var)
|
Analyze an expression to see if it matches a specific invertible pattern that we
know how to reverse.
We're looking for expressions that are sums of terms where each term extracts a
distinct bounded range from the input variable, like:
y = c₀*a₀ + c₁*a₁ + c₂*a₂ + ... + cₙ*aₙ
where each aᵢ must be one of these specific patterns:
- ModularIndexing(var, divisor, modulo)
- FloorDiv(ModularIndexing(var, 1, modulo), divisor)
- FloorDiv(var, divisor)
- var (the variable itself)
The key pattern we need is:
- Coefficients are strictly decreasing: c₀ > c₁ > c₂ > ... > cₙ
- Each coefficient matches the product of ranges of later terms (mixed-radix property)
- Each term extracts a bounded range, creating non-overlapping "slots"
If we find this pattern, we can generate the reconstruction transformation that
decomposes the variable and rebuilds it using the correct multipliers.
EXAMPLE:
Input: 100*((p//100)) + 10*((p%100)//10) + (p%10)
Returns the reconstruction expression:
remainder₀ = p
component₀ = remainder₀ // 100 # hundreds digit
remainder₁ = remainder₀ % 100
component₁ = remainder₁ // 10 # tens digit
remainder₂ = remainder₁ % 10
component₂ = remainder₂ # ones digit
result = component₀*100 + component₁*10 + component₂*1
This decomposes p into its components and rebuilds it using the original
multipliers, which should equal the input expression.
Args:
expr: Expression to analyze (sum of terms with ModularIndexing, FloorDiv, etc.)
var: The variable being decomposed
Returns:
None if not invertible, or the reconstruction expression
References:
Mixed-radix systems: https://en.wikipedia.org/wiki/Mixed_radix
|
python
|
torch/_inductor/invert_expr_analysis.py
| 24
|
[
"expr",
"var"
] |
Optional[sympy.Expr]
| true
| 3
| 7.84
|
pytorch/pytorch
| 96,034
|
google
| false
|
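The mixed-radix reconstruction the docstring walks through can be checked numerically without sympy. This sketch hard-codes the EXAMPLE's divisors (100, 10, 1) and verifies that decomposing `p` and rebuilding it with the original multipliers is the identity, which is exactly what makes the expression invertible:

```python
def reconstruct(p):
    # Mirrors the EXAMPLE in the docstring:
    # 100*(p//100) + 10*((p%100)//10) + (p%10)
    remainder0 = p
    component0 = remainder0 // 100   # hundreds digit
    remainder1 = remainder0 % 100
    component1 = remainder1 // 10    # tens digit
    remainder2 = remainder1 % 10
    component2 = remainder2          # ones digit
    return component0 * 100 + component1 * 10 + component2 * 1

# The coefficients (100, 10, 1) are strictly decreasing and each equals the
# product of the ranges of the later terms, so reconstruction is exact.
assert all(reconstruct(p) == p for p in range(1000))
```

The general analysis in `generate_inverse_formula` discovers these divisors and multipliers symbolically rather than assuming them.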
_cg
|
def _cg(fhess_p, fgrad, maxiter, tol, verbose=0):
"""
Solve iteratively the linear system 'fhess_p . xsupi = fgrad'
with a conjugate gradient descent.
Parameters
----------
fhess_p : callable
Function that takes the gradient as a parameter and returns the
matrix product of the Hessian and gradient.
fgrad : ndarray of shape (n_features,) or (n_features + 1,)
Gradient vector.
maxiter : int
Number of CG iterations.
tol : float
Stopping criterion.
Returns
-------
xsupi : ndarray of shape (n_features,) or (n_features + 1,)
Estimated solution.
"""
eps = 16 * np.finfo(np.float64).eps
xsupi = np.zeros(len(fgrad), dtype=fgrad.dtype)
ri = np.copy(fgrad) # residual = fgrad - fhess_p @ xsupi
psupi = -ri
i = 0
dri0 = np.dot(ri, ri)
# We also keep track of |p_i|^2.
psupi_norm2 = dri0
is_verbose = verbose >= 2
while i <= maxiter:
if np.sum(np.abs(ri)) <= tol:
if is_verbose:
print(
f" Inner CG solver iteration {i} stopped with\n"
f" sum(|residuals|) <= tol: {np.sum(np.abs(ri))} <= {tol}"
)
break
Ap = fhess_p(psupi)
# check curvature
curv = np.dot(psupi, Ap)
if 0 <= curv <= eps * psupi_norm2:
# See https://arxiv.org/abs/1803.02924, Algo 1 Capped Conjugate Gradient.
if is_verbose:
print(
f" Inner CG solver iteration {i} stopped with\n"
f" tiny_|p| = eps * ||p||^2, eps = {eps}, "
f"squared L2 norm ||p||^2 = {psupi_norm2}\n"
f" curvature <= tiny_|p|: {curv} <= {eps * psupi_norm2}"
)
break
elif curv < 0:
if i > 0:
if is_verbose:
print(
f" Inner CG solver iteration {i} stopped with negative "
f"curvature, curvature = {curv}"
)
break
else:
# fall back to steepest descent direction
xsupi += dri0 / curv * psupi
if is_verbose:
print(" Inner CG solver iteration 0 fell back to steepest descent")
break
alphai = dri0 / curv
xsupi += alphai * psupi
ri += alphai * Ap
dri1 = np.dot(ri, ri)
betai = dri1 / dri0
psupi = -ri + betai * psupi
# We use |p_i|^2 = |r_i|^2 + beta_i^2 |p_{i-1}|^2
psupi_norm2 = dri1 + betai**2 * psupi_norm2
i = i + 1
dri0 = dri1 # update np.dot(ri,ri) for next time.
if is_verbose and i > maxiter:
print(
f" Inner CG solver stopped reaching maxiter={i - 1} with "
f"sum(|residuals|) = {np.sum(np.abs(ri))}"
)
return xsupi
|
Solve iteratively the linear system 'fhess_p . xsupi = fgrad'
with a conjugate gradient descent.
Parameters
----------
fhess_p : callable
Function that takes the gradient as a parameter and returns the
matrix product of the Hessian and gradient.
fgrad : ndarray of shape (n_features,) or (n_features + 1,)
Gradient vector.
maxiter : int
Number of CG iterations.
tol : float
Stopping criterion.
Returns
-------
xsupi : ndarray of shape (n_features,) or (n_features + 1,)
Estimated solution.
|
python
|
sklearn/utils/optimize.py
| 113
|
[
"fhess_p",
"fgrad",
"maxiter",
"tol",
"verbose"
] | false
| 13
| 6
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
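For comparison with `_cg` above, here is a plain textbook conjugate gradient for `A @ x = b` with the matrix supplied as a `matvec` callable. It deliberately omits the curvature capping and verbose reporting the sklearn helper adds, so it is a sketch of the core iteration only:

```python
import numpy as np

def cg(matvec, b, maxiter=100, tol=1e-10):
    """Minimal conjugate gradient solving matvec(x) = b for SPD systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(maxiter):
        if np.sum(np.abs(r)) <= tol:
            break
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD example: CG converges in at most n iterations in exact arithmetic.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(lambda v: A @ v, b)
```

The curvature checks in `_cg` matter in the Newton-CG setting, where the Hessian-vector product may not be positive definite.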
_use_flex_flash_attention_backward
|
def _use_flex_flash_attention_backward(
fw_subgraph: Subgraph,
mask_graph: Subgraph,
backend: Literal["AUTO", "TRITON", "FLASH", "TRITON_DECODE"],
joint_outputs: Optional[Any] = None,
score_mod_other_buffers: Optional[Sequence[TensorBox]] = None,
) -> bool:
"""Determine if we should use flex flash attention for the given inputs.
Args:
fw_subgraph: The forward score modification subgraph
mask_graph: The mask modification subgraph
backend: Implementation selector (AUTO, TRITON, FLASH, TRITON_DECODE)
joint_outputs: Processed joint outputs (for PR1 constraint checking)
score_mod_other_buffers: Additional buffers used by score_mod
Returns:
True if flash attention should be used, False otherwise
"""
# Flash is experimental and must be explicitly requested
if backend != "FLASH":
return False
can_use, reason = _can_use_flex_flash_attention_backward(
fw_subgraph,
mask_graph,
joint_outputs,
score_mod_other_buffers,
)
if not can_use:
raise RuntimeError(
f"BACKEND='FLASH' but flash attention cannot be used: {reason}"
)
return True
|
Determine if we should use flex flash attention for the given inputs.
Args:
fw_subgraph: The forward score modification subgraph
mask_graph: The mask modification subgraph
backend: Implementation selector (AUTO, TRITON, FLASH, TRITON_DECODE)
joint_outputs: Processed joint outputs (for PR1 constraint checking)
score_mod_other_buffers: Additional buffers used by score_mod
Returns:
True if flash attention should be used, False otherwise
|
python
|
torch/_inductor/kernel/flex/flex_flash_attention.py
| 383
|
[
"fw_subgraph",
"mask_graph",
"backend",
"joint_outputs",
"score_mod_other_buffers"
] |
bool
| true
| 3
| 7.44
|
pytorch/pytorch
| 96,034
|
google
| false
|
lenientParsing
|
@GwtIncompatible // To be supported
@CanIgnoreReturnValue
CacheBuilder<K, V> lenientParsing() {
strictParsing = false;
return this;
}
|
Enables lenient parsing. Useful for tests and spec parsing.
@return this {@code CacheBuilder} instance (for chaining)
|
java
|
android/guava/src/com/google/common/cache/CacheBuilder.java
| 352
|
[] | true
| 1
| 6.4
|
google/guava
| 51,352
|
javadoc
| false
|
|
setValue
|
@Override
public R setValue(final R value) {
final R result = getRight();
setRight(value);
return result;
}
|
Sets the {@code Map.Entry} value.
This sets the right element of the pair.
@param value the right value to set, not null.
@return the old value for the right element.
|
java
|
src/main/java/org/apache/commons/lang3/tuple/MutablePair.java
| 186
|
[
"value"
] |
R
| true
| 1
| 7.04
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
all
|
public KafkaFuture<Void> all() {
final KafkaFutureImpl<Void> result = new KafkaFutureImpl<>();
this.future.whenComplete((topicPartitions, throwable) -> {
if (throwable != null) {
result.completeExceptionally(throwable);
} else {
for (TopicPartition partition : partitions) {
if (maybeCompleteExceptionally(topicPartitions, partition, result)) {
return;
}
}
result.complete(null);
}
});
return result;
}
|
Returns a future that succeeds only if all the deletions succeed;
otherwise it fails with the first partition error encountered.
|
java
|
clients/src/main/java/org/apache/kafka/clients/admin/DeleteConsumerGroupOffsetsResult.java
| 63
|
[] | true
| 3
| 7.04
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
formatParameters
|
private static String formatParameters(Map<String, String> params) {
String joined = params.entrySet().stream().map(e -> e.getKey() + "=" + e.getValue()).collect(Collectors.joining(";"));
return joined.isEmpty() ? "" : ";" + joined;
}
|
Formats the given parameters as {@code key=value} pairs joined by {@code ;},
prefixed with a leading {@code ;} when at least one parameter is present.
@param params the parameters to format
@return the formatted parameter string, or the empty string if there are no parameters
|
java
|
libs/x-content/src/main/java/org/elasticsearch/xcontent/ParsedMediaType.java
| 166
|
[
"params"
] |
String
| true
| 2
| 7.52
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
read_all_dags
|
def read_all_dags(cls, session: Session = NEW_SESSION) -> dict[str, SerializedDAG]:
"""
Read all DAGs in serialized_dag table.
:param session: ORM Session
:returns: a dict of DAGs read from database
"""
latest_serialized_dag_subquery = (
select(cls.dag_id, func.max(cls.created_at).label("max_created")).group_by(cls.dag_id).subquery()
)
serialized_dags = session.scalars(
select(cls).join(
latest_serialized_dag_subquery,
(cls.dag_id == latest_serialized_dag_subquery.c.dag_id)
and (cls.created_at == latest_serialized_dag_subquery.c.max_created),
)
)
dags = {}
for row in serialized_dags:
log.debug("Deserializing DAG: %s", row.dag_id)
dag = row.dag
# Coherence check
if dag.dag_id == row.dag_id:
dags[row.dag_id] = dag
else:
log.warning(
"dag_id Mismatch in DB: Row with dag_id '%s' has Serialised DAG with '%s' dag_id",
row.dag_id,
dag.dag_id,
)
return dags
|
Read all DAGs in serialized_dag table.
:param session: ORM Session
:returns: a dict of DAGs read from database
|
python
|
airflow-core/src/airflow/models/serialized_dag.py
| 523
|
[
"cls",
"session"
] |
dict[str, SerializedDAG]
| true
| 5
| 8.4
|
apache/airflow
| 43,597
|
sphinx
| false
|
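The join above implements the common "latest row per group" pattern. Note that the snippet combines its two join conditions with Python's `and`, which does not build a SQL AND; the intended SQLAlchemy operator is `&` or `sqlalchemy.and_`. The selection logic itself can be sketched in-memory, with rows as hypothetical `(dag_id, created_at, payload)` tuples:

```python
def read_all_latest(rows):
    """Keep only the most recently created payload per dag_id.

    Mirrors the max(created_at)-per-dag_id subquery join in read_all_dags,
    but over plain tuples instead of ORM rows.
    """
    latest = {}
    for dag_id, created_at, payload in rows:
        # Replace the stored entry only if this row is newer
        if dag_id not in latest or created_at > latest[dag_id][0]:
            latest[dag_id] = (created_at, payload)
    return {dag_id: payload for dag_id, (_, payload) in latest.items()}
```

In SQL this single pass corresponds to grouping on `dag_id`, taking `max(created_at)`, and joining back to fetch the matching row.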
connectionDelay
|
public long connectionDelay(String id, long now) {
NodeConnectionState state = nodeState.get(id);
if (state == null) return 0;
if (state.state == ConnectionState.CONNECTING) {
return connectionSetupTimeoutMs(id);
} else if (state.state.isDisconnected()) {
long timeWaited = now - state.lastConnectAttemptMs;
return Math.max(state.reconnectBackoffMs - timeWaited, 0);
} else {
// When connected, we should be able to delay indefinitely since other events (connection or
// data acked) will cause a wakeup once data can be sent.
return Long.MAX_VALUE;
}
}
|
Returns the number of milliseconds to wait, based on the connection state, before attempting to send data. When
disconnected, this respects the reconnect backoff time. When connecting, it returns a delay based on the connection
setup timeout. When connected, it waits indefinitely (i.e. until a wakeup).
@param id the connection to check
@param now the current time in ms
@return the delay in milliseconds
|
java
|
clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
| 105
|
[
"id",
"now"
] | true
| 4
| 7.2
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
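The three-way delay computation in `connectionDelay` can be sketched in Python. The dict shape, state names, and the `connect_timeout_ms` default here are hypothetical illustrations, not Kafka's actual types:

```python
LONG_MAX = 2**63 - 1  # stands in for Java's Long.MAX_VALUE

def connection_delay(state, now, *, connect_timeout_ms=10_000):
    """Delay in ms before attempting to send, given a node's connection state.

    state is a hypothetical dict with keys 'state', 'last_attempt_ms' and
    'reconnect_backoff_ms'; an unknown node (None) may connect immediately.
    """
    if state is None:
        return 0
    if state["state"] == "CONNECTING":
        return connect_timeout_ms
    if state["state"] == "DISCONNECTED":
        # Respect the remaining reconnect backoff, never going negative
        waited = now - state["last_attempt_ms"]
        return max(state["reconnect_backoff_ms"] - waited, 0)
    # Connected: wait indefinitely; other events will trigger a wakeup.
    return LONG_MAX
```

The "wait indefinitely when connected" branch works because sends are driven by other wakeups (connection events, acked data), not by this timer.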
charAt
|
@Override
public char charAt(final int index) {
if (index < 0 || index >= length()) {
throw new StringIndexOutOfBoundsException(index);
}
return buffer[index];
}
|
Gets the character at the specified index.
@see #setCharAt(int, char)
@see #deleteCharAt(int)
@param index the index to retrieve, must be valid
@return the character at the index
@throws IndexOutOfBoundsException if the index is invalid
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 1,587
|
[
"index"
] | true
| 3
| 7.76
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
getBeansWithAnnotation
|
@Override
public Map<String, Object> getBeansWithAnnotation(Class<? extends Annotation> annotationType)
throws BeansException {
Map<String, Object> results = new LinkedHashMap<>();
for (String beanName : this.beans.keySet()) {
if (findAnnotationOnBean(beanName, annotationType) != null) {
results.put(beanName, getBean(beanName));
}
}
return results;
}
|
Find all beans whose {@code Class} has the supplied {@link Annotation} type.
@param annotationType the type of annotation to look for
@return a Map with the matching beans, containing the bean names as
keys and the corresponding bean instances as values
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/StaticListableBeanFactory.java
| 452
|
[
"annotationType"
] | true
| 2
| 6.88
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
processAssignmentReceived
|
private void processAssignmentReceived(Map<String, SortedSet<Integer>> activeTasks,
Map<String, SortedSet<Integer>> standbyTasks,
Map<String, SortedSet<Integer>> warmupTasks,
boolean isGroupReady) {
replaceTargetAssignmentWithNewAssignment(activeTasks, standbyTasks, warmupTasks, isGroupReady);
if (!targetAssignmentReconciled()) {
transitionTo(MemberState.RECONCILING);
} else {
log.debug("Target assignment {} received from the broker is equals to the member " +
"current assignment {}. Nothing to reconcile.",
targetAssignment, currentAssignment);
if (state == MemberState.RECONCILING || state == MemberState.JOINING) {
transitionTo(MemberState.STABLE);
}
}
}
|
This will process the assignment received if it is different from the member's current
assignment. If a new assignment is received, this will make sure reconciliation is attempted
on the next call of `poll`. If another reconciliation is currently in process, the first `poll`
after that reconciliation will trigger the new reconciliation.
@param activeTasks Target active tasks assignment received from the broker.
@param standbyTasks Target standby tasks assignment received from the broker.
@param warmupTasks Target warm-up tasks assignment received from the broker.
@param isGroupReady True if the group is ready, false otherwise.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsMembershipManager.java
| 988
|
[
"activeTasks",
"standbyTasks",
"warmupTasks",
"isGroupReady"
] |
void
| true
| 4
| 6.72
|
apache/kafka
| 31,560
|
javadoc
| false
|
inspectorWaitForDebugger
|
function inspectorWaitForDebugger() {
if (!waitForDebugger())
throw new ERR_INSPECTOR_NOT_ACTIVE();
}
|
Blocks until a client (existing or connected later)
has sent the `Runtime.runIfWaitingForDebugger`
command.
@returns {void}
|
javascript
|
lib/inspector.js
| 204
|
[] | false
| 2
| 6.4
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
add_prefix
|
def add_prefix(self, prefix: str, axis: Axis | None = None) -> Self:
"""
Prefix labels with string `prefix`.
For Series, the row labels are prefixed.
For DataFrame, the column labels are prefixed.
Parameters
----------
prefix : str
The string to add before each label.
axis : {0 or 'index', 1 or 'columns', None}, default None
Axis to add prefix on
.. versionadded:: 2.0.0
Returns
-------
Series or DataFrame
New Series or DataFrame with updated labels.
See Also
--------
Series.add_suffix: Suffix row labels with string `suffix`.
DataFrame.add_suffix: Suffix column labels with string `suffix`.
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_prefix("item_")
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_prefix("col_")
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6
"""
f = lambda x: f"{prefix}{x}"
axis_name = self._info_axis_name
if axis is not None:
axis_name = self._get_axis_name(axis)
mapper = {axis_name: f}
# error: Keywords must be strings
# error: No overload variant of "_rename" of "NDFrame" matches
# argument type "dict[Literal['index', 'columns'], Callable[[Any], str]]"
return self._rename(**mapper) # type: ignore[call-overload, misc]
|
Prefix labels with string `prefix`.
For Series, the row labels are prefixed.
For DataFrame, the column labels are prefixed.
Parameters
----------
prefix : str
The string to add before each label.
axis : {0 or 'index', 1 or 'columns', None}, default None
Axis to add prefix on
.. versionadded:: 2.0.0
Returns
-------
Series or DataFrame
New Series or DataFrame with updated labels.
See Also
--------
Series.add_suffix: Suffix row labels with string `suffix`.
DataFrame.add_suffix: Suffix column labels with string `suffix`.
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_prefix("item_")
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
>>> df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [3, 4, 5, 6]})
>>> df
A B
0 1 3
1 2 4
2 3 5
3 4 6
>>> df.add_prefix("col_")
col_A col_B
0 1 3
1 2 4
2 3 5
3 4 6
|
python
|
pandas/core/generic.py
| 4,702
|
[
"self",
"prefix",
"axis"
] |
Self
| true
| 2
| 8.56
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
assertEndNotReached
|
private void assertEndNotReached() {
if (hasNextValue == false) {
throw new IllegalStateException("Iterator has no more buckets");
}
}
|
Asserts that this iterator has not been exhausted.
@throws IllegalStateException if there are no more buckets
|
java
|
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ScaleAdjustingBucketIterator.java
| 85
|
[] |
void
| true
| 2
| 6.56
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
polyline
|
def polyline(off, scl):
"""
Returns an array representing a linear polynomial.
Parameters
----------
off, scl : scalars
The "y-intercept" and "slope" of the line, respectively.
Returns
-------
y : ndarray
This module's representation of the linear polynomial ``off +
scl*x``.
See Also
--------
numpy.polynomial.chebyshev.chebline
numpy.polynomial.legendre.legline
numpy.polynomial.laguerre.lagline
numpy.polynomial.hermite.hermline
numpy.polynomial.hermite_e.hermeline
Examples
--------
>>> from numpy.polynomial import polynomial as P
>>> P.polyline(1, -1)
array([ 1, -1])
>>> P.polyval(1, P.polyline(1, -1)) # should be 0
0.0
"""
if scl != 0:
return np.array([off, scl])
else:
return np.array([off])
|
Returns an array representing a linear polynomial.
Parameters
----------
off, scl : scalars
The "y-intercept" and "slope" of the line, respectively.
Returns
-------
y : ndarray
This module's representation of the linear polynomial ``off +
scl*x``.
See Also
--------
numpy.polynomial.chebyshev.chebline
numpy.polynomial.legendre.legline
numpy.polynomial.laguerre.lagline
numpy.polynomial.hermite.hermline
numpy.polynomial.hermite_e.hermeline
Examples
--------
>>> from numpy.polynomial import polynomial as P
>>> P.polyline(1, -1)
array([ 1, -1])
>>> P.polyval(1, P.polyline(1, -1)) # should be 0
0.0
|
python
|
numpy/polynomial/polynomial.py
| 113
|
[
"off",
"scl"
] | false
| 3
| 6.88
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
tile
|
def tile(A, reps):
"""
Construct an array by repeating A the number of times given by reps.
If `reps` has length ``d``, the result will have dimension of
``max(d, A.ndim)``.
If ``A.ndim < d``, `A` is promoted to be d-dimensional by prepending new
axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication,
or shape (1, 1, 3) for 3-D replication. If this is not the desired
behavior, promote `A` to d-dimensions manually before calling this
function.
If ``A.ndim > d``, `reps` is promoted to `A`.ndim by prepending 1's to it.
Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as
(1, 1, 2, 2).
Note : Although tile may be used for broadcasting, it is strongly
recommended to use numpy's broadcasting operations and functions.
Parameters
----------
A : array_like
The input array.
reps : array_like
The number of repetitions of `A` along each axis.
Returns
-------
c : ndarray
The tiled output array.
See Also
--------
repeat : Repeat elements of an array.
broadcast_to : Broadcast an array to a new shape
Examples
--------
>>> import numpy as np
>>> a = np.array([0, 1, 2])
>>> np.tile(a, 2)
array([0, 1, 2, 0, 1, 2])
>>> np.tile(a, (2, 2))
array([[0, 1, 2, 0, 1, 2],
[0, 1, 2, 0, 1, 2]])
>>> np.tile(a, (2, 1, 2))
array([[[0, 1, 2, 0, 1, 2]],
[[0, 1, 2, 0, 1, 2]]])
>>> b = np.array([[1, 2], [3, 4]])
>>> np.tile(b, 2)
array([[1, 2, 1, 2],
[3, 4, 3, 4]])
>>> np.tile(b, (2, 1))
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]])
>>> c = np.array([1,2,3,4])
>>> np.tile(c,(4,1))
array([[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4]])
"""
try:
tup = tuple(reps)
except TypeError:
tup = (reps,)
d = len(tup)
if all(x == 1 for x in tup) and isinstance(A, _nx.ndarray):
# Fixes the problem that the function does not make a copy if A is a
# numpy array and the repetitions are 1 in all dimensions
return _nx.array(A, copy=True, subok=True, ndmin=d)
else:
# Note that no copy of zero-sized arrays is made. However since they
# have no data there is no risk of an inadvertent overwrite.
c = _nx.array(A, copy=None, subok=True, ndmin=d)
if (d < c.ndim):
tup = (1,) * (c.ndim - d) + tup
shape_out = tuple(s * t for s, t in zip(c.shape, tup))
n = c.size
if n > 0:
for dim_in, nrep in zip(c.shape, tup):
if nrep != 1:
c = c.reshape(-1, n).repeat(nrep, 0)
n //= dim_in
return c.reshape(shape_out)
|
Construct an array by repeating A the number of times given by reps.
If `reps` has length ``d``, the result will have dimension of
``max(d, A.ndim)``.
If ``A.ndim < d``, `A` is promoted to be d-dimensional by prepending new
axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication,
or shape (1, 1, 3) for 3-D replication. If this is not the desired
behavior, promote `A` to d-dimensions manually before calling this
function.
If ``A.ndim > d``, `reps` is promoted to `A`.ndim by prepending 1's to it.
Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as
(1, 1, 2, 2).
Note : Although tile may be used for broadcasting, it is strongly
recommended to use numpy's broadcasting operations and functions.
Parameters
----------
A : array_like
The input array.
reps : array_like
The number of repetitions of `A` along each axis.
Returns
-------
c : ndarray
The tiled output array.
See Also
--------
repeat : Repeat elements of an array.
broadcast_to : Broadcast an array to a new shape
Examples
--------
>>> import numpy as np
>>> a = np.array([0, 1, 2])
>>> np.tile(a, 2)
array([0, 1, 2, 0, 1, 2])
>>> np.tile(a, (2, 2))
array([[0, 1, 2, 0, 1, 2],
[0, 1, 2, 0, 1, 2]])
>>> np.tile(a, (2, 1, 2))
array([[[0, 1, 2, 0, 1, 2]],
[[0, 1, 2, 0, 1, 2]]])
>>> b = np.array([[1, 2], [3, 4]])
>>> np.tile(b, 2)
array([[1, 2, 1, 2],
[3, 4, 3, 4]])
>>> np.tile(b, (2, 1))
array([[1, 2],
[3, 4],
[1, 2],
[3, 4]])
>>> c = np.array([1,2,3,4])
>>> np.tile(c,(4,1))
array([[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4]])
|
python
|
numpy/lib/_shape_base_impl.py
| 1,158
|
[
"A",
"reps"
] | false
| 8
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
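The ndim/reps promotion rules in the `tile` docstring can be verified directly: the output shape is the elementwise product of the promoted input shape and the promoted `reps`:

```python
import numpy as np

a = np.array([0, 1, 2])                     # shape (3,)
assert np.tile(a, 2).shape == (6,)          # scalar reps: same ndim
assert np.tile(a, (2, 2)).shape == (2, 6)   # a promoted to (1, 3) -> (2*1, 2*3)
assert np.tile(a, (2, 1, 2)).shape == (2, 1, 6)

b = np.ones((2, 3, 4, 5))
# reps (2, 2) is promoted to (1, 1, 2, 2), as the docstring describes
assert np.tile(b, (2, 2)).shape == (2, 3, 8, 10)
```

As the docstring notes, prefer broadcasting over `tile` when the repeated values are only consumed elementwise.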
findAdvisorsThatCanApply
|
public static List<Advisor> findAdvisorsThatCanApply(List<Advisor> candidateAdvisors, Class<?> clazz) {
if (candidateAdvisors.isEmpty()) {
return candidateAdvisors;
}
List<Advisor> eligibleAdvisors = new ArrayList<>();
for (Advisor candidate : candidateAdvisors) {
if (candidate instanceof IntroductionAdvisor && canApply(candidate, clazz)) {
eligibleAdvisors.add(candidate);
}
}
boolean hasIntroductions = !eligibleAdvisors.isEmpty();
for (Advisor candidate : candidateAdvisors) {
if (candidate instanceof IntroductionAdvisor) {
// already processed
continue;
}
if (canApply(candidate, clazz, hasIntroductions)) {
eligibleAdvisors.add(candidate);
}
}
return eligibleAdvisors;
}
|
Determine the sublist of the {@code candidateAdvisors} list
that is applicable to the given class.
@param candidateAdvisors the Advisors to evaluate
@param clazz the target class
@return sublist of Advisors that can apply to an object of the given class
(may be the incoming List as-is)
|
java
|
spring-aop/src/main/java/org/springframework/aop/support/AopUtils.java
| 319
|
[
"candidateAdvisors",
"clazz"
] | true
| 6
| 7.92
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
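The two-pass filtering in `findAdvisorsThatCanApply` (introduction advisors first, then the remaining advisors evaluated with knowledge of whether any introductions matched) can be sketched in Python. The `(name, is_introduction)` tuple shape and the `can_apply(candidate, has_introductions)` signature are hypothetical simplifications of the Spring API:

```python
def find_applicable(candidates, can_apply):
    """Two-pass filter mirroring AopUtils.findAdvisorsThatCanApply.

    Pass 1 collects applicable introduction advisors; pass 2 evaluates the
    rest, telling can_apply whether any introductions were found.
    """
    if not candidates:
        return candidates
    eligible = [c for c in candidates if c[1] and can_apply(c, False)]
    has_introductions = bool(eligible)
    for c in candidates:
        if c[1]:
            continue  # introduction advisors already processed in pass 1
        if can_apply(c, has_introductions):
            eligible.append(c)
    return eligible
```

The two passes matter because whether an introduction applies can change which methods later advisors see on the proxy.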
replaceParameters
|
private String replaceParameters(String message, Locale locale) {
return replaceParameters(message, locale, new LinkedHashSet<>(4));
}
|
Recursively replaces all message parameters.
<p>
The message parameter prefix <code>{</code> and suffix <code>}</code> can
be escaped using {@code \}, e.g. <code>\{escaped\}</code>.
@param message the message containing the parameters to be replaced
@param locale the locale to use when resolving replacements
@return the message with parameters replaced
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/validation/MessageSourceMessageInterpolator.java
| 75
|
[
"message",
"locale"
] |
String
| true
| 1
| 6.48
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
transformCatch
|
function transformCatch(node: PromiseReturningCallExpression<"then" | "catch">, onRejected: Expression | undefined, transformer: Transformer, hasContinuation: boolean, continuationArgName?: SynthBindingName): readonly Statement[] {
if (!onRejected || isNullOrUndefined(transformer, onRejected)) {
// Ignore this call as it has no effect on the result
return transformExpression(/* returnContextNode */ node, node.expression.expression, transformer, hasContinuation, continuationArgName);
}
const inputArgName = getArgBindingName(onRejected, transformer);
const possibleNameForVarDecl = getPossibleNameForVarDecl(node, transformer, continuationArgName);
// Transform the left-hand-side of `.then`/`.catch` into an array of inlined statements. We pass `true` for hasContinuation as `node` is the outer continuation.
const inlinedLeftHandSide = transformExpression(/*returnContextNode*/ node, node.expression.expression, transformer, /*hasContinuation*/ true, possibleNameForVarDecl);
if (hasFailed()) return silentFail(); // shortcut out of more work
// Transform the callback argument into an array of inlined statements. We pass whether we have an outer continuation here
// as that indicates whether `return` is valid.
const inlinedCallback = transformCallbackArgument(onRejected, hasContinuation, possibleNameForVarDecl, inputArgName, node, transformer);
if (hasFailed()) return silentFail(); // shortcut out of more work
const tryBlock = factory.createBlock(inlinedLeftHandSide);
const catchClause = factory.createCatchClause(inputArgName && getSynthesizedDeepClone(declareSynthBindingName(inputArgName)), factory.createBlock(inlinedCallback));
const tryStatement = factory.createTryStatement(tryBlock, catchClause, /*finallyBlock*/ undefined);
return finishCatchOrFinallyTransform(node, transformer, tryStatement, possibleNameForVarDecl, continuationArgName);
}
|
@param hasContinuation Whether another `then`, `catch`, or `finally` continuation follows this continuation.
@param continuationArgName The argument name for the continuation that follows this call.
|
typescript
|
src/services/codefixes/convertToAsyncFunction.ts
| 521
|
[
"node",
"onRejected",
"transformer",
"hasContinuation",
"continuationArgName?"
] | true
| 6
| 6.08
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
declareLongOrNull
|
@UpdateForV10(owner = UpdateForV10.Owner.CORE_INFRA) // https://github.com/elastic/elasticsearch/issues/130797
public void declareLongOrNull(BiConsumer<Value, Long> consumer, long nullValue, ParseField field) {
// Using a method reference here angers some compilers
declareField(
consumer,
p -> p.currentToken() == XContentParser.Token.VALUE_NULL ? nullValue : p.longValue(),
field,
ValueType.LONG_OR_NULL
);
}
|
Declare a long field that parses explicit {@code null}s in the json to a default value.
|
java
|
libs/x-content/src/main/java/org/elasticsearch/xcontent/AbstractObjectParser.java
| 240
|
[
"consumer",
"nullValue",
"field"
] |
void
| true
| 2
| 6
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
hasDashedElement
|
boolean hasDashedElement() {
Boolean hasDashedElement = this.hasDashedElement;
if (hasDashedElement != null) {
return hasDashedElement;
}
for (int i = 0; i < getNumberOfElements(); i++) {
if (getElement(i, Form.DASHED).indexOf('-') != -1) {
this.hasDashedElement = true;
return true;
}
}
this.hasDashedElement = false;
return false;
}
|
Returns {@code true} if any element of this name contains a dash when
rendered in its {@link Form#DASHED} form. The result is computed lazily
and cached on first use.
@return {@code true} if this name has a dashed element
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java
| 607
|
[] | true
| 4
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
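The `hasDashedElement` record above uses a scan-once, cache-forever pattern. A pure-Python sketch of the same idea follows; the class and element representation are illustrative inventions, not the actual Spring Boot API.

```python
class DottedName:
    """Minimal sketch of the lazy caching pattern in hasDashedElement."""

    def __init__(self, elements):
        self._elements = list(elements)
        self._has_dashed = None  # cached result, computed on first call

    def has_dashed_element(self):
        # Return the cached answer if the scan has already run.
        if self._has_dashed is not None:
            return self._has_dashed
        # Otherwise scan every element once and memoize the outcome.
        self._has_dashed = any("-" in element for element in self._elements)
        return self._has_dashed


name = DottedName(["spring", "main", "banner-mode"])
print(name.has_dashed_element())
```

The cache field starts as `None` (the Java original uses a nullable `Boolean` the same way), so "not yet computed" is distinguishable from a computed `False`.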
bind
|
def bind(self, sizes: Sequence[int]) -> None:
"""
Bind this DimList to specific sizes.
Args:
sizes: Sequence of sizes for each dimension
Raises:
ValueError: If sizes is not a sequence
"""
if not hasattr(sizes, "__len__") or not hasattr(sizes, "__getitem__"):
raise ValueError("expected a sequence")
size = len(sizes)
self.bind_len(size)
for i, dim_size in enumerate(sizes):
self._dims[i].size = int(dim_size)
|
Bind this DimList to specific sizes.
Args:
sizes: Sequence of sizes for each dimension
Raises:
ValueError: If sizes is not a sequence
|
python
|
functorch/dim/__init__.py
| 211
|
[
"self",
"sizes"
] |
None
| true
| 4
| 6.56
|
pytorch/pytorch
| 96,034
|
google
| false
|
of
|
@Contract("!null -> !null")
public static @Nullable PemContent of(@Nullable String text) {
return (text != null) ? new PemContent(text) : null;
}
|
Return a new {@link PemContent} instance containing the given text.
@param text the text containing PEM encoded content
@return a new {@link PemContent} instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/ssl/pem/PemContent.java
| 162
|
[
"text"
] |
PemContent
| true
| 2
| 7.68
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
get
|
public static @Nullable ConfigurationPropertiesBean get(ApplicationContext applicationContext, Object bean,
String beanName) {
Method factoryMethod = findFactoryMethod(applicationContext, beanName);
Bindable<Object> bindTarget = createBindTarget(bean, bean.getClass(), factoryMethod);
if (bindTarget == null) {
return null;
}
bindTarget = bindTarget.withBindMethod(BindMethodAttribute.get(applicationContext, beanName));
if (bindTarget.getBindMethod() == null && factoryMethod != null) {
bindTarget = bindTarget.withBindMethod(JAVA_BEAN_BIND_METHOD);
}
if (bindTarget.getBindMethod() != VALUE_OBJECT_BIND_METHOD) {
bindTarget = bindTarget.withExistingValue(bean);
}
return create(beanName, bean, bindTarget);
}
|
Return a {@link ConfigurationPropertiesBean @ConfigurationPropertiesBean} instance
for the given bean details or {@code null} if the bean is not a
{@link ConfigurationProperties @ConfigurationProperties} object. Annotations are
considered both on the bean itself, as well as any factory method (for example a
{@link Bean @Bean} method).
@param applicationContext the source application context
@param bean the bean to consider
@param beanName the bean name
@return a configuration properties bean or {@code null} if the neither the bean nor
factory method are annotated with
{@link ConfigurationProperties @ConfigurationProperties}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/ConfigurationPropertiesBean.java
| 203
|
[
"applicationContext",
"bean",
"beanName"
] |
ConfigurationPropertiesBean
| true
| 5
| 7.28
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
onApplicationEvent
|
@Override
public void onApplicationEvent(ApplicationEvent event) {
if (event.getSource() == this.source) {
onApplicationEventInternal(event);
}
}
|
Processes the given event only if it originated from the configured
source object, delegating to {@link #onApplicationEventInternal}.
@param event the event to filter and potentially process
|
java
|
spring-context/src/main/java/org/springframework/context/event/SourceFilteringListener.java
| 70
|
[
"event"
] |
void
| true
| 2
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
autograd_not_implemented_inner
|
def autograd_not_implemented_inner(
operator: OperatorBase, delayed_error: bool, *args: Any, **kwargs: Any
) -> Any:
"""If autograd is enabled and any of the arguments require grad this will either
raise an error or return a DelayedError depending on the value of delayed.
Args:
operator: The Operator to call with the *args and **kwargs with
op_name: The name of the Operator
delayed_error: If True, return a DelayedError instead of raising an error
args: The flattened operands to the Operator
kwargs: The keyword arguments to the Operator
Raises:
RuntimeError: If autograd is enabled and any of the arguments to the Operator
"""
with torch._C._AutoDispatchBelowAutograd():
result = operator(*args, **kwargs)
flat_operands = pytree.arg_tree_leaves(*args)
if torch.is_grad_enabled() and any(
f.requires_grad for f in flat_operands if isinstance(f, torch.Tensor)
):
if delayed_error:
err_fn = torch._C._functions.DelayedError(
f"Autograd not implemented for {str(operator)}",
1,
)
def fake_requires_grad(tensor):
if torch.is_floating_point(tensor) or torch.is_complex(tensor):
tensor = tensor.detach()
tensor.requires_grad = True
return tensor
return pytree.tree_map_only(
torch.Tensor, lambda x: err_fn(fake_requires_grad(x)), result
)
else:
raise RuntimeError(f"Autograd not implemented for {str(operator)}")
return result
|
If autograd is enabled and any of the arguments require grad this will either
raise an error or return a DelayedError depending on the value of delayed.
Args:
operator: The Operator to call with the *args and **kwargs with
op_name: The name of the Operator
delayed_error: If True, return a DelayedError instead of raising an error
args: The flattened operands to the Operator
kwargs: The keyword arguments to the Operator
Raises:
RuntimeError: If autograd is enabled and any of the arguments to the Operator
|
python
|
torch/_higher_order_ops/utils.py
| 36
|
[
"operator",
"delayed_error"
] |
Any
| true
| 7
| 6.4
|
pytorch/pytorch
| 96,034
|
google
| false
|
evaluate
|
def evaluate(op, left_op, right_op, use_numexpr: bool = True):
"""
Evaluate and return the expression of the op on left_op and right_op.
Parameters
----------
op : the actual operand
left_op : left operand
right_op : right operand
use_numexpr : bool, default True
Whether to try to use numexpr.
"""
op_str = _op_str_mapping[op]
if op_str is not None:
if use_numexpr:
# error: "None" not callable
return _evaluate(op, op_str, left_op, right_op) # type: ignore[misc]
return _evaluate_standard(op, op_str, left_op, right_op)
|
Evaluate and return the expression of the op on left_op and right_op.
Parameters
----------
op : the actual operand
left_op : left operand
right_op : right operand
use_numexpr : bool, default True
Whether to try to use numexpr.
|
python
|
pandas/core/computation/expressions.py
| 227
|
[
"op",
"left_op",
"right_op",
"use_numexpr"
] | true
| 3
| 6.56
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
|
doConvertTextValue
|
private Object doConvertTextValue(@Nullable Object oldValue, String newTextValue, PropertyEditor editor) {
try {
editor.setValue(oldValue);
}
catch (Exception ex) {
if (logger.isDebugEnabled()) {
logger.debug("PropertyEditor [" + editor.getClass().getName() + "] does not support setValue call", ex);
}
// Swallow and proceed.
}
editor.setAsText(newTextValue);
return editor.getValue();
}
|
Convert the given text value using the given property editor.
@param oldValue the previous value, if available (may be {@code null})
@param newTextValue the proposed text value
@param editor the PropertyEditor to use
@return the converted value
|
java
|
spring-beans/src/main/java/org/springframework/beans/TypeConverterDelegate.java
| 424
|
[
"oldValue",
"newTextValue",
"editor"
] |
Object
| true
| 3
| 7.76
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
joinGroupEpoch
|
abstract int joinGroupEpoch();
|
Returns the epoch a member uses to join the group. This is group-type-specific.
@return the epoch to join the group
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractMembershipManager.java
| 1,291
|
[] | true
| 1
| 6.8
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
startCodePath
|
function startCodePath(origin) {
if (codePath) {
// Emits onCodePathSegmentStart events if updated.
forwardCurrentToHead(analyzer, node);
}
// Create the code path of this scope.
codePath = analyzer.codePath = new CodePath({
id: analyzer.idGenerator.next(),
origin,
upper: codePath,
onLooped: analyzer.onLooped,
});
state = CodePath.getState(codePath);
// Emits onCodePathStart events.
analyzer.emitter.emit('onCodePathStart', codePath, node);
}
|
Creates a new code path and triggers the onCodePathStart event
based on the currently selected node.
@param {string} origin The reason the code path was started.
@returns {void}
|
javascript
|
packages/eslint-plugin-react-hooks/src/code-path-analysis/code-path-analyzer.js
| 391
|
[
"origin"
] | false
| 2
| 6.08
|
facebook/react
| 241,750
|
jsdoc
| false
|
|
reset
|
public synchronized void reset() throws IOException {
try {
close();
} finally {
if (memory == null) {
memory = new MemoryOutput();
} else {
memory.reset();
}
out = memory;
if (file != null) {
File deleteMe = file;
file = null;
if (!deleteMe.delete()) {
throw new IOException("Could not delete: " + deleteMe);
}
}
}
}
|
Calls {@link #close} if not already closed, and then resets this object back to its initial
state, for reuse. If data was buffered to a file, it will be deleted.
@throws IOException if an I/O error occurred while deleting the file buffer
|
java
|
android/guava/src/com/google/common/io/FileBackedOutputStream.java
| 181
|
[] |
void
| true
| 4
| 7.04
|
google/guava
| 51,352
|
javadoc
| false
|
to
|
public void to(Consumer<? super T> consumer) {
Assert.notNull(consumer, "'consumer' must not be null");
T value = getValue();
if (value != null && test(value)) {
consumer.accept(value);
}
}
|
Complete the mapping by passing any non-filtered value to the specified
consumer. The method is designed to be used with mutable objects.
@param consumer the consumer that should accept the value if it has not been
filtered
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/PropertyMapper.java
| 288
|
[
"consumer"
] |
void
| true
| 3
| 6.88
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
leaderFor
|
public Node leaderFor(TopicPartition topicPartition) {
PartitionInfo info = partitionsByTopicPartition.get(topicPartition);
if (info == null)
return null;
else
return info.leader();
}
|
Get the current leader for the given topic-partition
@param topicPartition The topic and partition we want to know the leader for
@return The node that is the leader for this topic-partition, or null if there is currently no leader
|
java
|
clients/src/main/java/org/apache/kafka/common/Cluster.java
| 272
|
[
"topicPartition"
] |
Node
| true
| 2
| 7.12
|
apache/kafka
| 31,560
|
javadoc
| false
|
identity
|
static <E extends Throwable> FailableDoubleUnaryOperator<E> identity() {
return t -> t;
}
|
Returns a unary operator that always returns its input argument.
@param <E> The kind of thrown exception or error.
@return a unary operator that always returns its input argument
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableDoubleUnaryOperator.java
| 41
|
[] | true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
toString
|
@Override
public String toString() {
return "ConsumerRecord(topic = " + topic
+ ", partition = " + partition
+ ", leaderEpoch = " + leaderEpoch.orElse(null)
+ ", offset = " + offset
+ ", " + timestampType + " = " + timestamp
+ ", deliveryCount = " + deliveryCount.orElse(null)
+ ", serialized key size = " + serializedKeySize
+ ", serialized value size = " + serializedValueSize
+ ", headers = " + headers
+ ", key = " + key
+ ", value = " + value + ")";
}
|
Returns a string representation of this record, including topic,
partition, leader epoch, offset, timestamp, delivery count, serialized
key and value sizes, headers, key and value.
@return the string representation
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java
| 260
|
[] |
String
| true
| 1
| 6.88
|
apache/kafka
| 31,560
|
javadoc
| false
|
ensureCapacity
|
public static boolean[] ensureCapacity(boolean[] array, int minLength, int padding) {
checkArgument(minLength >= 0, "Invalid minLength: %s", minLength);
checkArgument(padding >= 0, "Invalid padding: %s", padding);
return (array.length < minLength) ? Arrays.copyOf(array, minLength + padding) : array;
}
|
Returns an array containing the same values as {@code array}, but guaranteed to be of a
specified minimum length. If {@code array} already has a length of at least {@code minLength},
it is returned directly. Otherwise, a new array of size {@code minLength + padding} is
returned, containing the values of {@code array}, and zeroes in the remaining places.
@param array the source array
@param minLength the minimum length the returned array must guarantee
@param padding an extra amount to "grow" the array by if growth is necessary
@throws IllegalArgumentException if {@code minLength} or {@code padding} is negative
@return an array containing the values of {@code array}, with guaranteed minimum length {@code
minLength}
|
java
|
android/guava/src/com/google/common/primitives/Booleans.java
| 271
|
[
"array",
"minLength",
"padding"
] | true
| 2
| 7.92
|
google/guava
| 51,352
|
javadoc
| false
|
|
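The grow-with-padding behaviour documented for `ensureCapacity` above can be sketched in pure Python. The function name is transliterated for illustration; this is not the Guava API itself, and Python lists stand in for the primitive boolean array.

```python
def ensure_capacity(array, min_length, padding):
    """Sketch of Booleans.ensureCapacity semantics on a Python list."""
    if min_length < 0:
        raise ValueError(f"Invalid minLength: {min_length}")
    if padding < 0:
        raise ValueError(f"Invalid padding: {padding}")
    if len(array) >= min_length:
        return array  # already long enough: return the input unchanged
    # Grow to min_length + padding, zero-filling (False) the new slots.
    return array + [False] * (min_length + padding - len(array))


print(ensure_capacity([True, False], 2, 3))  # input returned as-is
print(ensure_capacity([True], 3, 2))         # grown to length 5
```

Note that, like the Guava original, the padding is only applied when growth is actually necessary.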
toArray
|
public static Uuid[] toArray(List<Uuid> list) {
if (list == null) return null;
Uuid[] array = new Uuid[list.size()];
for (int i = 0; i < list.size(); i++) {
array[i] = list.get(i);
}
return array;
}
|
Convert a list of Uuid to an array of Uuid.
@param list The input list
@return The output array
|
java
|
clients/src/main/java/org/apache/kafka/common/Uuid.java
| 175
|
[
"list"
] | true
| 3
| 7.76
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
zero_one_loss
|
def zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None):
"""Zero-one classification loss.
If normalize is ``True``, returns the fraction of misclassifications, else returns
the number of misclassifications. The best performance is 0.
Read more in the :ref:`User Guide <zero_one_loss>`.
Parameters
----------
y_true : 1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels. Sparse matrix is only supported when
labels are of :term:`multilabel` type.
y_pred : 1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier. Sparse matrix is only
supported when labels are of :term:`multilabel` type.
normalize : bool, default=True
If ``False``, return the number of misclassifications.
Otherwise, return the fraction of misclassifications.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
-------
loss : float
If ``normalize == True``, returns the fraction of misclassifications, else
returns the number of misclassifications.
See Also
--------
accuracy_score : Compute the accuracy score. By default, the function will
return the fraction of correct predictions divided by the total number
of predictions.
hamming_loss : Compute the average Hamming loss or Hamming distance between
two sets of samples.
jaccard_score : Compute the Jaccard similarity coefficient score.
Notes
-----
In multilabel classification, the zero_one_loss function corresponds to
the subset zero-one loss: for each sample, the entire set of labels must be
correctly predicted, otherwise the loss for that sample is equal to one.
Examples
--------
>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1.0
In the multilabel case with binary label indicators:
>>> import numpy as np
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
"""
xp, _ = get_namespace(y_true, y_pred)
score = accuracy_score(
y_true, y_pred, normalize=normalize, sample_weight=sample_weight
)
if normalize:
return 1 - score
else:
if sample_weight is not None:
n_samples = xp.sum(sample_weight)
else:
n_samples = _num_samples(y_true)
return n_samples - score
|
Zero-one classification loss.
If normalize is ``True``, returns the fraction of misclassifications, else returns
the number of misclassifications. The best performance is 0.
Read more in the :ref:`User Guide <zero_one_loss>`.
Parameters
----------
y_true : 1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels. Sparse matrix is only supported when
labels are of :term:`multilabel` type.
y_pred : 1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier. Sparse matrix is only
supported when labels are of :term:`multilabel` type.
normalize : bool, default=True
If ``False``, return the number of misclassifications.
Otherwise, return the fraction of misclassifications.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
Returns
-------
loss : float
If ``normalize == True``, returns the fraction of misclassifications, else
returns the number of misclassifications.
See Also
--------
accuracy_score : Compute the accuracy score. By default, the function will
return the fraction of correct predictions divided by the total number
of predictions.
hamming_loss : Compute the average Hamming loss or Hamming distance between
two sets of samples.
jaccard_score : Compute the Jaccard similarity coefficient score.
Notes
-----
In multilabel classification, the zero_one_loss function corresponds to
the subset zero-one loss: for each sample, the entire set of labels must be
correctly predicted, otherwise the loss for that sample is equal to one.
Examples
--------
>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1.0
In the multilabel case with binary label indicators:
>>> import numpy as np
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
|
python
|
sklearn/metrics/_classification.py
| 1,305
|
[
"y_true",
"y_pred",
"normalize",
"sample_weight"
] | false
| 5
| 6.96
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
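The normalize/count split in the `zero_one_loss` record above reduces to a few lines for flat label sequences. This sketch omits the multilabel and array-namespace handling of the real scikit-learn function; the helper name is my own.

```python
def zero_one_loss_sketch(y_true, y_pred, normalize=True, sample_weight=None):
    """Zero-one loss for flat label sequences (no multilabel support)."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    # Sum the weights of the misclassified samples.
    wrong = sum(w for t, p, w in zip(y_true, y_pred, sample_weight) if t != p)
    if normalize:
        return wrong / sum(sample_weight)  # fraction misclassified
    return wrong  # (weighted) count of misclassifications


print(zero_one_loss_sketch([2, 2, 3, 4], [1, 2, 3, 4]))                  # 0.25
print(zero_one_loss_sketch([2, 2, 3, 4], [1, 2, 3, 4], normalize=False))  # 1.0
```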
ediff1d
|
def ediff1d(ary, to_end=None, to_begin=None):
"""
The differences between consecutive elements of an array.
Parameters
----------
ary : array_like
If necessary, will be flattened before the differences are taken.
to_end : array_like, optional
Number(s) to append at the end of the returned differences.
to_begin : array_like, optional
Number(s) to prepend at the beginning of the returned differences.
Returns
-------
ediff1d : ndarray
The differences. Loosely, this is ``ary.flat[1:] - ary.flat[:-1]``.
See Also
--------
diff, gradient
Notes
-----
When applied to masked arrays, this function drops the mask information
if the `to_begin` and/or `to_end` parameters are used.
Examples
--------
>>> import numpy as np
>>> x = np.array([1, 2, 4, 7, 0])
>>> np.ediff1d(x)
array([ 1, 2, 3, -7])
>>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99]))
array([-99, 1, 2, ..., -7, 88, 99])
The returned array is always 1D.
>>> y = [[1, 2, 4], [1, 6, 24]]
>>> np.ediff1d(y)
array([ 1, 2, -3, 5, 18])
"""
conv = _array_converter(ary)
# Convert to (any) array and ravel:
ary = conv[0].ravel()
# enforce that the dtype of `ary` is used for the output
dtype_req = ary.dtype
# fast track default case
if to_begin is None and to_end is None:
return ary[1:] - ary[:-1]
if to_begin is None:
l_begin = 0
else:
to_begin = np.asanyarray(to_begin)
if not np.can_cast(to_begin, dtype_req, casting="same_kind"):
raise TypeError("dtype of `to_begin` must be compatible "
"with input `ary` under the `same_kind` rule.")
to_begin = to_begin.ravel()
l_begin = len(to_begin)
if to_end is None:
l_end = 0
else:
to_end = np.asanyarray(to_end)
if not np.can_cast(to_end, dtype_req, casting="same_kind"):
raise TypeError("dtype of `to_end` must be compatible "
"with input `ary` under the `same_kind` rule.")
to_end = to_end.ravel()
l_end = len(to_end)
# do the calculation in place and copy to_begin and to_end
l_diff = max(len(ary) - 1, 0)
result = np.empty_like(ary, shape=l_diff + l_begin + l_end)
if l_begin > 0:
result[:l_begin] = to_begin
if l_end > 0:
result[l_begin + l_diff:] = to_end
np.subtract(ary[1:], ary[:-1], result[l_begin:l_begin + l_diff])
return conv.wrap(result)
|
The differences between consecutive elements of an array.
Parameters
----------
ary : array_like
If necessary, will be flattened before the differences are taken.
to_end : array_like, optional
Number(s) to append at the end of the returned differences.
to_begin : array_like, optional
Number(s) to prepend at the beginning of the returned differences.
Returns
-------
ediff1d : ndarray
The differences. Loosely, this is ``ary.flat[1:] - ary.flat[:-1]``.
See Also
--------
diff, gradient
Notes
-----
When applied to masked arrays, this function drops the mask information
if the `to_begin` and/or `to_end` parameters are used.
Examples
--------
>>> import numpy as np
>>> x = np.array([1, 2, 4, 7, 0])
>>> np.ediff1d(x)
array([ 1, 2, 3, -7])
>>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99]))
array([-99, 1, 2, ..., -7, 88, 99])
The returned array is always 1D.
>>> y = [[1, 2, 4], [1, 6, 24]]
>>> np.ediff1d(y)
array([ 1, 2, -3, 5, 18])
|
python
|
numpy/lib/_arraysetops_impl.py
| 41
|
[
"ary",
"to_end",
"to_begin"
] | false
| 11
| 7.6
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
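The core of `ediff1d` above (consecutive differences, with optional prepended and appended values) is easy to mirror in pure Python. This sketch works on flat number lists only; the real NumPy function also flattens nested input and enforces dtype compatibility.

```python
def ediff1d_sketch(ary, to_end=None, to_begin=None):
    """Pure-Python sketch of np.ediff1d for flat number lists."""
    # Pairwise differences: ary[1:] - ary[:-1].
    diffs = [b - a for a, b in zip(ary, ary[1:])]
    prefix = list(to_begin) if to_begin is not None else []
    suffix = list(to_end) if to_end is not None else []
    return prefix + diffs + suffix


print(ediff1d_sketch([1, 2, 4, 7, 0]))                                  # [1, 2, 3, -7]
print(ediff1d_sketch([1, 2, 4, 7, 0], to_begin=[-99], to_end=[88, 99]))
```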
get_link
|
def get_link(
self,
operator: BaseOperator,
*,
ti_key: TaskInstanceKey,
) -> str:
"""
Link to Amazon Web Services Console.
:param operator: airflow operator
:param ti_key: TaskInstance ID to return link for
:return: link to external system
"""
conf = XCom.get_value(key=self.key, ti_key=ti_key)
return self.format_link(**conf) if conf else ""
|
Link to Amazon Web Services Console.
:param operator: airflow operator
:param ti_key: TaskInstance ID to return link for
:return: link to external system
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/links/base_aws.py
| 65
|
[
"self",
"operator",
"ti_key"
] |
str
| true
| 2
| 7.76
|
apache/airflow
| 43,597
|
sphinx
| false
|
unauthorizedTopics
|
public Set<String> unauthorizedTopics() {
return unauthorizedTopics;
}
|
Get the set of topics which failed authorization. May be empty if the set is not known
in the context the exception was raised in.
@return possibly empty set of unauthorized topics
|
java
|
clients/src/main/java/org/apache/kafka/common/errors/TopicAuthorizationException.java
| 44
|
[] | true
| 1
| 6.96
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
lagval3d
|
def lagval3d(x, y, z, c):
"""
Evaluate a 3-D Laguerre series at points (x, y, z).
This function returns the values:
.. math:: p(x,y,z) = \\sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)
The parameters `x`, `y`, and `z` are converted to arrays only if
they are tuples or lists, otherwise they are treated as scalars and
they must have the same shape after conversion. In either case, either
`x`, `y`, and `z` or their elements must support multiplication and
addition both with themselves and with the elements of `c`.
If `c` has fewer than 3 dimensions, ones are implicitly appended to its
shape to make it 3-D. The shape of the result will be c.shape[3:] +
x.shape.
Parameters
----------
x, y, z : array_like, compatible object
The three dimensional series is evaluated at the points
``(x, y, z)``, where `x`, `y`, and `z` must have the same shape. If
any of `x`, `y`, or `z` is a list or tuple, it is first converted
to an ndarray, otherwise it is left unchanged and if it isn't an
ndarray it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term of
multi-degree i,j,k is contained in ``c[i,j,k]``. If `c` has dimension
greater than 3 the remaining indices enumerate multiple sets of
coefficients.
Returns
-------
values : ndarray, compatible object
The values of the multidimensional polynomial on points formed with
triples of corresponding values from `x`, `y`, and `z`.
See Also
--------
lagval, lagval2d, laggrid2d, laggrid3d
Examples
--------
>>> from numpy.polynomial.laguerre import lagval3d
>>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> lagval3d(1, 1, 2, c)
-1.0
"""
return pu._valnd(lagval, c, x, y, z)
|
Evaluate a 3-D Laguerre series at points (x, y, z).
This function returns the values:
.. math:: p(x,y,z) = \\sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)
The parameters `x`, `y`, and `z` are converted to arrays only if
they are tuples or a lists, otherwise they are treated as a scalars and
they must have the same shape after conversion. In either case, either
`x`, `y`, and `z` or their elements must support multiplication and
addition both with themselves and with the elements of `c`.
If `c` has fewer than 3 dimensions, ones are implicitly appended to its
shape to make it 3-D. The shape of the result will be c.shape[3:] +
x.shape.
Parameters
----------
x, y, z : array_like, compatible object
The three dimensional series is evaluated at the points
``(x, y, z)``, where `x`, `y`, and `z` must have the same shape. If
any of `x`, `y`, or `z` is a list or tuple, it is first converted
to an ndarray, otherwise it is left unchanged and if it isn't an
ndarray it is treated as a scalar.
c : array_like
Array of coefficients ordered so that the coefficient of the term of
multi-degree i,j,k is contained in ``c[i,j,k]``. If `c` has dimension
greater than 3 the remaining indices enumerate multiple sets of
coefficients.
Returns
-------
values : ndarray, compatible object
The values of the multidimensional polynomial on points formed with
triples of corresponding values from `x`, `y`, and `z`.
See Also
--------
lagval, lagval2d, laggrid2d, laggrid3d
Examples
--------
>>> from numpy.polynomial.laguerre import lagval3d
>>> c = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> lagval3d(1, 1, 2, c)
-1.0
|
python
|
numpy/polynomial/laguerre.py
| 995
|
[
"x",
"y",
"z",
"c"
] | false
| 1
| 6.32
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
unstructured_to_structured
|
def unstructured_to_structured(arr, dtype=None, names=None, align=False,
copy=False, casting='unsafe'):
"""
Converts an n-D unstructured array into an (n-1)-D structured array.
The last dimension of the input array is converted into a structure, with
number of field-elements equal to the size of the last dimension of the
input array. By default all output fields have the input array's dtype, but
an output structured dtype with an equal number of field-elements can be
supplied instead.
Nested fields, as well as each element of any subarray fields, all count
towards the number of field-elements.
Parameters
----------
arr : ndarray
Unstructured array or dtype to convert.
dtype : dtype, optional
The structured dtype of the output array
names : list of strings, optional
If dtype is not supplied, this specifies the field names for the output
dtype, in order. The field dtypes will be the same as the input array.
align : boolean, optional
Whether to create an aligned memory layout.
copy : bool, optional
See copy argument to `numpy.ndarray.astype`. If true, always return a
copy. If false, and `dtype` requirements are satisfied, a view is
returned.
casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
See casting argument of `numpy.ndarray.astype`. Controls what kind of
data casting may occur.
Returns
-------
structured : ndarray
Structured array with fewer dimensions.
Examples
--------
>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> dt = np.dtype([('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)])
>>> a = np.arange(20).reshape((4,5))
>>> a
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
>>> rfn.unstructured_to_structured(a, dt)
array([( 0, ( 1., 2), [ 3., 4.]), ( 5, ( 6., 7), [ 8., 9.]),
(10, (11., 12), [13., 14.]), (15, (16., 17), [18., 19.])],
dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))])
""" # noqa: E501
if arr.shape == ():
raise ValueError('arr must have at least one dimension')
n_elem = arr.shape[-1]
if n_elem == 0:
# too many bugs elsewhere for this to work now
raise NotImplementedError("last axis with size 0 is not supported")
if dtype is None:
if names is None:
names = [f'f{n}' for n in range(n_elem)]
out_dtype = np.dtype([(n, arr.dtype) for n in names], align=align)
fields = _get_fields_and_offsets(out_dtype)
dts, counts, offsets = zip(*fields)
else:
if names is not None:
raise ValueError("don't supply both dtype and names")
# if dtype is the args of np.dtype, construct it
dtype = np.dtype(dtype)
# sanity check of the input dtype
fields = _get_fields_and_offsets(dtype)
if len(fields) == 0:
dts, counts, offsets = [], [], []
else:
dts, counts, offsets = zip(*fields)
if n_elem != sum(counts):
raise ValueError('The length of the last dimension of arr must '
'be equal to the number of fields in dtype')
out_dtype = dtype
if align and not out_dtype.isalignedstruct:
raise ValueError("align was True but dtype is not aligned")
names = [f'f{n}' for n in range(len(fields))]
# Use a series of views and casts to convert to a structured array:
# first view as a packed structured array of one dtype
packed_fields = np.dtype({'names': names,
'formats': [(arr.dtype, dt.shape) for dt in dts]})
arr = np.ascontiguousarray(arr).view(packed_fields)
# next cast to an unpacked but flattened format with varied dtypes
flattened_fields = np.dtype({'names': names,
'formats': dts,
'offsets': offsets,
'itemsize': out_dtype.itemsize})
arr = arr.astype(flattened_fields, copy=copy, casting=casting)
# finally view as the final nested dtype and remove the last axis
return arr.view(out_dtype)[..., 0]
|
Converts an n-D unstructured array into an (n-1)-D structured array.
The last dimension of the input array is converted into a structure, with
number of field-elements equal to the size of the last dimension of the
input array. By default all output fields have the input array's dtype, but
an output structured dtype with an equal number of field-elements can be
supplied instead.
Nested fields, as well as each element of any subarray fields, all count
towards the number of field-elements.
Parameters
----------
arr : ndarray
Unstructured array or dtype to convert.
dtype : dtype, optional
The structured dtype of the output array
names : list of strings, optional
If dtype is not supplied, this specifies the field names for the output
dtype, in order. The field dtypes will be the same as the input array.
align : boolean, optional
Whether to create an aligned memory layout.
copy : bool, optional
See copy argument to `numpy.ndarray.astype`. If true, always return a
copy. If false, and `dtype` requirements are satisfied, a view is
returned.
casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
See casting argument of `numpy.ndarray.astype`. Controls what kind of
data casting may occur.
Returns
-------
structured : ndarray
Structured array with fewer dimensions.
Examples
--------
>>> import numpy as np
>>> from numpy.lib import recfunctions as rfn
>>> dt = np.dtype([('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)])
>>> a = np.arange(20).reshape((4,5))
>>> a
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
>>> rfn.unstructured_to_structured(a, dt)
array([( 0, ( 1., 2), [ 3., 4.]), ( 5, ( 6., 7), [ 8., 9.]),
(10, (11., 12), [13., 14.]), (15, (16., 17), [18., 19.])],
dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))])
|
python
|
numpy/lib/recfunctions.py
| 1,075
|
[
"arr",
"dtype",
"names",
"align",
"copy",
"casting"
] | false
| 12
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
get_import_mappings
|
def get_import_mappings(tree) -> dict[str, str]:
"""
Retrieve a mapping of local import names to their fully qualified module paths from an AST tree.
:param tree: The AST tree to analyze for import statements.
:return: A dictionary where the keys are the local names (aliases) used in the current module
and the values are the fully qualified names of the imported modules or their members.
Example:
>>> import ast
>>> code = '''
... import os
... import numpy as np
... from collections import defaultdict
... from datetime import datetime as dt
... '''
>>> get_import_mappings(ast.parse(code))
{'os': 'os', 'np': 'numpy', 'defaultdict': 'collections.defaultdict', 'dt': 'datetime.datetime'}
"""
imports = {}
for node in ast.walk(tree):
if isinstance(node, (ast.Import, ast.ImportFrom)):
for alias in node.names:
module_prefix = f"{node.module}." if hasattr(node, "module") and node.module else ""
imports[alias.asname or alias.name] = f"{module_prefix}{alias.name}"
return imports
|
Retrieve a mapping of local import names to their fully qualified module paths from an AST tree.
:param tree: The AST tree to analyze for import statements.
:return: A dictionary where the keys are the local names (aliases) used in the current module
and the values are the fully qualified names of the imported modules or their members.
Example:
>>> import ast
>>> code = '''
... import os
... import numpy as np
... from collections import defaultdict
... from datetime import datetime as dt
... '''
>>> get_import_mappings(ast.parse(code))
{'os': 'os', 'np': 'numpy', 'defaultdict': 'collections.defaultdict', 'dt': 'datetime.datetime'}
|
python
|
devel-common/src/sphinx_exts/providers_extensions.py
| 121
|
[
"tree"
] |
dict[str, str]
| true
| 7
| 9.52
|
apache/airflow
| 43,597
|
sphinx
| false
|
onApplicationEvent
|
@Override
public void onApplicationEvent(SpringApplicationEvent event) {
if (this.triggerEventType.isInstance(event) && created.compareAndSet(false, true)) {
try {
writePidFile(event);
}
catch (Exception ex) {
String message = String.format("Cannot create pid file %s", this.file);
if (failOnWriteError(event)) {
throw new IllegalStateException(message, ex);
}
logger.warn(message, ex);
}
}
}
|
Writes the PID file (at most once) when an event of the configured trigger type
is received. If writing fails, either throws an {@link IllegalStateException} or
logs a warning, depending on whether failure on write errors is enabled for the
triggering event.
@param event the application event
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/ApplicationPidFileWriter.java
| 136
|
[
"event"
] |
void
| true
| 5
| 6.4
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
prepareEnvironment
|
private ConfigurableEnvironment prepareEnvironment(SpringApplicationRunListeners listeners,
DefaultBootstrapContext bootstrapContext, ApplicationArguments applicationArguments) {
// Create and configure the environment
ConfigurableEnvironment environment = getOrCreateEnvironment();
configureEnvironment(environment, applicationArguments.getSourceArgs());
ConfigurationPropertySources.attach(environment);
listeners.environmentPrepared(bootstrapContext, environment);
ApplicationInfoPropertySource.moveToEnd(environment);
DefaultPropertiesPropertySource.moveToEnd(environment);
Assert.state(!environment.containsProperty("spring.main.environment-prefix"),
"Environment prefix cannot be set via properties.");
bindToSpringApplication(environment);
if (!this.isCustomEnvironment) {
EnvironmentConverter environmentConverter = new EnvironmentConverter(getClassLoader());
environment = environmentConverter.convertEnvironmentIfNecessary(environment, deduceEnvironmentClass());
}
ConfigurationPropertySources.attach(environment);
return environment;
}
|
Create and configure the application's {@link ConfigurableEnvironment}, attach
configuration property sources, notify the run listeners, and convert the
environment to the deduced type if a custom environment was not supplied.
@param listeners the run listeners
@param bootstrapContext the bootstrap context
@param applicationArguments the application arguments
@return the prepared environment
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/SpringApplication.java
| 350
|
[
"listeners",
"bootstrapContext",
"applicationArguments"
] |
ConfigurableEnvironment
| true
| 2
| 7.44
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
wrapperChain
|
function wrapperChain() {
return chain(this);
}
|
Creates a `lodash` wrapper instance with explicit method chain sequences enabled.
@name chain
@memberOf _
@since 0.1.0
@category Seq
@returns {Object} Returns the new `lodash` wrapper instance.
@example
var users = [
{ 'user': 'barney', 'age': 36 },
{ 'user': 'fred', 'age': 40 }
];
// A sequence without explicit chaining.
_(users).head();
// => { 'user': 'barney', 'age': 36 }
// A sequence with explicit chaining.
_(users)
.chain()
.head()
.pick('user')
.value();
// => { 'user': 'barney' }
|
javascript
|
lodash.js
| 8,968
|
[] | false
| 1
| 7.44
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
setCharAt
|
public StrBuilder setCharAt(final int index, final char ch) {
if (index < 0 || index >= length()) {
throw new StringIndexOutOfBoundsException(index);
}
buffer[index] = ch;
return this;
}
|
Sets the character at the specified index.
@see #charAt(int)
@see #deleteCharAt(int)
@param index the index to set
@param ch the new character
@return {@code this} instance.
@throws IndexOutOfBoundsException if the index is invalid
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 2,787
|
[
"index",
"ch"
] |
StrBuilder
| true
| 3
| 7.44
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
all
|
public KafkaFuture<Map<String, ConsumerGroupDescription>> all() {
return KafkaFuture.allOf(futures.values().toArray(new KafkaFuture<?>[0])).thenApply(
nil -> {
Map<String, ConsumerGroupDescription> descriptions = new HashMap<>(futures.size());
futures.forEach((key, future) -> {
try {
descriptions.put(key, future.get());
} catch (InterruptedException | ExecutionException e) {
// This should be unreachable, since the KafkaFuture#allOf already ensured
// that all of the futures completed successfully.
throw new RuntimeException(e);
}
});
return descriptions;
});
}
|
Return a future which yields all ConsumerGroupDescription objects, if all the describes succeed.
|
java
|
clients/src/main/java/org/apache/kafka/clients/admin/DescribeConsumerGroupsResult.java
| 48
|
[] | true
| 2
| 6.4
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
summarize_range
|
def summarize_range(self, start: int, end: int) -> T:
"""
Query a range of values in the segment tree.
Args:
start: Start index of the range to query (inclusive)
end: End index of the range to query (inclusive)
Returns:
Summary value for the range according to the summary operation
Raises:
ValueError: If start > end or indices are out of bounds
"""
if start > end:
raise ValueError("Start index must be less than or equal to end index")
if start < 0 or start >= self.n:
raise ValueError(f"Start index {start} out of bounds [0, {self.n - 1}]")
if end < 0 or end >= self.n:
raise ValueError(f"End index {end} out of bounds [0, {self.n - 1}]")
return self._query_range_helper(1, 0, self.n - 1, start, end)
|
Query a range of values in the segment tree.
Args:
start: Start index of the range to query (inclusive)
end: End index of the range to query (inclusive)
Returns:
Summary value for the range according to the summary operation
Raises:
ValueError: If start > end or indices are out of bounds
|
python
|
torch/_inductor/codegen/segmented_tree.py
| 219
|
[
"self",
"start",
"end"
] |
T
| true
| 6
| 7.76
|
pytorch/pytorch
| 96,034
|
google
| false
|
moveToIncompleteAcks
|
public boolean moveToIncompleteAcks(TopicIdPartition tip) {
Acknowledgements acks = inFlightAcknowledgements.remove(tip);
if (acks != null) {
incompleteAcknowledgements.put(tip, acks);
return true;
} else {
log.error("Invalid partition {} received in ShareAcknowledge response", tip);
return false;
}
}
|
Moves the in-flight acknowledgements for a given partition to incomplete acknowledgements to retry
in the next request.
@param tip The TopicIdPartition for which we move the acknowledgements.
@return True if the partition was sent in the request.
<p> False if the partition was not part of the request, we log an error and ignore such partitions. </p>
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareConsumeRequestManager.java
| 1,415
|
[
"tip"
] | true
| 2
| 7.92
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
_set_reflect_both
|
def _set_reflect_both(padded, axis, width_pair, method,
original_period, include_edge=False):
"""
Pad `axis` of `arr` with reflection.
Parameters
----------
padded : ndarray
Input array of arbitrary shape.
axis : int
Axis along which to pad `arr`.
width_pair : (int, int)
Pair of widths that mark the pad area on both sides in the given
dimension.
method : str
Controls method of reflection; options are 'even' or 'odd'.
original_period : int
Original length of data on `axis` of `arr`.
include_edge : bool
If true, edge value is included in reflection, otherwise the edge
value forms the symmetric axis to the reflection.
Returns
-------
pad_amt : tuple of ints, length 2
New index positions of padding to do along the `axis`. If these are
both 0, padding is done in this dimension.
"""
left_pad, right_pad = width_pair
old_length = padded.shape[axis] - right_pad - left_pad
if include_edge:
# Avoid wrapping with only a subset of the original area
# by ensuring period can only be a multiple of the original
# area's length.
old_length = old_length // original_period * original_period
# Edge is included, we need to offset the pad amount by 1
edge_offset = 1
else:
# Avoid wrapping with only a subset of the original area
# by ensuring period can only be a multiple of the original
# area's length.
old_length = ((old_length - 1) // (original_period - 1)
* (original_period - 1) + 1)
edge_offset = 0 # Edge is not included, no need to offset pad amount
old_length -= 1 # but must be omitted from the chunk
if left_pad > 0:
# Pad with reflected values on left side:
# First limit chunk size which can't be larger than pad area
chunk_length = min(old_length, left_pad)
# Slice right to left, stop on or next to edge, start relative to stop
stop = left_pad - edge_offset
start = stop + chunk_length
left_slice = _slice_at_axis(slice(start, stop, -1), axis)
left_chunk = padded[left_slice]
if method == "odd":
# Negate chunk and align with edge
edge_slice = _slice_at_axis(slice(left_pad, left_pad + 1), axis)
left_chunk = 2 * padded[edge_slice] - left_chunk
# Insert chunk into padded area
start = left_pad - chunk_length
stop = left_pad
pad_area = _slice_at_axis(slice(start, stop), axis)
padded[pad_area] = left_chunk
# Adjust pointer to left edge for next iteration
left_pad -= chunk_length
if right_pad > 0:
# Pad with reflected values on right side:
# First limit chunk size which can't be larger than pad area
chunk_length = min(old_length, right_pad)
# Slice right to left, start on or next to edge, stop relative to start
start = -right_pad + edge_offset - 2
stop = start - chunk_length
right_slice = _slice_at_axis(slice(start, stop, -1), axis)
right_chunk = padded[right_slice]
if method == "odd":
# Negate chunk and align with edge
edge_slice = _slice_at_axis(
slice(-right_pad - 1, -right_pad), axis)
right_chunk = 2 * padded[edge_slice] - right_chunk
# Insert chunk into padded area
start = padded.shape[axis] - right_pad
stop = start + chunk_length
pad_area = _slice_at_axis(slice(start, stop), axis)
padded[pad_area] = right_chunk
# Adjust pointer to right edge for next iteration
right_pad -= chunk_length
return left_pad, right_pad
|
Pad `axis` of `arr` with reflection.
Parameters
----------
padded : ndarray
Input array of arbitrary shape.
axis : int
Axis along which to pad `arr`.
width_pair : (int, int)
Pair of widths that mark the pad area on both sides in the given
dimension.
method : str
Controls method of reflection; options are 'even' or 'odd'.
original_period : int
Original length of data on `axis` of `arr`.
include_edge : bool
If true, edge value is included in reflection, otherwise the edge
value forms the symmetric axis to the reflection.
Returns
-------
pad_amt : tuple of ints, length 2
New index positions of padding to do along the `axis`. If these are
both 0, padding is done in this dimension.
|
python
|
numpy/lib/_arraypad_impl.py
| 297
|
[
"padded",
"axis",
"width_pair",
"method",
"original_period",
"include_edge"
] | false
| 7
| 6.16
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
initializingPartitions
|
public synchronized Set<TopicPartition> initializingPartitions() {
return collectPartitions(TopicPartitionState::shouldInitialize);
}
|
Collect the set of currently assigned partitions that still need their initial
fetch position to be set.
@return the set of partitions awaiting position initialization.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
| 837
|
[] | true
| 1
| 6.96
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
parseSignatureMember
|
function parseSignatureMember(kind: SyntaxKind.CallSignature | SyntaxKind.ConstructSignature): CallSignatureDeclaration | ConstructSignatureDeclaration {
const pos = getNodePos();
const hasJSDoc = hasPrecedingJSDocComment();
if (kind === SyntaxKind.ConstructSignature) {
parseExpected(SyntaxKind.NewKeyword);
}
const typeParameters = parseTypeParameters();
const parameters = parseParameters(SignatureFlags.Type);
const type = parseReturnType(SyntaxKind.ColonToken, /*isType*/ true);
parseTypeMemberSemicolon();
const node = kind === SyntaxKind.CallSignature
? factory.createCallSignature(typeParameters, parameters, type)
: factory.createConstructSignature(typeParameters, parameters, type);
return withJSDoc(finishNode(node, pos), hasJSDoc);
}
|
Parses a call signature or construct signature member of a type.
@param kind The kind of signature to parse, either SyntaxKind.CallSignature or SyntaxKind.ConstructSignature.
@returns The parsed signature declaration with any preceding JSDoc attached.
|
typescript
|
src/compiler/parser.ts
| 4,184
|
[
"kind"
] | true
| 3
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
setTimeZone
|
public static void setTimeZone(@Nullable TimeZone timeZone, boolean inheritable) {
LocaleContext localeContext = getLocaleContext();
Locale locale = (localeContext != null ? localeContext.getLocale() : null);
if (timeZone != null) {
localeContext = new SimpleTimeZoneAwareLocaleContext(locale, timeZone);
}
else if (locale != null) {
localeContext = new SimpleLocaleContext(locale);
}
else {
localeContext = null;
}
setLocaleContext(localeContext, inheritable);
}
|
Associate the given TimeZone with the current thread,
preserving any Locale that may have been set already.
<p>Will implicitly create a LocaleContext for the given Locale.
@param timeZone the current TimeZone, or {@code null} to reset
the time zone part of the thread-bound context
@param inheritable whether to expose the LocaleContext as inheritable
for child threads (using an {@link InheritableThreadLocal})
@see #setLocale(Locale, boolean)
@see SimpleTimeZoneAwareLocaleContext#SimpleTimeZoneAwareLocaleContext(Locale, TimeZone)
|
java
|
spring-context/src/main/java/org/springframework/context/i18n/LocaleContextHolder.java
| 255
|
[
"timeZone",
"inheritable"
] |
void
| true
| 4
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
analyze
|
@Nullable FailureAnalysis analyze(Throwable failure);
|
Returns an analysis of the given {@code failure}, or {@code null} if no analysis
was possible.
@param failure the failure
@return the analysis or {@code null}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/diagnostics/FailureAnalyzer.java
| 37
|
[
"failure"
] |
FailureAnalysis
| true
| 1
| 6.64
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
get_unicode_from_response
|
def get_unicode_from_response(r):
"""Returns the requested content back in unicode.
:param r: Response object to get unicode content from.
Tried:
1. charset from content-type
2. fall back and replace all unicode characters
:rtype: str
"""
warnings.warn(
(
"In requests 3.0, get_unicode_from_response will be removed. For "
"more information, please see the discussion on issue #2266. (This"
" warning should only appear once.)"
),
DeprecationWarning,
)
tried_encodings = []
# Try charset from content-type
encoding = get_encoding_from_headers(r.headers)
if encoding:
try:
return str(r.content, encoding)
except UnicodeError:
tried_encodings.append(encoding)
# Fall back:
try:
return str(r.content, encoding, errors="replace")
except TypeError:
return r.content
|
Returns the requested content back in unicode.
:param r: Response object to get unicode content from.
Tried:
1. charset from content-type
2. fall back and replace all unicode characters
:rtype: str
|
python
|
src/requests/utils.py
| 581
|
[
"r"
] | false
| 2
| 6.4
|
psf/requests
| 53,586
|
sphinx
| false
|
|
resolve
|
public void resolve(RegisteredBean registeredBean, ThrowingConsumer<AutowiredArguments> action) {
Assert.notNull(registeredBean, "'registeredBean' must not be null");
Assert.notNull(action, "'action' must not be null");
AutowiredArguments resolved = resolve(registeredBean);
if (resolved != null) {
action.accept(resolved);
}
}
|
Resolve the method arguments for the specified registered bean and
provide it to the given action.
@param registeredBean the registered bean
@param action the action to execute with the resolved method arguments
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/aot/AutowiredMethodArgumentsResolver.java
| 120
|
[
"registeredBean",
"action"
] |
void
| true
| 2
| 6.56
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
forValueObject
|
static ConfigurationPropertiesBean forValueObject(Class<?> beanType, String beanName) {
Bindable<Object> bindTarget = createBindTarget(null, beanType, null);
Assert.state(bindTarget != null && deduceBindMethod(bindTarget) == VALUE_OBJECT_BIND_METHOD,
() -> "Bean '" + beanName + "' is not a @ConfigurationProperties value object");
return create(beanName, null, bindTarget.withBindMethod(VALUE_OBJECT_BIND_METHOD));
}
|
Return a {@link ConfigurationPropertiesBean @ConfigurationPropertiesBean} instance
for the given "value object" bean details.
@param beanType the bean type
@param beanName the bean name
@return a configuration properties bean
@throws IllegalStateException if the bean is not a
{@link ConfigurationProperties @ConfigurationProperties} value object
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/ConfigurationPropertiesBean.java
| 242
|
[
"beanType",
"beanName"
] |
ConfigurationPropertiesBean
| true
| 2
| 7.28
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
nextBatchSize
|
Integer nextBatchSize() throws CorruptRecordException {
int remaining = buffer.remaining();
if (remaining < LOG_OVERHEAD)
return null;
int recordSize = buffer.getInt(buffer.position() + SIZE_OFFSET);
// V0 has the smallest overhead, stricter checking is done later
if (recordSize < LegacyRecord.RECORD_OVERHEAD_V0)
throw new CorruptRecordException(String.format("Record size %d is less than the minimum record overhead (%d)",
recordSize, LegacyRecord.RECORD_OVERHEAD_V0));
if (recordSize > maxMessageSize)
throw new CorruptRecordException(String.format("Record size %d exceeds the largest allowable message size (%d).",
recordSize, maxMessageSize));
if (remaining < HEADER_SIZE_UP_TO_MAGIC)
return null;
byte magic = buffer.get(buffer.position() + MAGIC_OFFSET);
if (magic < 0 || magic > RecordBatch.CURRENT_MAGIC_VALUE)
throw new CorruptRecordException("Invalid magic found in record: " + magic);
return recordSize + LOG_OVERHEAD;
}
|
Validates the header of the next batch and returns batch size.
@return next batch size including LOG_OVERHEAD if buffer contains header up to
magic byte, null otherwise
@throws CorruptRecordException if record size or magic is invalid
|
java
|
clients/src/main/java/org/apache/kafka/common/record/ByteBufferLogInputStream.java
| 66
|
[] |
Integer
| true
| 7
| 7.92
|
apache/kafka
| 31,560
|
javadoc
| false
|
normalizeTriggerValue
|
function normalizeTriggerValue(value: any): any {
// we use `!= null` here because it's the most simple
// way to test against a "falsy" value without mixing
// in empty strings or a zero value. DO NOT OPTIMIZE.
return value != null ? value : null;
}
|
Normalizes a trigger value, returning the value itself unless it is null or
undefined, in which case null is returned.
@param value The trigger value to normalize.
@returns The value, or null for null/undefined inputs.
|
typescript
|
packages/animations/browser/src/render/transition_animation_engine.ts
| 1,773
|
[
"value"
] | true
| 2
| 6
|
angular/angular
| 99,544
|
jsdoc
| false
|
|
get_log_events_async
|
async def get_log_events_async(
self,
log_group: str,
log_stream_name: str,
start_time: int = 0,
skip: int = 0,
start_from_head: bool = True,
) -> AsyncGenerator[Any, dict[str, Any]]:
"""
Yield all the available items in a single log stream.
:param log_group: The name of the log group.
:param log_stream_name: The name of the specific stream.
:param start_time: The time stamp value to start reading the logs from (default: 0).
:param skip: The number of log entries to skip at the start (default: 0).
This is for when there are multiple entries at the same timestamp.
:param start_from_head: whether to start from the beginning (True) of the log or
at the end of the log (False).
"""
next_token = None
while True:
if next_token is not None:
token_arg: dict[str, str] = {"nextToken": next_token}
else:
token_arg = {}
async with await self.get_async_conn() as client:
response = await client.get_log_events(
logGroupName=log_group,
logStreamName=log_stream_name,
startTime=start_time,
startFromHead=start_from_head,
**token_arg,
)
events = response["events"]
event_count = len(events)
if event_count > skip:
events = events[skip:]
skip = 0
else:
skip -= event_count
events = []
for event in events:
await asyncio.sleep(1)
yield event
if next_token != response["nextForwardToken"]:
next_token = response["nextForwardToken"]
|
Yield all the available items in a single log stream.
:param log_group: The name of the log group.
:param log_stream_name: The name of the specific stream.
:param start_time: The time stamp value to start reading the logs from (default: 0).
:param skip: The number of log entries to skip at the start (default: 0).
This is for when there are multiple entries at the same timestamp.
:param start_from_head: whether to start from the beginning (True) of the log or
at the end of the log (False).
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/logs.py
| 179
|
[
"self",
"log_group",
"log_stream_name",
"start_time",
"skip",
"start_from_head"
] |
AsyncGenerator[Any, dict[str, Any]]
| true
| 8
| 6.72
|
apache/airflow
| 43,597
|
sphinx
| false
|
listOffsets
|
ListOffsetsResult listOffsets(Map<TopicPartition, OffsetSpec> topicPartitionOffsets, ListOffsetsOptions options);
|
<p>List offsets for the specified partitions. This operation enables finding
the beginning offset, end offset, as well as the offset matching a timestamp in partitions.
@param topicPartitionOffsets The mapping from partition to the OffsetSpec to look up.
@param options The options to use when retrieving the offsets.
@return The ListOffsetsResult.
|
java
|
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
| 1,344
|
[
"topicPartitionOffsets",
"options"
] |
ListOffsetsResult
| true
| 1
| 6.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
toJSONArray
|
public JSONArray toJSONArray(JSONArray names) {
JSONArray result = new JSONArray();
if (names == null) {
return null;
}
int length = names.length();
if (length == 0) {
return null;
}
for (int i = 0; i < length; i++) {
String name = JSON.toString(names.opt(i));
result.put(opt(name));
}
return result;
}
|
Returns an array with the values corresponding to {@code names}. The array contains
null for names that aren't mapped. This method returns null if {@code names} is
either null or empty.
@param names the names of the properties
@return the array
|
java
|
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONObject.java
| 653
|
[
"names"
] |
JSONArray
| true
| 4
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
unicodeToArray
|
function unicodeToArray(string) {
return string.match(reUnicode) || [];
}
|
Converts a Unicode `string` to an array.
@private
@param {string} string The string to convert.
@returns {Array} Returns the converted array.
|
javascript
|
lodash.js
| 1,402
|
[
"string"
] | false
| 2
| 6.16
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
get
|
static JavaVersion get(final String versionStr) {
if (versionStr == null) {
return null;
}
switch (versionStr) {
case "0.9":
return JAVA_0_9;
case "1.1":
return JAVA_1_1;
case "1.2":
return JAVA_1_2;
case "1.3":
return JAVA_1_3;
case "1.4":
return JAVA_1_4;
case "1.5":
return JAVA_1_5;
case "1.6":
return JAVA_1_6;
case "1.7":
return JAVA_1_7;
case "1.8":
return JAVA_1_8;
case "9":
return JAVA_9;
case "10":
return JAVA_10;
case "11":
return JAVA_11;
case "12":
return JAVA_12;
case "13":
return JAVA_13;
case "14":
return JAVA_14;
case "15":
return JAVA_15;
case "16":
return JAVA_16;
case "17":
return JAVA_17;
case "18":
return JAVA_18;
case "19":
return JAVA_19;
case "20":
return JAVA_20;
case "21":
return JAVA_21;
case "22":
return JAVA_22;
case "23":
return JAVA_23;
case "24":
return JAVA_24;
case "25":
return JAVA_25;
case "26":
return JAVA_26;
case "27":
return JAVA_27;
default:
final float v = toFloatVersion(versionStr);
if (v - 1. < 1.) { // then we need to check decimals > .9
final int firstComma = Math.max(versionStr.indexOf('.'), versionStr.indexOf(','));
final int end = Math.max(versionStr.length(), versionStr.indexOf(',', firstComma));
if (Float.parseFloat(versionStr.substring(firstComma + 1, end)) > .9f) {
return JAVA_RECENT;
}
} else if (v > 10) {
return JAVA_RECENT;
}
return null;
}
}
|
Transforms the given string with a Java version number to the corresponding constant of this enumeration class. This method is used internally.
@param versionStr the Java version as string.
@return the corresponding enumeration constant or <strong>null</strong> if the version is unknown.
|
java
|
src/main/java/org/apache/commons/lang3/JavaVersion.java
| 226
|
[
"versionStr"
] |
JavaVersion
| true
| 5
| 7.52
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
maybe_layout_constraints
|
def maybe_layout_constraints(fn: Callable[..., Any]) -> Optional[Callable[..., Any]]:
"""Get layout constraints. Returns None if there are no layout constraints."""
if not isinstance(fn, torch._ops.OpOverload):
# Only OpOverloads have layout constraints.
return None
if maybe_layout_tag := get_layout_constraint_tag(fn, with_default=False):
return tag_to_layout_constraint(maybe_layout_tag)
if fn in _maybe_layout_constraints:
return _maybe_layout_constraints[fn]
return None
|
Get layout constraints. Returns None if there are no layout constraints.
|
python
|
torch/_inductor/lowering.py
| 171
|
[
"fn"
] |
Optional[Callable[..., Any]]
| true
| 4
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
get_fernet
|
def get_fernet() -> FernetProtocol:
"""
Deferred load of Fernet key.
This function could fail either because Cryptography is not installed
or because the Fernet key is invalid.
:return: Fernet object
:raises: airflow.exceptions.AirflowException if there's a problem trying to load Fernet
"""
from cryptography.fernet import Fernet, MultiFernet
try:
fernet_key = conf.get("core", "FERNET_KEY")
if not fernet_key:
log.warning("empty cryptography key - values will not be stored encrypted.")
return _NullFernet()
fernet = MultiFernet([Fernet(fernet_part.encode("utf-8")) for fernet_part in fernet_key.split(",")])
return _RealFernet(fernet)
except (ValueError, TypeError) as value_error:
raise AirflowException(f"Could not create Fernet object: {value_error}")
|
Deferred load of Fernet key.
This function could fail either because Cryptography is not installed
or because the Fernet key is invalid.
:return: Fernet object
:raises: airflow.exceptions.AirflowException if there's a problem trying to load Fernet
|
python
|
airflow-core/src/airflow/models/crypto.py
| 97
|
[] |
FernetProtocol
| true
| 2
| 7.6
|
apache/airflow
| 43,597
|
unknown
| false
|
localeLookupList
|
public static List<Locale> localeLookupList(final Locale locale, final Locale defaultLocale) {
final List<Locale> list = new ArrayList<>(4);
if (locale != null) {
list.add(locale);
if (!hasVariant(locale)) {
list.add(new Locale(locale.getLanguage(), locale.getCountry()));
}
if (!hasCountry(locale)) {
list.add(new Locale(locale.getLanguage(), StringUtils.EMPTY));
}
if (!list.contains(defaultLocale)) {
list.add(defaultLocale);
}
}
return Collections.unmodifiableList(list);
}
|
Obtains the list of locales to search through when performing a locale search.
<pre>
localeLookupList(Locale("fr", "CA", "xxx"), Locale("en"))
= [Locale("fr", "CA", "xxx"), Locale("fr", "CA"), Locale("fr"), Locale("en"]
</pre>
<p>
The result list begins with the most specific locale, then the next more general and so on, finishing with the default locale. The list will never
contain the same locale twice.
</p>
@param locale the locale to start from, null returns empty list.
@param defaultLocale the default locale to use if no other is found.
@return the unmodifiable list of Locale objects, 0 being locale, not null.
|
java
|
src/main/java/org/apache/commons/lang3/LocaleUtils.java
| 289
|
[
"locale",
"defaultLocale"
] | true
| 5
| 8.08
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
_json_to_math_instruction
|
def _json_to_math_instruction(
cls, json_dict: Optional[str]
) -> Optional["MathInstruction"]: # type: ignore[name-defined] # noqa: F821
"""Convert JSON string to MathInstruction object.
Args:
json_dict: JSON string representation
Returns:
Optional[MathInstruction]: Reconstructed object or None
"""
if json_dict is None:
return None
from cutlass_library import DataType
from cutlass_library.library import MathInstruction, MathOperation, OpcodeClass
mi_dict = json.loads(json_dict)
# Convert string enum names back to enum values
element_a = cls._json_to_enum(mi_dict["element_a"], DataType)
element_b = cls._json_to_enum(mi_dict["element_b"], DataType)
element_acc = cls._json_to_enum(mi_dict["element_accumulator"], DataType)
# Get the opcode_class enum
opcode_class = cls._json_to_enum(mi_dict["opcode_class"], OpcodeClass)
# Get the math_operation enum
math_op = cls._json_to_enum(mi_dict["math_operation"], MathOperation)
# Create the MathInstruction object
math_instruction_obj = MathInstruction(
instruction_shape=mi_dict["instruction_shape"],
element_a=element_a,
element_b=element_b,
element_accumulator=element_acc,
opcode_class=opcode_class,
math_operation=math_op,
)
# Add element_scale_factor if it exists
if (
"element_scale_factor" in mi_dict
and mi_dict["element_scale_factor"] is not None
):
math_instruction_obj.element_scale_factor = cls._json_to_enum(
mi_dict["element_scale_factor"], DataType
)
return math_instruction_obj
|
Convert JSON string to MathInstruction object.
Args:
json_dict: JSON string representation
Returns:
Optional[MathInstruction]: Reconstructed object or None
|
python
|
torch/_inductor/codegen/cuda/serialization.py
| 346
|
[
"cls",
"json_dict"
] |
Optional["MathInstruction"]
| true
| 4
| 7.44
|
pytorch/pytorch
| 96,034
|
google
| false
|
getUnconditionalClasses
|
public Set<String> getUnconditionalClasses() {
Set<String> filtered = new HashSet<>(this.unconditionalClasses);
this.exclusions.forEach(filtered::remove);
return Collections.unmodifiableSet(filtered);
}
|
Returns the names of the classes that were evaluated but were not conditional.
@return the names of the unconditional classes
|
java
|
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionEvaluationReport.java
| 149
|
[] | true
| 1
| 6.88
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
reindex_indexer
|
def reindex_indexer(
self,
new_axis: Index,
indexer: npt.NDArray[np.intp] | None,
axis: AxisInt,
fill_value=None,
allow_dups: bool = False,
only_slice: bool = False,
*,
use_na_proxy: bool = False,
) -> Self:
"""
Parameters
----------
new_axis : Index
indexer : ndarray[intp] or None
axis : int
fill_value : object, default None
allow_dups : bool, default False
only_slice : bool, default False
Whether to take views, not copies, along columns.
use_na_proxy : bool, default False
Whether to use an np.void ndarray for newly introduced columns.
pandas-indexer with -1's only.
"""
if indexer is None:
if new_axis is self.axes[axis]:
return self
result = self.copy(deep=False)
result.axes = list(self.axes)
result.axes[axis] = new_axis
return result
# Should be intp, but in some cases we get int64 on 32bit builds
assert isinstance(indexer, np.ndarray)
# some axes don't allow reindexing with dups
if not allow_dups:
self.axes[axis]._validate_can_reindex(indexer)
if axis >= self.ndim:
raise IndexError("Requested axis not found in manager")
if axis == 0:
new_blocks = list(
self._slice_take_blocks_ax0(
indexer,
fill_value=fill_value,
only_slice=only_slice,
use_na_proxy=use_na_proxy,
)
)
else:
new_blocks = [
blk.take_nd(
indexer,
axis=1,
fill_value=(
fill_value if fill_value is not None else blk.fill_value
),
)
for blk in self.blocks
]
new_axes = list(self.axes)
new_axes[axis] = new_axis
new_mgr = type(self).from_blocks(new_blocks, new_axes)
if axis == 1:
# We can avoid the need to rebuild these
new_mgr._blknos = self.blknos.copy()
new_mgr._blklocs = self.blklocs.copy()
return new_mgr
|
Parameters
----------
new_axis : Index
indexer : ndarray[intp] or None
axis : int
fill_value : object, default None
allow_dups : bool, default False
only_slice : bool, default False
Whether to take views, not copies, along columns.
use_na_proxy : bool, default False
Whether to use an np.void ndarray for newly introduced columns.
pandas-indexer with -1's only.
|
python
|
pandas/core/internals/managers.py
| 788
|
[
"self",
"new_axis",
"indexer",
"axis",
"fill_value",
"allow_dups",
"only_slice",
"use_na_proxy"
] |
Self
| true
| 9
| 6.8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
len
|
def len(self) -> Series:
"""
Return the length of each list in the Series.
Returns
-------
pandas.Series
The length of each list.
See Also
--------
str.len : Python built-in function returning the length of an object.
Series.size : Returns the length of the Series.
StringMethods.len : Compute the length of each element in the Series/Index.
Examples
--------
>>> import pyarrow as pa
>>> s = pd.Series(
... [
... [1, 2, 3],
... [3],
... ],
... dtype=pd.ArrowDtype(pa.list_(pa.int64())),
... )
>>> s.list.len()
0 3
1 1
dtype: int32[pyarrow]
"""
from pandas import Series
value_lengths = pc.list_value_length(self._pa_array)
return Series(
value_lengths,
dtype=ArrowDtype(value_lengths.type),
index=self._data.index,
name=self._data.name,
)
|
Return the length of each list in the Series.
Returns
-------
pandas.Series
The length of each list.
See Also
--------
str.len : Python built-in function returning the length of an object.
Series.size : Returns the length of the Series.
StringMethods.len : Compute the length of each element in the Series/Index.
Examples
--------
>>> import pyarrow as pa
>>> s = pd.Series(
... [
... [1, 2, 3],
... [3],
... ],
... dtype=pd.ArrowDtype(pa.list_(pa.int64())),
... )
>>> s.list.len()
0 3
1 1
dtype: int32[pyarrow]
|
python
|
pandas/core/arrays/arrow/accessors.py
| 83
|
[
"self"
] |
Series
| true
| 1
| 7.28
|
pandas-dev/pandas
| 47,362
|
unknown
| false
|
appendExportsOfImportEqualsDeclaration
|
function appendExportsOfImportEqualsDeclaration(statements: Statement[] | undefined, decl: ImportEqualsDeclaration): Statement[] | undefined {
if (currentModuleInfo.exportEquals) {
return statements;
}
return appendExportsOfDeclaration(statements, new IdentifierNameMap(), decl);
}
|
Appends the exports of an ImportEqualsDeclaration to a statement list, returning the
statement list.
@param statements A statement list to which the down-level export statements are to be
appended. If `statements` is `undefined`, a new array is allocated if statements are
appended.
@param decl The declaration whose exports are to be recorded.
|
typescript
|
src/compiler/transformers/module/module.ts
| 2,003
|
[
"statements",
"decl"
] | true
| 2
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
invocableClone
|
@Override
public MethodInvocation invocableClone(@Nullable Object... arguments) {
// Force initialization of the user attributes Map,
// for having a shared Map reference in the clone.
if (this.userAttributes == null) {
this.userAttributes = new HashMap<>();
}
// Create the MethodInvocation clone.
try {
ReflectiveMethodInvocation clone = (ReflectiveMethodInvocation) clone();
clone.arguments = arguments;
return clone;
}
catch (CloneNotSupportedException ex) {
throw new IllegalStateException(
"Should be able to clone object of type [" + getClass() + "]: " + ex);
}
}
|
This implementation returns a shallow copy of this invocation object,
using the given arguments array for the clone.
<p>We want a shallow copy in this case: We want to use the same interceptor
chain and other object references, but we want an independent value for the
current interceptor index.
@see java.lang.Object#clone()
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/ReflectiveMethodInvocation.java
| 220
|
[] |
MethodInvocation
| true
| 3
| 7.04
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
parseAsn
|
static Long parseAsn(final String asn) {
if (asn == null || Strings.hasText(asn) == false) {
return null;
} else {
String stripped = asn.toUpperCase(Locale.ROOT).replaceAll("AS", "").trim();
try {
return Long.parseLong(stripped);
} catch (NumberFormatException e) {
logger.trace("Unable to parse non-compliant ASN string [{}]", asn);
return null;
}
}
}
|
Laxly parses a string that (ideally) looks like 'AS123' into a Long like 123L (or null, if such parsing isn't possible).
@param asn a potentially empty (or null) ASN string that is expected to contain 'AS' and then a parsable long
@return the parsed asn
|
java
|
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IpinfoIpDataLookups.java
| 130
|
[
"asn"
] |
Long
| true
| 4
| 8.24
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
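The lax ASN parsing above translates naturally to Python; this is a sketch of the same stripping-then-parsing logic, not the Elasticsearch implementation.

```python
def parse_asn(asn):
    """Laxly parse a string like 'AS123' (or ' as123 ') into the int 123.

    Returns None for None, blank, or unparsable input, mirroring the
    Java helper above.
    """
    if asn is None or not asn.strip():
        return None
    # Uppercase, strip every 'AS', then trim, as the Java code does.
    stripped = asn.upper().replace("AS", "").strip()
    try:
        return int(stripped)
    except ValueError:
        return None
```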
definePackage
|
private void definePackage(String className, String packageName) {
if (this.undefinablePackages.contains(packageName)) {
return;
}
String packageEntryName = packageName.replace('.', '/') + "/";
String classEntryName = className.replace('.', '/') + ".class";
for (URL url : this.urls) {
try {
JarFile jarFile = getJarFile(url);
if (jarFile != null) {
if (hasEntry(jarFile, classEntryName) && hasEntry(jarFile, packageEntryName)
&& jarFile.getManifest() != null) {
definePackage(packageName, jarFile.getManifest(), url);
return;
}
}
}
catch (IOException ex) {
// Ignore
}
}
this.undefinablePackages.add(packageName);
}
|
Define a package before a {@code findClass} call is made. This is necessary to
ensure that the appropriate manifest for nested JARs is associated with the
package.
@param className the class name being found
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/net/protocol/jar/JarUrlClassLoader.java
| 138
|
[
"className",
"packageName"
] |
void
| true
| 7
| 6.72
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
hasText
|
private static boolean hasText(CharSequence str) {
if (str == null || str.length() == 0) {
return false;
}
int strLen = str.length();
for (int i = 0; i < strLen; i++) {
if (Character.isWhitespace(str.charAt(i)) == false) {
return true;
}
}
return false;
}
|
Checks whether the given CharSequence contains any text.
@return {@code true} iff the sequence is non-null, non-empty, and contains at least one non-whitespace character.
|
java
|
libs/core/src/main/java/org/elasticsearch/core/Booleans.java
| 66
|
[
"str"
] | true
| 5
| 6.4
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
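The `hasText` helper is a whitespace scan; an equivalent Python check, shown here only as a sketch, reduces to the built-in `str.isspace`.

```python
def has_text(s):
    """Return True iff s is non-None, non-empty, and contains at least
    one non-whitespace character (the contract of hasText above)."""
    # bool("") and bool(None) are both False, so empty/None short-circuit.
    return bool(s) and not s.isspace()
```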
truncateTo
|
public int truncateTo(int targetSize) throws IOException {
int originalSize = sizeInBytes();
if (targetSize > originalSize || targetSize < 0)
throw new KafkaException("Attempt to truncate log segment " + file + " to " + targetSize + " bytes failed, " +
" size of this log segment is " + originalSize + " bytes.");
if (targetSize < (int) channel.size()) {
channel.truncate(targetSize);
size.set(targetSize);
}
return originalSize - targetSize;
}
|
Truncate this file message set to the given size in bytes. Note that this API does no checking that the
given size falls on a valid message boundary.
In some versions of the JDK truncating to the same size as the file message set will cause an
update of the file's mtime, so truncate is only performed if the targetSize is smaller than the
size of the underlying FileChannel.
It is expected that no other threads will do writes to the log when this function is called.
@param targetSize The size to truncate to. Must be between 0 and sizeInBytes.
@return The number of bytes truncated off
|
java
|
clients/src/main/java/org/apache/kafka/common/record/FileRecords.java
| 278
|
[
"targetSize"
] | true
| 4
| 8.08
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
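The validate-then-conditionally-truncate pattern above can be sketched with `os.truncate`; this is an illustrative Python analogue, not Kafka's channel-based code.

```python
import os

def truncate_to(path, target_size):
    """Truncate the file at `path` down to `target_size` bytes.

    Raises ValueError on a negative or too-large target and, like the
    Kafka method above, only touches the file when it is actually
    larger than the target (avoiding a needless mtime update).
    Returns the number of bytes truncated off.
    """
    original_size = os.path.getsize(path)
    if target_size > original_size or target_size < 0:
        raise ValueError(
            f"cannot truncate {path} ({original_size} bytes) to {target_size}")
    if target_size < original_size:
        os.truncate(path, target_size)
    return original_size - target_size
```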
create_logger
|
def create_logger(app: App) -> logging.Logger:
"""Get the Flask app's logger and configure it if needed.
The logger name will be the same as
:attr:`app.import_name <flask.Flask.name>`.
When :attr:`~flask.Flask.debug` is enabled, set the logger level to
:data:`logging.DEBUG` if it is not set.
If there is no handler for the logger's effective level, add a
:class:`~logging.StreamHandler` for
:func:`~flask.logging.wsgi_errors_stream` with a basic format.
"""
logger = logging.getLogger(app.name)
if app.debug and not logger.level:
logger.setLevel(logging.DEBUG)
if not has_level_handler(logger):
logger.addHandler(default_handler)
return logger
|
Get the Flask app's logger and configure it if needed.
The logger name will be the same as
:attr:`app.import_name <flask.Flask.name>`.
When :attr:`~flask.Flask.debug` is enabled, set the logger level to
:data:`logging.DEBUG` if it is not set.
If there is no handler for the logger's effective level, add a
:class:`~logging.StreamHandler` for
:func:`~flask.logging.wsgi_errors_stream` with a basic format.
|
python
|
src/flask/logging.py
| 58
|
[
"app"
] |
logging.Logger
| true
| 4
| 6.4
|
pallets/flask
| 70,946
|
unknown
| false
|
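The level-and-handler logic of `create_logger` can be exercised with a minimal stand-in app; the `App` class and the simplified handler check below are assumptions for illustration (Flask's real check walks the logger hierarchy via `has_level_handler`).

```python
import logging

class App:
    """Minimal stand-in for a Flask app: just a name and a debug flag."""
    def __init__(self, name, debug):
        self.name = name
        self.debug = debug

def create_logger(app):
    """Mirror of the Flask helper above: name the logger after the app
    and raise it to DEBUG in debug mode if no level was set."""
    logger = logging.getLogger(app.name)
    if app.debug and not logger.level:
        logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # simplified stand-in for has_level_handler
        logger.addHandler(logging.StreamHandler())
    return logger

log = create_logger(App("demo_app", debug=True))
```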
completeAsync
|
@Override
public CompletableFuture<T> completeAsync(Supplier<? extends T> supplier, Executor executor) {
throw erroneousCompletionException();
}
|
Not supported for user code: this future may only be completed by the Kafka clients internally, so this override rejects the call instead of completing the future.
@param supplier the supplier of the completion value.
@param executor the executor to use for asynchronous execution.
@return never returns normally; always throws to signal erroneous completion.
|
java
|
clients/src/main/java/org/apache/kafka/common/internals/KafkaCompletableFuture.java
| 77
|
[
"supplier",
"executor"
] | true
| 1
| 6.64
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
initializeKeyParameterDetails
|
private static List<CacheParameterDetail> initializeKeyParameterDetails(List<CacheParameterDetail> allParameters) {
List<CacheParameterDetail> all = new ArrayList<>();
List<CacheParameterDetail> annotated = new ArrayList<>();
for (CacheParameterDetail allParameter : allParameters) {
if (!allParameter.isValue()) {
all.add(allParameter);
}
if (allParameter.isKey()) {
annotated.add(allParameter);
}
}
return (annotated.isEmpty() ? all : annotated);
}
|
Return the {@link CacheInvocationParameter} for the parameters that are to be
used to compute the key.
<p>Per the spec, if some method parameters are annotated with
{@link javax.cache.annotation.CacheKey}, only those parameters should be part
of the key. If none are annotated, all parameters except the parameter annotated
with {@link javax.cache.annotation.CacheValue} should be part of the key.
<p>The method arguments must match the signature of the related method invocation
@param values the parameters value for a particular invocation
@return the {@link CacheInvocationParameter} instances for the parameters to be
used to compute the key
|
java
|
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/AbstractJCacheKeyOperation.java
| 93
|
[
"allParameters"
] | true
| 4
| 7.44
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
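The selection rule above ("annotated key parameters win; otherwise every non-value parameter") is compact enough to sketch in Python. The dict-based parameter records stand in for `CacheParameterDetail` and are an assumption for illustration.

```python
def key_parameters(params):
    """Select the parameters used to compute a cache key.

    Each param is a dict with 'name', 'is_key', 'is_value' flags.
    Per the JCache rule above: if any parameter is annotated as a key,
    only those count; otherwise every non-value parameter counts.
    """
    non_value = [p for p in params if not p["is_value"]]
    annotated = [p for p in params if p["is_key"]]
    return annotated or non_value

params = [
    {"name": "id",      "is_key": True,  "is_value": False},
    {"name": "payload", "is_key": False, "is_value": True},
    {"name": "flag",    "is_key": False, "is_value": False},
]
```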
addContextValue
|
@Override
public ContextedException addContextValue(final String label, final Object value) {
exceptionContext.addContextValue(label, value);
return this;
}
|
Adds information helpful to a developer in diagnosing and correcting the problem.
For the information to be meaningful, the value passed should have a reasonable
toString() implementation.
Different values can be added with the same label multiple times.
<p>
Note: This exception is only serializable if the object added is serializable.
</p>
@param label a textual label associated with information, {@code null} not recommended
@param value information needed to understand exception, may be {@code null}
@return {@code this}, for method chaining, not {@code null}
|
java
|
src/main/java/org/apache/commons/lang3/exception/ContextedException.java
| 167
|
[
"label",
"value"
] |
ContextedException
| true
| 1
| 6.56
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
get
|
public static <O, T extends Throwable> O get(final FailableSupplier<O, T> supplier) {
try {
return supplier.get();
} catch (final Throwable t) {
throw rethrow(t);
}
}
|
Invokes a supplier, and returns the result.
@param supplier The supplier to invoke.
@param <O> The suppliers output type.
@param <T> The type of checked exception, which the supplier can throw.
@return The object, which has been created by the supplier
@since 3.10
|
java
|
src/main/java/org/apache/commons/lang3/Functions.java
| 476
|
[
"supplier"
] |
O
| true
| 2
| 8.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
convert_cross_package_dependencies_to_table
|
def convert_cross_package_dependencies_to_table(
cross_package_dependencies: list[str],
markdown: bool = True,
) -> str:
"""
Converts cross-package dependencies to a Markdown table
:param cross_package_dependencies: list of cross-package dependencies
:param markdown: if True, Markdown format is used else rst
:return: formatted table
"""
from tabulate import tabulate
headers = ["Dependent package", "Extra"]
table_data = []
prefix = "apache-airflow-providers-"
base_url = "https://airflow.apache.org/docs/"
for dependency in cross_package_dependencies:
pip_package_name = f"{prefix}{dependency.replace('.', '-')}"
url_suffix = f"{dependency.replace('.', '-')}"
if markdown:
url = f"[{pip_package_name}]({base_url}{url_suffix})"
else:
url = f"`{pip_package_name} <{base_url}{prefix}{url_suffix}>`_"
table_data.append((url, f"`{dependency}`" if markdown else f"``{dependency}``"))
return tabulate(table_data, headers=headers, tablefmt="pipe" if markdown else "rst")
|
Converts cross-package dependencies to a Markdown table
:param cross_package_dependencies: list of cross-package dependencies
:param markdown: if True, Markdown format is used else rst
:return: formatted table
|
python
|
dev/breeze/src/airflow_breeze/utils/packages.py
| 607
|
[
"cross_package_dependencies",
"markdown"
] |
str
| true
| 6
| 7.12
|
apache/airflow
| 43,597
|
sphinx
| false
|
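The per-dependency link formatting inside the loop above can be isolated into a small helper; this sketch mirrors the original's URL construction faithfully, including its asymmetry (the Markdown branch omits the `apache-airflow-providers-` prefix from the URL while the rST branch includes it).

```python
def dependency_link(dependency, markdown=True):
    """Format one cross-package dependency as a Markdown or rST link,
    following the URL scheme in the Airflow helper above."""
    prefix = "apache-airflow-providers-"
    base_url = "https://airflow.apache.org/docs/"
    pip_name = f"{prefix}{dependency.replace('.', '-')}"
    suffix = dependency.replace(".", "-")
    if markdown:
        # Note: the original Markdown branch does not prepend the prefix.
        return f"[{pip_name}]({base_url}{suffix})"
    return f"`{pip_name} <{base_url}{prefix}{suffix}>`_"
```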
_builtin_constant_ids
|
def _builtin_constant_ids() -> dict[int, str]:
"""
Collects constant builtins by eliminating callable items.
"""
rv = {
id(v): f"builtins.{k}"
for k, v in builtins.__dict__.items()
if not k.startswith("_") and not callable(v)
}
return rv
|
Collects constant builtins by eliminating callable items.
|
python
|
torch/_dynamo/trace_rules.py
| 3,218
|
[] |
dict[int, str]
| true
| 2
| 6.4
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
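The comprehension above can be exercised directly: its keys are the `id()`s of public, non-callable builtins such as `True` and `Ellipsis`, while callables like `print` are filtered out.

```python
import builtins

def builtin_constant_ids():
    """Map id(value) -> dotted name for every public, non-callable
    builtin (the same filter as the torch._dynamo helper above)."""
    return {
        id(v): f"builtins.{k}"
        for k, v in builtins.__dict__.items()
        if not k.startswith("_") and not callable(v)
    }

constants = builtin_constant_ids()
```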
createApplicationListener
|
ApplicationListener<?> createApplicationListener(String beanName, Class<?> type, Method method);
|
Create an {@link ApplicationListener} for the specified method.
@param beanName the name of the bean
@param type the target type of the instance
@param method the {@link EventListener} annotated method
@return an application listener, suitable to invoke the specified method
|
java
|
spring-context/src/main/java/org/springframework/context/event/EventListenerFactory.java
| 46
|
[
"beanName",
"type",
"method"
] | true
| 1
| 6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
get
|
public static LoggingSystem get(ClassLoader classLoader) {
String loggingSystemClassName = System.getProperty(SYSTEM_PROPERTY);
if (StringUtils.hasLength(loggingSystemClassName)) {
if (NONE.equals(loggingSystemClassName)) {
return new NoOpLoggingSystem();
}
return get(classLoader, loggingSystemClassName);
}
LoggingSystem loggingSystem = SYSTEM_FACTORY.getLoggingSystem(classLoader);
Assert.state(loggingSystem != null, "No suitable logging system located");
return loggingSystem;
}
|
Detect and return the logging system in use. Supports Logback and Java Logging.
@param classLoader the classloader
@return the logging system
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/LoggingSystem.java
| 162
|
[
"classLoader"
] |
LoggingSystem
| true
| 3
| 7.76
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
seq_concat_item
|
def seq_concat_item(seq, item):
"""Return copy of sequence seq with item added.
Returns:
Sequence: if seq is a tuple, the result will be a tuple,
otherwise it depends on the implementation of ``__add__``.
"""
return seq + (item,) if isinstance(seq, tuple) else seq + [item]
|
Return copy of sequence seq with item added.
Returns:
Sequence: if seq is a tuple, the result will be a tuple,
otherwise it depends on the implementation of ``__add__``.
|
python
|
celery/utils/functional.py
| 373
|
[
"seq",
"item"
] | false
| 2
| 6.96
|
celery/celery
| 27,741
|
unknown
| false
|
|
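The tuple-preserving append documented above is quick to demonstrate; this sketch restates the one-liner so its copy semantics can be checked.

```python
def seq_concat_item(seq, item):
    """Return a copy of seq with item appended, preserving tuple-ness
    (the same behaviour as the Celery helper above)."""
    return seq + (item,) if isinstance(seq, tuple) else seq + [item]
```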
create
|
public static ZeroBucket create(double zeroThreshold, long count) {
if (zeroThreshold == 0) {
return minimalWithCount(count);
}
return new ZeroBucket(zeroThreshold, count);
}
|
Creates a zero bucket from the given threshold represented as double.
@param zeroThreshold the zero threshold defining the bucket range [-zeroThreshold, +zeroThreshold], must be non-negative
@param count the number of values in the bucket
@return the new {@link ZeroBucket}
|
java
|
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ZeroBucket.java
| 127
|
[
"zeroThreshold",
"count"
] |
ZeroBucket
| true
| 2
| 7.28
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
visitForInitializer
|
function visitForInitializer(node: ForInitializer): ForInitializer {
if (shouldHoistForInitializer(node)) {
let expressions: Expression[] | undefined;
for (const variable of node.declarations) {
expressions = append(expressions, transformInitializedVariable(variable, /*isExportedDeclaration*/ false));
if (!variable.initializer) {
hoistBindingElement(variable);
}
}
return expressions ? factory.inlineExpressions(expressions) : factory.createOmittedExpression();
}
else {
return visitNode(node, discardedValueVisitor, isForInitializer);
}
}
|
Visits the initializer of a ForStatement, ForInStatement, or ForOfStatement
@param node The node to visit.
|
typescript
|
src/compiler/transformers/module/system.ts
| 1,370
|
[
"node"
] | true
| 5
| 6.08
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
intersection
|
public ComposablePointcut intersection(MethodMatcher other) {
this.methodMatcher = MethodMatchers.intersection(this.methodMatcher, other);
return this;
}
|
Apply an intersection with the given MethodMatcher.
@param other the MethodMatcher to apply an intersection with
@return this composable pointcut (for call chaining)
|
java
|
spring-aop/src/main/java/org/springframework/aop/support/ComposablePointcut.java
| 146
|
[
"other"
] |
ComposablePointcut
| true
| 1
| 6.16
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
equals
|
@Deprecated
public static boolean equals(final CharSequence cs1, final CharSequence cs2) {
return Strings.CS.equals(cs1, cs2);
}
|
Compares two CharSequences, returning {@code true} if they represent equal sequences of characters.
<p>
{@code null}s are handled without exceptions. Two {@code null} references are considered to be equal. The comparison is <strong>case-sensitive</strong>.
</p>
<pre>
StringUtils.equals(null, null) = true
StringUtils.equals(null, "abc") = false
StringUtils.equals("abc", null) = false
StringUtils.equals("abc", "abc") = true
StringUtils.equals("abc", "ABC") = false
</pre>
@param cs1 the first CharSequence, may be {@code null}.
@param cs2 the second CharSequence, may be {@code null}.
@return {@code true} if the CharSequences are equal (case-sensitive), or both {@code null}.
@since 3.0 Changed signature from equals(String, String) to equals(CharSequence, CharSequence)
@see Object#equals(Object)
@see #equalsIgnoreCase(CharSequence, CharSequence)
@deprecated Use {@link Strings#equals(CharSequence, CharSequence) Strings.CS.equals(CharSequence, CharSequence)}.
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 1,787
|
[
"cs1",
"cs2"
] | true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
wrapper
|
def wrapper(fn: Callable[_P, _R]) -> Callable[_P, _R]:
"""Wrap the function to retrieve from cache.
Args:
fn: The function to wrap (not actually called).
Returns:
A wrapped version of the function.
"""
# If caching is disabled, always raise KeyError (cache miss)
if not config.IS_CACHING_MODULE_ENABLED():
def always_miss(*args: _P.args, **kwargs: _P.kwargs) -> _R:
raise KeyError("Caching is disabled")
return always_miss
def inner(*args: _P.args, **kwargs: _P.kwargs) -> _R:
"""Retrieve the cached result without calling the function.
Args:
*args: Positional arguments to generate the cache key.
**kwargs: Keyword arguments to generate the cache key.
Returns:
The cached result (decoded if decoder is provided).
Raises:
KeyError: If no cached result exists for the given parameters.
"""
# Generate cache key from parameters
cache_key = self._make_key(custom_params_encoder, *args, **kwargs)
# Check if result is cached
cached_hit = self._cache.get(cache_key)
if cached_hit is None:
raise KeyError(f"No cached result found for key: {cache_key}")
# Extract the cached value
cache_entry = cast(CacheEntry, cached_hit.value)
# Decode and return the cached result
if custom_result_decoder is not None:
# Get the decoder function by calling the factory with params
decoder_fn = custom_result_decoder(*args, **kwargs)
return decoder_fn(cast(_EncodedR, cache_entry.encoded_result))
return cast(_R, cache_entry.encoded_result)
return inner
|
Wrap the function to retrieve from cache.
Args:
fn: The function to wrap (not actually called).
Returns:
A wrapped version of the function.
|
python
|
torch/_inductor/runtime/caching/interfaces.py
| 543
|
[
"fn"
] |
Callable[_P, _R]
| true
| 4
| 8.08
|
pytorch/pytorch
| 96,034
|
google
| false
|
cartesian
|
def cartesian(arrays, out=None):
"""Generate a cartesian product of input arrays.
Parameters
----------
arrays : list of array-like
1-D arrays to form the cartesian product of.
out : ndarray of shape (M, len(arrays)), default=None
Array to place the cartesian product in.
Returns
-------
out : ndarray of shape (M, len(arrays))
Array containing the cartesian products formed of input arrays.
If not provided, the `dtype` of the output array is set to the most
permissive `dtype` of the input arrays, according to NumPy type
promotion.
.. versionadded:: 1.2
Add support for arrays of different types.
Notes
-----
This function may not be used on more than 32 arrays
because the underlying numpy functions do not support it.
Examples
--------
>>> from sklearn.utils.extmath import cartesian
>>> cartesian(([1, 2, 3], [4, 5], [6, 7]))
array([[1, 4, 6],
[1, 4, 7],
[1, 5, 6],
[1, 5, 7],
[2, 4, 6],
[2, 4, 7],
[2, 5, 6],
[2, 5, 7],
[3, 4, 6],
[3, 4, 7],
[3, 5, 6],
[3, 5, 7]])
"""
arrays = [np.asarray(x) for x in arrays]
shape = (len(x) for x in arrays)
ix = np.indices(shape)
ix = ix.reshape(len(arrays), -1).T
if out is None:
dtype = np.result_type(*arrays) # find the most permissive dtype
out = np.empty_like(ix, dtype=dtype)
for n, arr in enumerate(arrays):
out[:, n] = arrays[n][ix[:, n]]
return out
|
Generate a cartesian product of input arrays.
Parameters
----------
arrays : list of array-like
1-D arrays to form the cartesian product of.
out : ndarray of shape (M, len(arrays)), default=None
Array to place the cartesian product in.
Returns
-------
out : ndarray of shape (M, len(arrays))
Array containing the cartesian products formed of input arrays.
If not provided, the `dtype` of the output array is set to the most
permissive `dtype` of the input arrays, according to NumPy type
promotion.
.. versionadded:: 1.2
Add support for arrays of different types.
Notes
-----
This function may not be used on more than 32 arrays
because the underlying numpy functions do not support it.
Examples
--------
>>> from sklearn.utils.extmath import cartesian
>>> cartesian(([1, 2, 3], [4, 5], [6, 7]))
array([[1, 4, 6],
[1, 4, 7],
[1, 5, 6],
[1, 5, 7],
[2, 4, 6],
[2, 4, 7],
[2, 5, 6],
[2, 5, 7],
[3, 4, 6],
[3, 4, 7],
[3, 5, 6],
[3, 5, 7]])
|
python
|
sklearn/utils/extmath.py
| 861
|
[
"arrays",
"out"
] | false
| 3
| 7.68
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
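The NumPy-free core of `cartesian` is `itertools.product`; the sketch below reproduces the row ordering of the docstring example without the dtype promotion or the preallocated `out` array.

```python
from itertools import product

def cartesian_rows(arrays):
    """Return the cartesian product of the input sequences as a list of
    rows, in the same lexicographic order as the sklearn example."""
    return [list(row) for row in product(*arrays)]

rows = cartesian_rows(([1, 2, 3], [4, 5], [6, 7]))
```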