Dataset columns (type, observed range / distinct values):

- function_name: string, lengths 1 to 57
- function_code: string, lengths 20 to 4.99k
- documentation: string, lengths 50 to 2k
- language: string, 5 classes
- file_path: string, lengths 8 to 166
- line_number: int32, 4 to 16.7k
- parameters: list, lengths 0 to 20
- return_type: string, lengths 0 to 131
- has_type_hints: bool, 2 classes
- complexity: int32, 1 to 51
- quality_score: float32, 6 to 9.68
- repo_name: string, 34 classes
- repo_stars: int32, 2.9k to 242k
- docstring_style: string, 7 classes
- is_async: bool, 2 classes
maybe_droplevels
|
def maybe_droplevels(index: Index, key) -> Index:
"""
Attempt to drop level or levels from the given index.
Parameters
----------
index : Index
key : scalar or tuple
Returns
-------
Index
"""
# drop levels
original_index = index
if isinstance(key, tuple):
# Caller is responsible for ensuring the key is not an entry in the first
# level of the MultiIndex.
for _ in key:
try:
index = index._drop_level_numbers([0])
except ValueError:
# we have dropped too much, so back out
return original_index
else:
try:
index = index._drop_level_numbers([0])
except ValueError:
pass
return index
|
Attempt to drop level or levels from the given index.
Parameters
----------
index : Index
key : scalar or tuple
Returns
-------
Index
|
python
|
pandas/core/indexes/multi.py
| 4,378
|
[
"index",
"key"
] |
Index
| true
| 4
| 7.04
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
withHashes
|
public StandardStackTracePrinter withHashes(@Nullable ToIntFunction<StackTraceElement> frameHasher) {
return new StandardStackTracePrinter(this.options, this.maximumLength, this.lineSeparator, this.filter,
this.frameFilter, this.formatter, this.frameFormatter, frameHasher);
}
|
Return a new {@link StandardStackTracePrinter} from this one that changes whether hashes
should be generated and printed for each stacktrace.
@param frameHasher the function used to generate a hash for each frame, or {@code null} to disable hashes
@return a new {@link StandardStackTracePrinter} instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/StandardStackTracePrinter.java
| 276
|
[
"frameHasher"
] |
StandardStackTracePrinter
| true
| 1
| 6.16
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
setText
|
public void setText(String text, boolean html) throws MessagingException {
Assert.notNull(text, "Text must not be null");
MimePart partToUse;
if (isMultipart()) {
partToUse = getMainPart();
}
else {
partToUse = this.mimeMessage;
}
if (html) {
setHtmlTextToMimePart(partToUse, text);
}
else {
setPlainTextToMimePart(partToUse, text);
}
}
|
Set the given text directly as content in non-multipart mode
or as default body part in multipart mode.
The "html" flag determines the content type to apply.
<p><b>NOTE:</b> Invoke {@link #addInline} <i>after</i> {@code setText};
else, mail readers might not be able to resolve inline references correctly.
@param text the text for the message
@param html whether to apply content type "text/html" for an
HTML mail, using default content type ("text/plain") else
@throws MessagingException in case of errors
|
java
|
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
| 806
|
[
"text",
"html"
] |
void
| true
| 3
| 6.72
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
setCurrentProxiedBeanName
|
static void setCurrentProxiedBeanName(@Nullable String beanName) {
if (beanName != null) {
currentProxiedBeanName.set(beanName);
}
else {
currentProxiedBeanName.remove();
}
}
|
Set the name of the currently proxied bean instance.
@param beanName the name of the bean, or {@code null} to reset it
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/ProxyCreationContext.java
| 54
|
[
"beanName"
] |
void
| true
| 2
| 6.56
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
partition
|
public static <T extends @Nullable Object> UnmodifiableIterator<List<T>> partition(
Iterator<T> iterator, int size) {
return partitionImpl(iterator, size, false);
}
|
Divides an iterator into unmodifiable sublists of the given size (the final list may be
smaller). For example, partitioning an iterator containing {@code [a, b, c, d, e]} with a
partition size of 3 yields {@code [[a, b, c], [d, e]]} -- an outer iterator containing two
inner lists of three and two elements, all in the original order.
<p>The returned lists implement {@link java.util.RandomAccess}.
<p><b>Note:</b> The current implementation eagerly allocates storage for {@code size} elements.
As a consequence, passing values like {@code Integer.MAX_VALUE} can lead to {@link
OutOfMemoryError}.
@param iterator the iterator to return a partitioned view of
@param size the desired size of each partition (the last may be smaller)
@return an iterator of immutable lists containing the elements of {@code iterator} divided into
partitions
@throws IllegalArgumentException if {@code size} is nonpositive
|
java
|
android/guava/src/com/google/common/collect/Iterators.java
| 602
|
[
"iterator",
"size"
] | true
| 1
| 6.48
|
google/guava
| 51,352
|
javadoc
| false
|
|
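A minimal Python sketch of the same partitioning contract (fixed-size chunks in the original order, a smaller final chunk, an error on a non-positive size) using only the standard library:

```python
from itertools import islice
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def partition(iterator: Iterator[T], size: int) -> Iterator[List[T]]:
    """Yield successive lists of `size` elements drawn from `iterator`;
    the final list may be smaller. Raises ValueError if size <= 0."""
    if size <= 0:
        raise ValueError("size must be positive")
    while True:
        chunk = list(islice(iterator, size))
        if not chunk:
            return
        yield chunk
```

Unlike the Guava version, this sketch does not eagerly allocate `size` slots per chunk, so a huge `size` only costs what the input actually yields.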
detectMapValueReplacement
|
private @Nullable ConfigurationMetadataProperty detectMapValueReplacement(String fullId) {
int lastDot = fullId.lastIndexOf('.');
if (lastDot == -1) {
return null;
}
ConfigurationMetadataProperty metadata = this.allProperties.get(fullId.substring(0, lastDot));
if (metadata != null && isMapType(metadata)) {
return metadata;
}
return null;
}
|
Attempt to find a map-typed parent property that could act as a replacement for the
given property id (for example matching {@code my.map.key} against metadata for
{@code my.map}).
@param fullId the full id of the property
@return the metadata of the map-typed parent, or {@code null} if there is none
|
java
|
core/spring-boot-properties-migrator/src/main/java/org/springframework/boot/context/properties/migrator/PropertiesMigrationReporter.java
| 170
|
[
"fullId"
] |
ConfigurationMetadataProperty
| true
| 4
| 6.08
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
set
|
@CanIgnoreReturnValue
protected boolean set(@ParametricNullness V value) {
Object valueToSet = value == null ? NULL : value;
if (casValue(this, null, valueToSet)) {
complete(this, /* callInterruptTask= */ false);
return true;
}
return false;
}
|
Sets the result of this {@code Future} unless this {@code Future} has already been cancelled or
set (including {@linkplain #setFuture set asynchronously}). When a call to this method returns,
the {@code Future} is guaranteed to be {@linkplain #isDone done} <b>only if</b> the call was
accepted (in which case it returns {@code true}). If it returns {@code false}, the {@code
Future} may have previously been set asynchronously, in which case its result may not be known
yet. That result, though not yet known, cannot be overridden by a call to a {@code set*}
method, only by a call to {@link #cancel}.
<p>Beware of completing a future while holding a lock. Its listeners may do slow work or
acquire other locks, risking deadlocks.
@param value the value to be used as the result
@return true if the attempt was accepted, completing the {@code Future}
|
java
|
android/guava/src/com/google/common/util/concurrent/AbstractFuture.java
| 487
|
[
"value"
] | true
| 3
| 8.08
|
google/guava
| 51,352
|
javadoc
| false
|
|
codegen_block_ptr
|
def codegen_block_ptr(
self,
name: str,
var: str,
indexing: Union[BlockPtrOptions, TensorDescriptorOptions],
other="",
) -> tuple[str, str]:
"""Generate a block pointer or tensor descriptor for Triton kernel operations.
This method creates either a block pointer (for regular Triton operations) or
a tensor descriptor (for TMA operations) based on the indexing type. It handles
caching and reuse of descriptors for performance optimization.
Args:
name: The name of the buffer/tensor being accessed
var: The variable name for the pointer
indexing: Block pointer options or tensor descriptor options containing
indexing information and boundary check settings
other: Additional parameters string (e.g., padding options)
Returns:
A tuple containing:
- block_descriptor: The generated block pointer or tensor descriptor variable name
- other: Modified additional parameters string with boundary check options
"""
check = indexing.boundary_check()
if isinstance(indexing, TensorDescriptorOptions):
if check and other:
# The TMA API currently does not support padding values
# but the default is zero
assert other == ", other=0.0"
other = ""
else:
if not check:
# workaround https://github.com/triton-lang/triton/issues/2813
other = ""
elif other:
assert other == ", other=0.0"
other = f", boundary_check={check!r}, padding_option='zero'"
else:
other = f", boundary_check={check!r}"
if (
self.inside_reduction
and self.range_trees[-1].is_loop
and indexing.has_rindex()
) or indexing.can_lift:
if indexing.can_lift and var in self.prologue_cache:
# Check for epilogue subtiling to reuse the same
# tensor descriptor.
block_descriptor = self.prologue_cache[var]
else:
block_ptr_line = indexing.format(var, roffset=False)
block_var = self.cse.try_get(block_ptr_line)
# Early return if block descriptor already exists
if block_var:
return str(block_var), other
block_descriptor_id = next(self.block_ptr_id)
if isinstance(indexing, BlockPtrOptions):
block_descriptor = f"block_ptr{block_descriptor_id}"
else:
block_descriptor = f"tma_descriptor{block_descriptor_id}"
named_var = self.cse.namedvar(
block_descriptor, dtype=torch.uint64, shape=[]
)
self.cse.put(block_ptr_line, named_var)
line_body = DeferredLine(name, f"{block_descriptor} = {block_ptr_line}")
if indexing.can_lift:
self.prologue.writeline(line_body)
# Cache the descriptor for epilogue subtiling
self.prologue_cache[var] = block_descriptor
else:
self.body.writeline(line_body)
if isinstance(indexing, BlockPtrOptions):
# Store for later use. If the buffer is removed the below advancements
# are no longer necessary
self.block_ptr_to_buffer[block_descriptor] = name
# Generate block pointer advancements, for later use.
for symt in TritonSymbols.reduction_types:
advance_offsets = indexing.advance_roffset(symt)
# Ignore identity advancements.
if all(
V.graph.sizevars.statically_known_equals(
offset, sympy.Integer(0)
)
for offset in advance_offsets
):
continue
advancements = self.pointer_advancements[symt]
assert block_descriptor not in advancements, (
f"duplicate advancement for pointer '{block_descriptor}' at type '{symt}'"
)
advancements[block_descriptor] = advance_offsets
else:
block_descriptor = indexing.format(var)
return block_descriptor, other
|
Generate a block pointer or tensor descriptor for Triton kernel operations.
This method creates either a block pointer (for regular Triton operations) or
a tensor descriptor (for TMA operations) based on the indexing type. It handles
caching and reuse of descriptors for performance optimization.
Args:
name: The name of the buffer/tensor being accessed
var: The variable name for the pointer
indexing: Block pointer options or tensor descriptor options containing
indexing information and boundary check settings
other: Additional parameters string (e.g., padding options)
Returns:
A tuple containing:
- block_descriptor: The generated block pointer or tensor descriptor variable name
- other: Modified additional parameters string with boundary check options
|
python
|
torch/_inductor/codegen/triton.py
| 3,139
|
[
"self",
"name",
"var",
"indexing",
"other"
] |
tuple[str, str]
| true
| 24
| 6.32
|
pytorch/pytorch
| 96,034
|
google
| false
|
requireNonEmpty
|
public static <T> T requireNonEmpty(final T obj) {
return requireNonEmpty(obj, "object");
}
|
Checks that the specified object reference is not {@code null} or empty per {@link #isEmpty(Object)}. Use this
method for validation, for example:
<pre>
public Foo(Bar bar) {
this.bar = Objects.requireNonEmpty(bar);
}
</pre>
@param <T> the type of the reference.
@param obj the object reference to check for nullity.
@return {@code obj} if not {@code null}.
@throws NullPointerException if {@code obj} is {@code null}.
@throws IllegalArgumentException if {@code obj} is empty per {@link #isEmpty(Object)}.
@see #isEmpty(Object)
@since 3.12.0
|
java
|
src/main/java/org/apache/commons/lang3/ObjectUtils.java
| 1,186
|
[
"obj"
] |
T
| true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
get_type_hint_captures
|
def get_type_hint_captures(fn):
"""
Get a dictionary containing type resolution mappings necessary to resolve types
for the literal annotations on 'fn'. These are not considered to be closed-over by fn
and must be obtained separately (e.g. using this function).
Args:
fn: A callable.
Returns:
A Dict[str, Any] containing a mapping from the literal annotations used on
fn to the Python objects they refer to.
"""
# First, try to get the source of the function. We'll need to parse it to find the actual string names
# that were used to annotate the types, since inspect.signature() will only return the class object that
# the annotation refers to, not the string name. If we can't get the source, simply return an empty dict.
# This may happen in cases where the function is synthesized dynamically at runtime.
src = loader.get_source(fn)
if src is None:
try:
src = inspect.getsource(fn)
except OSError as e:
raise OSError(
f"Failed to get source for {fn} using inspect.getsource"
) from e
# Gather a dictionary of parameter name -> type, skipping any parameters whose annotated
# types are strings. These are only understood by TorchScript in the context of a type annotation
# that refers to a class in its own definition, but trying to include a mapping for this in the result
# function would cause infinite recursion because the class is currently being compiled.
# In addition, there is logic in ScriptTypeParser to handle this.
signature = inspect.signature(fn)
name_to_type = {
name: parameter.annotation
for name, parameter in signature.parameters.items()
if parameter.annotation is not inspect.Parameter.empty
and not isinstance(parameter.annotation, str)
}
# Then, get the literal type annotations from the function declaration
# by source inspection. This accounts for the case in which aliases are used
# to annotate the arguments (e.g device_t = torch.device, and then d: device_t).
# frontend.py cannot be used here because it includes _jit_internal, so use ast instead.
a = ast.parse(textwrap.dedent(src))
if len(a.body) != 1 or not isinstance(a.body[0], ast.FunctionDef):
raise RuntimeError(f"Expected {fn} to be a function")
f = a.body[0]
# Prepare a dictionary of source annotation -> type, which will be the final result of this function,
# by using the parsed AST (f) to reconstruct source annotations as strings for each parameter and mapping
# them to the type object corresponding to the annotation via name_to_type using the parameter name.
annotation_to_type = {}
for arg in f.args.args:
# Get the source type annotation string for this argument if possible.
arg_annotation_str = (
get_annotation_str(arg.annotation) if arg.annotation else None
)
# If the argument has no annotation or get_annotation_str cannot convert it to a string,
# arg_annotation_str will be None. Skip this arg; ScriptTypeParser will probably handle
# this in the latter case.
if arg_annotation_str is None:
continue
# Insert {arg_annotation_str: type} into annotation_to_type if possible. One reason arg_name may not
# be present in name_to_type is that the annotation itself is a string and not a type object
    # (common for self-referential annotations in classes). Once again, let ScriptTypeParser handle this.
arg_name = arg.arg
if arg_name in name_to_type:
annotation_to_type[arg_annotation_str] = name_to_type[arg_name]
# If there is a valid return annotation, include it in annotation_to_type. As with argument annotations,
# the literal annotation has to be convertible to a string by get_annotation_str, and the actual type
# of the annotation cannot be a string.
literal_return_annotation = get_annotation_str(f.returns)
valid_literal_annotation = literal_return_annotation is not None
return_annotation = signature.return_annotation
valid_return_annotation_type = (
return_annotation is not inspect.Parameter.empty
and not isinstance(return_annotation, str)
)
if valid_literal_annotation and valid_return_annotation_type:
annotation_to_type[literal_return_annotation] = return_annotation
return annotation_to_type
|
Get a dictionary containing type resolution mappings necessary to resolve types
for the literal annotations on 'fn'. These are not considered to be closed-over by fn
and must be obtained separately (e.g. using this function).
Args:
fn: A callable.
Returns:
A Dict[str, Any] containing a mapping from the literal annotations used on
fn to the Python objects they refer to.
|
python
|
torch/_jit_internal.py
| 486
|
[
"fn"
] | false
| 12
| 7.2
|
pytorch/pytorch
| 96,034
|
google
| false
|
|
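The core technique above, recovering the literal annotation strings by parsing the function's source with `ast` and pairing them with `inspect.signature`, can be sketched with the standard library alone. `literal_annotations` and its `src` parameter are illustrative names for this sketch, not the TorchScript API:

```python
import ast
import inspect
import textwrap

def literal_annotations(fn, src=None):
    """Map each parameter's literal annotation string (as written in the
    source) to the object it resolved to at definition time."""
    if src is None:
        src = inspect.getsource(fn)  # may raise OSError for synthesized functions
    sig = inspect.signature(fn)
    func = ast.parse(textwrap.dedent(src)).body[0]
    if not isinstance(func, ast.FunctionDef):
        raise RuntimeError(f"Expected {fn} to be a function")
    out = {}
    for arg in func.args.args:
        if arg.annotation is None:
            continue
        literal = ast.unparse(arg.annotation)  # keeps the alias, e.g. "num_t"
        resolved = sig.parameters[arg.arg].annotation
        # Skip string annotations, as the original does for self-references.
        if resolved is not inspect.Parameter.empty and not isinstance(resolved, str):
            out[literal] = resolved
    return out

num_t = int  # an alias: the literal annotation differs from the resolved type

def f(x: num_t, y: str) -> None:
    pass
```

The key observation is that `inspect.signature` only sees the resolved object (`int`), while the AST preserves the spelling the author used (`num_t`); the mapping between the two is exactly what a source-level compiler like TorchScript needs.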
getTokenLocation
|
@Override
public XContentLocation getTokenLocation() {
JsonLocation loc = parser.getTokenLocation();
if (loc == null) {
return null;
}
return new XContentLocation(loc.getLineNr(), loc.getColumnNr());
}
|
Return the location of the current token, translating the underlying Jackson
{@link JsonLocation} into an {@link XContentLocation}, or {@code null} if unavailable.
|
java
|
libs/x-content/impl/src/main/java/org/elasticsearch/xcontent/provider/json/JsonXContentParser.java
| 308
|
[] |
XContentLocation
| true
| 2
| 6.08
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
of
|
public static <L, M, R> Triple<L, M, R> of(final L left, final M middle, final R right) {
return ImmutableTriple.of(left, middle, right);
}
|
Obtains an immutable triple of three objects inferring the generic types.
@param <L> the left element type.
@param <M> the middle element type.
@param <R> the right element type.
@param left the left element, may be null.
@param middle the middle element, may be null.
@param right the right element, may be null.
@return an immutable triple formed from the three parameters, not null.
|
java
|
src/main/java/org/apache/commons/lang3/tuple/Triple.java
| 79
|
[
"left",
"middle",
"right"
] | true
| 1
| 6.96
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
forUriString
|
public static InetAddress forUriString(String hostAddr) {
InetAddress addr = forUriStringOrNull(hostAddr, /* parseScope= */ true);
if (addr == null) {
throw formatIllegalArgumentException("Not a valid URI IP literal: '%s'", hostAddr);
}
return addr;
}
|
Returns an InetAddress representing the literal IPv4 or IPv6 host portion of a URL, encoded in
the format specified by RFC 3986 section 3.2.2.
<p>This method is similar to {@link InetAddresses#forString(String)}, however, it requires that
IPv6 addresses are surrounded by square brackets.
<p>This method is the inverse of {@link InetAddresses#toUriString(java.net.InetAddress)}.
<p>This method accepts non-ASCII digits, for example {@code "１９２.１６８.０.１"} (those are fullwidth
characters). That is consistent with {@link InetAddress}, but not with various RFCs. If you
want to accept ASCII digits only, you can use something like {@code
CharMatcher.ascii().matchesAllOf(ipString)}.
@param hostAddr an RFC 3986 section 3.2.2 encoded IPv4 or IPv6 address
@return an InetAddress representing the address in {@code hostAddr}
@throws IllegalArgumentException if {@code hostAddr} is not a valid IPv4 address, or IPv6
address surrounded by square brackets, or if the address has a scope ID that fails
validation against the interfaces on the machine (as required by Java's {@link
InetAddress})
|
java
|
android/guava/src/com/google/common/net/InetAddresses.java
| 608
|
[
"hostAddr"
] |
InetAddress
| true
| 2
| 7.6
|
google/guava
| 51,352
|
javadoc
| false
|
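A rough Python analogue using the standard `ipaddress` module. Unlike the Guava version it rejects non-ASCII digits, does not handle scope IDs, and raises `ValueError` rather than `IllegalArgumentException`:

```python
import ipaddress

def for_uri_string(host):
    """Parse the host portion of a URI as an IP literal: a bare IPv4
    address, or an IPv6 address in square brackets (RFC 3986 s. 3.2.2)."""
    if host.startswith("[") and host.endswith("]"):
        addr = ipaddress.ip_address(host[1:-1])
        if addr.version != 6:
            raise ValueError(f"Not a valid URI IP literal: {host!r}")
        return addr
    addr = ipaddress.ip_address(host)
    if addr.version != 4:
        # A bare IPv6 literal is not valid in a URI; it must be bracketed.
        raise ValueError(f"Not a valid URI IP literal: {host!r}")
    return addr
```

The bracket requirement is the whole point of the URI variant: a bare `::1` is ambiguous next to a port separator, so RFC 3986 mandates `[::1]`.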
normalizeUpperToObject
|
private static Type[] normalizeUpperToObject(final Type[] bounds) {
return bounds.length == 0 ? new Type[] { Object.class } : normalizeUpperBounds(bounds);
}
|
Delegates to {@link #normalizeUpperBounds(Type[])} unless {@code bounds} is empty in which case return an array with the element {@code Object.class}.
@param bounds bounds an array of types representing the upper bounds of either {@link WildcardType} or {@link TypeVariable}, not {@code null}.
@return result from {@link #normalizeUpperBounds(Type[])} unless {@code bounds} is empty in which case return an array with the element
{@code Object.class}.
|
java
|
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
| 1,375
|
[
"bounds"
] | true
| 2
| 7.52
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
splitPreserveAllTokens
|
public static String[] splitPreserveAllTokens(final String str) {
return splitWorker(str, null, -1, true);
}
|
Splits the provided text into an array, using whitespace as the separator, preserving all tokens, including empty tokens created by adjacent separators.
This is an alternative to using StringTokenizer. Whitespace is defined by {@link Character#isWhitespace(char)}.
<p>
The separator is not included in the returned String array. Adjacent separators are treated as separators for empty tokens. For more control over the
split use the StrTokenizer class.
</p>
<p>
A {@code null} input String returns {@code null}.
</p>
<pre>
StringUtils.splitPreserveAllTokens(null) = null
StringUtils.splitPreserveAllTokens("") = []
StringUtils.splitPreserveAllTokens("abc def") = ["abc", "def"]
StringUtils.splitPreserveAllTokens("abc  def") = ["abc", "", "def"]
StringUtils.splitPreserveAllTokens(" abc ") = ["", "abc", ""]
</pre>
@param str the String to parse, may be {@code null}.
@return an array of parsed Strings, {@code null} if null String input.
@since 2.1
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 7,435
|
[
"str"
] | true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
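The documented semantics (single whitespace characters as separators, empty tokens preserved, `null` in gives `null` out) can be mirrored in Python with `re.split`. This is a sketch of the contract, not the commons-lang implementation:

```python
import re

def split_preserve_all_tokens(s):
    """Split on each whitespace character, keeping the empty tokens that
    adjacent separators produce (commons-lang semantics): None -> None,
    "" -> [], " abc " -> ["", "abc", ""]."""
    if s is None:
        return None
    if s == "":
        return []
    # re.split with a single-character pattern keeps empty strings between
    # adjacent separators, unlike str.split() with no arguments.
    return re.split(r"\s", s)
```

The contrast with plain `"a  b".split()` (which collapses runs of whitespace) is exactly the "preserve all tokens" behavior the javadoc describes.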
definePackage
|
@Override
protected Package definePackage(String name, Manifest man, URL url) throws IllegalArgumentException {
return (!this.exploded) ? super.definePackage(name, man, url) : definePackageForExploded(name, man, url);
}
|
Define a {@link Package} for the given name, delegating to the standard implementation
unless the underlying archive is exploded.
@param name the package name
@param man the manifest containing package attributes
@param url the URL of the code source
@throws IllegalArgumentException if the package name duplicates an existing package
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/launch/LaunchedClassLoader.java
| 114
|
[
"name",
"man",
"url"
] |
Package
| true
| 2
| 6.48
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
clean_nn_module_stack_and_source_fn
|
def clean_nn_module_stack_and_source_fn(
graph_module: torch.fx.GraphModule, is_inline_builtin=False
) -> torch.fx.GraphModule:
"""
Clean up nn_module_stack metadata by removing export_root references.
Removes the _export_root module references from nn_module_stack metadata
in graph nodes, which are artifacts from the export process. Fixes two patterns:
1. Keys: Removes "__export_root_" and "__modules['_export_root']_" prefixes
- Normal case: "L__self____export_root_child" -> "L__self__child"
- inline_builtin case: Uses numeric ID strings like "140468831433840"
2. Values: Removes "._export_root" and "._modules['_export_root']" from child names
e.g., "L['self']._export_root.child" -> "L['self'].child"
e.g., "L['self']._modules['_export_root'].child" -> "L['self'].child"
Also removes the root export entry "L__self____export_root" entirely.
Args:
graph_module: The GraphModule to clean up
is_inline_builtin: If True, keys are numeric ID strings and self references
(L['self']) are filtered out
Returns:
The cleaned GraphModule (modified in-place)
"""
def _process_nn_module_stack(nn_module_stack):
if "L__self____export_root" in nn_module_stack:
del nn_module_stack["L__self____export_root"]
# Clean up remaining entries
cleaned_stack = {}
for key, (child_name, child_class) in nn_module_stack.items():
# Clean key by removing export_root patterns
clean_key = clean_export_root_string(key)
# Clean child_name by removing export_root patterns
clean_name = clean_export_root_string(child_name)
# Skip self reference for inline builtin case
if is_inline_builtin and clean_name == "L['self']":
continue
cleaned_stack[clean_key] = (clean_name, child_class)
return cleaned_stack
def _process_source_fn(source_fn_stack):
cleaned_stack = []
for item in source_fn_stack:
if isinstance(item, tuple) and len(item) == 2:
name, cls = item
if isinstance(name, str):
clean_name = clean_export_root_string(name)
cleaned_stack.append((clean_name, cls))
else:
cleaned_stack.append(item)
else:
cleaned_stack.append(item)
return cleaned_stack
for node in graph_module.graph.nodes:
if "nn_module_stack" in node.meta:
node.meta["nn_module_stack"] = _process_nn_module_stack(
node.meta["nn_module_stack"].copy()
)
source_fn_stack = node.meta.get("source_fn_stack", None)
if source_fn_stack:
node.meta["source_fn_stack"] = _process_source_fn(source_fn_stack.copy())
if "dynamo_flat_name_to_original_fqn" in graph_module.meta:
# Clean up flat name to original fqn mapping
clean_name_to_original_fqn = {}
for flat_name, original_fqn in graph_module.meta[
"dynamo_flat_name_to_original_fqn"
].items():
clean_name_to_original_fqn[clean_export_root_string(flat_name)] = (
clean_export_root_string(original_fqn)
)
graph_module.meta["dynamo_flat_name_to_original_fqn"] = (
clean_name_to_original_fqn
)
return graph_module
|
Clean up nn_module_stack metadata by removing export_root references.
Removes the _export_root module references from nn_module_stack metadata
in graph nodes, which are artifacts from the export process. Fixes two patterns:
1. Keys: Removes "__export_root_" and "__modules['_export_root']_" prefixes
- Normal case: "L__self____export_root_child" -> "L__self__child"
- inline_builtin case: Uses numeric ID strings like "140468831433840"
2. Values: Removes "._export_root" and "._modules['_export_root']" from child names
e.g., "L['self']._export_root.child" -> "L['self'].child"
e.g., "L['self']._modules['_export_root'].child" -> "L['self'].child"
Also removes the root export entry "L__self____export_root" entirely.
Args:
graph_module: The GraphModule to clean up
is_inline_builtin: If True, keys are numeric ID strings and self references
(L['self']) are filtered out
Returns:
The cleaned GraphModule (modified in-place)
|
python
|
torch/_dynamo/functional_export.py
| 78
|
[
"graph_module",
"is_inline_builtin"
] |
torch.fx.GraphModule
| true
| 16
| 6.32
|
pytorch/pytorch
| 96,034
|
google
| false
|
set_fill_value
|
def set_fill_value(a, fill_value):
"""
Set the filling value of a, if a is a masked array.
This function changes the fill value of the masked array `a` in place.
If `a` is not a masked array, the function returns silently, without
doing anything.
Parameters
----------
a : array_like
Input array.
fill_value : dtype
Filling value. A consistency test is performed to make sure
the value is compatible with the dtype of `a`.
Returns
-------
None
Nothing returned by this function.
See Also
--------
maximum_fill_value : Return the default fill value for a dtype.
MaskedArray.fill_value : Return current fill value.
MaskedArray.set_fill_value : Equivalent method.
Examples
--------
>>> import numpy as np
>>> import numpy.ma as ma
>>> a = np.arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> a = ma.masked_where(a < 3, a)
>>> a
masked_array(data=[--, --, --, 3, 4],
mask=[ True, True, True, False, False],
fill_value=999999)
>>> ma.set_fill_value(a, -999)
>>> a
masked_array(data=[--, --, --, 3, 4],
mask=[ True, True, True, False, False],
fill_value=-999)
Nothing happens if `a` is not a masked array.
>>> a = list(range(5))
>>> a
[0, 1, 2, 3, 4]
>>> ma.set_fill_value(a, 100)
>>> a
[0, 1, 2, 3, 4]
>>> a = np.arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> ma.set_fill_value(a, 100)
>>> a
array([0, 1, 2, 3, 4])
"""
if isinstance(a, MaskedArray):
a.set_fill_value(fill_value)
|
Set the filling value of a, if a is a masked array.
This function changes the fill value of the masked array `a` in place.
If `a` is not a masked array, the function returns silently, without
doing anything.
Parameters
----------
a : array_like
Input array.
fill_value : dtype
Filling value. A consistency test is performed to make sure
the value is compatible with the dtype of `a`.
Returns
-------
None
Nothing returned by this function.
See Also
--------
maximum_fill_value : Return the default fill value for a dtype.
MaskedArray.fill_value : Return current fill value.
MaskedArray.set_fill_value : Equivalent method.
Examples
--------
>>> import numpy as np
>>> import numpy.ma as ma
>>> a = np.arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> a = ma.masked_where(a < 3, a)
>>> a
masked_array(data=[--, --, --, 3, 4],
mask=[ True, True, True, False, False],
fill_value=999999)
>>> ma.set_fill_value(a, -999)
>>> a
masked_array(data=[--, --, --, 3, 4],
mask=[ True, True, True, False, False],
fill_value=-999)
Nothing happens if `a` is not a masked array.
>>> a = list(range(5))
>>> a
[0, 1, 2, 3, 4]
>>> ma.set_fill_value(a, 100)
>>> a
[0, 1, 2, 3, 4]
>>> a = np.arange(5)
>>> a
array([0, 1, 2, 3, 4])
>>> ma.set_fill_value(a, 100)
>>> a
array([0, 1, 2, 3, 4])
|
python
|
numpy/ma/core.py
| 508
|
[
"a",
"fill_value"
] | false
| 2
| 7.6
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
value
|
public XContentBuilder value(BigInteger value) throws IOException {
if (value == null) {
return nullValue();
}
generator.writeNumber(value);
return this;
}
|
Write the given {@link BigInteger} as a number, or write a null value if
{@code value} is {@code null}.
|
java
|
libs/x-content/src/main/java/org/elasticsearch/xcontent/XContentBuilder.java
| 686
|
[
"value"
] |
XContentBuilder
| true
| 2
| 7.04
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
reportException
|
boolean reportException(Throwable failure);
|
Report a startup failure to the user.
@param failure the source failure
@return {@code true} if the failure was reported or {@code false} if default
reporting should occur.
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/SpringBootExceptionReporter.java
| 42
|
[
"failure"
] | true
| 1
| 6.8
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
refreshBeanFactory
|
@Override
protected final void refreshBeanFactory() throws IllegalStateException {
if (!this.refreshed.compareAndSet(false, true)) {
throw new IllegalStateException(
"GenericApplicationContext does not support multiple refresh attempts: just call 'refresh' once");
}
this.beanFactory.setSerializationId(getId());
}
|
Do nothing: We hold a single internal BeanFactory and rely on callers
to register beans through our public methods (or the BeanFactory's).
@see #registerBeanDefinition
|
java
|
spring-context/src/main/java/org/springframework/context/support/GenericApplicationContext.java
| 289
|
[] |
void
| true
| 2
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
getNamespaceMemberName
|
function getNamespaceMemberName(ns: Identifier, name: Identifier, allowComments?: boolean, allowSourceMaps?: boolean): PropertyAccessExpression {
const qualifiedName = createPropertyAccessExpression(ns, nodeIsSynthesized(name) ? name : cloneNode(name));
setTextRange(qualifiedName, name);
let emitFlags: EmitFlags = 0;
if (!allowSourceMaps) emitFlags |= EmitFlags.NoSourceMap;
if (!allowComments) emitFlags |= EmitFlags.NoComments;
if (emitFlags) setEmitFlags(qualifiedName, emitFlags);
return qualifiedName;
}
|
Gets a namespace-qualified name for use in expressions.
@param ns The namespace identifier.
@param name The name.
@param allowComments A value indicating whether comments may be emitted for the name.
@param allowSourceMaps A value indicating whether source maps may be emitted for the name.
|
typescript
|
src/compiler/factory/nodeFactory.ts
| 6,836
|
[
"ns",
"name",
"allowComments?",
"allowSourceMaps?"
] | true
| 5
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
ofCustom
|
public static LevelConfiguration ofCustom(String name) {
Assert.hasText(name, "'name' must not be empty");
return new LevelConfiguration(name, null);
}
|
Create a new {@link LevelConfiguration} instance for a custom level name.
@param name the log level name
@return a new {@link LevelConfiguration} instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/LoggerConfiguration.java
| 246
|
[
"name"
] |
LevelConfiguration
| true
| 1
| 6.48
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
completeEventsExceptionallyOnClose
|
private long completeEventsExceptionallyOnClose(Collection<?> events) {
long count = 0;
for (Object o : events) {
if (!(o instanceof CompletableEvent))
continue;
CompletableEvent<?> event = (CompletableEvent<?>) o;
if (event.future().isDone())
continue;
count++;
TimeoutException error = new TimeoutException(String.format("%s could not be completed before the consumer closed", event.getClass().getSimpleName()));
if (event.future().completeExceptionally(error)) {
log.debug("Event {} completed exceptionally since the consumer is closing", event);
} else {
log.trace("Event {} not completed exceptionally since it was completed prior to the consumer closing", event);
}
}
return count;
}
|
For all the {@link CompletableEvent}s in the collection, if they're not already complete, invoke
{@link CompletableFuture#completeExceptionally(Throwable)}.
@param events Collection of objects, assumed to be subclasses of {@link ApplicationEvent} or
{@link BackgroundEvent}, but will only perform completion for any
unfinished {@link CompletableEvent}s
@return Number of events closed
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/events/CompletableEventReaper.java
| 186
|
[
"events"
] | true
| 4
| 7.28
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
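The reaper pattern in the record above — on shutdown, fail every still-pending completable event with a timeout — can be sketched with Python's `concurrent.futures`. The event dicts and function name here are illustrative stand-ins, not Kafka's API:

```python
from concurrent.futures import Future

def complete_exceptionally_on_close(events):
    """Fail every not-yet-done future with a TimeoutError; return how many."""
    count = 0
    for event in events:
        future = event["future"]
        if future.done():
            continue  # completed before close; leave its result alone
        count += 1
        future.set_exception(TimeoutError(
            f"{event['name']} could not be completed before the consumer closed"))
    return count

done = Future()
done.set_result(42)
pending = Future()
events = [{"name": "FetchEvent", "future": done},
          {"name": "CommitEvent", "future": pending}]
closed = complete_exceptionally_on_close(events)
```

As in the Java original, already-completed futures are skipped so a late close never clobbers a real result.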
delete_old_records
|
def delete_old_records(
cls,
task_id: str,
dag_id: str,
num_to_keep: int = conf.getint("core", "max_num_rendered_ti_fields_per_task", fallback=0),
session: Session = NEW_SESSION,
) -> None:
"""
Keep only Last X (num_to_keep) number of records for a task by deleting others.
In the case of data for a mapped task, either all of the rows or none of the rows will be deleted, so
we don't end up with partial data for a set of mapped Task Instances left in the database.
:param task_id: Task ID
:param dag_id: Dag ID
:param num_to_keep: Number of Records to keep
:param session: SqlAlchemy Session
"""
if num_to_keep <= 0:
return
from airflow.models.dagrun import DagRun
tis_to_keep_query = (
select(cls.dag_id, cls.task_id, cls.run_id, DagRun.logical_date)
.where(cls.dag_id == dag_id, cls.task_id == task_id)
.join(cls.dag_run)
.distinct()
.order_by(DagRun.logical_date.desc())
.limit(num_to_keep)
)
cls._do_delete_old_records(
dag_id=dag_id,
task_id=task_id,
ti_clause=tis_to_keep_query.subquery(),
session=session,
)
session.flush()
|
Keep only Last X (num_to_keep) number of records for a task by deleting others.
In the case of data for a mapped task, either all of the rows or none of the rows will be deleted, so
we don't end up with partial data for a set of mapped Task Instances left in the database.
:param task_id: Task ID
:param dag_id: Dag ID
:param num_to_keep: Number of Records to keep
:param session: SqlAlchemy Session
|
python
|
airflow-core/src/airflow/models/renderedtifields.py
| 257
|
[
"cls",
"task_id",
"dag_id",
"num_to_keep",
"session"
] |
None
| true
| 2
| 6.72
|
apache/airflow
| 43,597
|
sphinx
| false
|
unless
|
public boolean unless(String unlessExpression, AnnotatedElementKey methodKey, EvaluationContext evalContext) {
return (Boolean.TRUE.equals(getExpression(this.unlessCache, methodKey, unlessExpression).getValue(
evalContext, Boolean.class)));
}
|
Evaluate the given 'unless' expression against the supplied context.
@param unlessExpression the expression to evaluate
@param methodKey the method key used to cache the parsed expression
@param evalContext the evaluation context
@return {@code true} if the expression evaluated to {@code true}
|
java
|
spring-context/src/main/java/org/springframework/cache/interceptor/CacheOperationExpressionEvaluator.java
| 114
|
[
"unlessExpression",
"methodKey",
"evalContext"
] | true
| 1
| 6.48
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
value
|
public XContentBuilder value(Float value) throws IOException {
return (value == null) ? nullValue() : value(value.floatValue());
}
|
Writes the given {@link Float} value, or a null value if it is {@code null}.
@param value the value to write
@return this builder
|
java
|
libs/x-content/src/main/java/org/elasticsearch/xcontent/XContentBuilder.java
| 533
|
[
"value"
] |
XContentBuilder
| true
| 2
| 6.96
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
asList
|
public static List<Byte> asList(byte... backingArray) {
if (backingArray.length == 0) {
return Collections.emptyList();
}
return new ByteArrayAsList(backingArray);
}
|
Returns a fixed-size list backed by the specified array, similar to {@link
Arrays#asList(Object[])}. The list supports {@link List#set(int, Object)}, but any attempt to
set a value to {@code null} will result in a {@link NullPointerException}.
<p>The returned list maintains the values, but not the identities, of {@code Byte} objects
written to or read from it. For example, whether {@code list.get(0) == list.get(0)} is true for
the returned list is unspecified.
<p>The returned list is serializable.
@param backingArray the array to back the list
@return a list view of the array
|
java
|
android/guava/src/com/google/common/primitives/Bytes.java
| 249
|
[
    "backingArray"
] | true
| 2
| 8.08
|
google/guava
| 51,352
|
javadoc
| false
|
|
getDependentFiles
|
public Collection<Path> getDependentFiles() {
Set<Path> paths = new HashSet<>(keyConfig.getDependentFiles());
paths.addAll(trustConfig.getDependentFiles());
return paths;
}
|
@return A collection of files that are used by this SSL configuration. If the contents of these files change, then any
subsequent call to {@link #createSslContext()} (or similar methods) may create a context with different behaviour.
It is recommended that these files be monitored for changes, and a new ssl-context is created whenever any of the files are modified.
|
java
|
libs/ssl-config/src/main/java/org/elasticsearch/common/ssl/SslConfiguration.java
| 107
|
[] | true
| 1
| 6.72
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
resolveEmbeddedValue
|
protected @Nullable String resolveEmbeddedValue(String value) {
return (this.embeddedValueResolver != null ? this.embeddedValueResolver.resolveStringValue(value) : value);
}
|
Resolve the given embedded value through this instance's {@link StringValueResolver}.
@param value the value to resolve
@return the resolved value, or always the original value if no resolver is available
@see #setEmbeddedValueResolver
|
java
|
spring-context/src/main/java/org/springframework/context/support/EmbeddedValueResolutionSupport.java
| 47
|
[
"value"
] |
String
| true
| 2
| 7.2
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
enable_memray_trace
|
def enable_memray_trace(component: MemrayTraceComponents) -> Callable[[Callable[PS, RT]], Callable[PS, RT]]:
"""
Conditionally track memory using memray based on configuration.
Args:
component: Enum value of the component for configuration lookup
"""
def decorator(func: Callable[PS, RT]) -> Callable[PS, RT]:
@wraps(func)
def wrapper(*args: PS.args, **kwargs: PS.kwargs) -> RT: # type: ignore[return]
_memray_trace_components = conf.getenumlist(
"profiling", "memray_trace_components", MemrayTraceComponents
)
if component not in _memray_trace_components:
return func(*args, **kwargs)
try:
import memray
profile_path = f"{AIRFLOW_HOME}/{component.value}_memory.bin"
with memray.Tracker(
profile_path,
):
log.info("Memray tracing enabled for %s. Output: %s", component.value, profile_path)
return func(*args, **kwargs)
except ImportError as error:
# Silently fall back to running without tracking
log.warning(
"ImportError memray.Tracker: %s in %s, please check the memray is installed",
error.msg,
component.value,
)
return func(*args, **kwargs)
except Exception as exception:
log.warning("Fail to apply memray.Tracker in %s, error: %s", component.value, exception)
return func(*args, **kwargs)
return wrapper
return decorator
|
Conditionally track memory using memray based on configuration.
Args:
component: Enum value of the component for configuration lookup
|
python
|
airflow-core/src/airflow/utils/memray_utils.py
| 44
|
[
"component"
] |
Callable[[Callable[PS, RT]], Callable[PS, RT]]
| true
| 2
| 6.24
|
apache/airflow
| 43,597
|
google
| false
|
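The decorator in the record above illustrates a general pattern: wrap a function, but only activate the instrumentation when configuration enables it. A minimal stand-alone sketch, with a plain set standing in for Airflow's `conf` lookup (all names here are illustrative):

```python
from functools import wraps

ENABLED_COMPONENTS = {"scheduler"}  # stand-in for a configuration lookup

def trace_if_enabled(component):
    """Decorator factory: instrument the call only when `component` is enabled."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if component not in ENABLED_COMPONENTS:
                return func(*args, **kwargs)  # disabled: plain passthrough call
            wrapper.traced_calls += 1  # enabled: record that instrumentation ran
            return func(*args, **kwargs)
        wrapper.traced_calls = 0
        return wrapper
    return decorator

@trace_if_enabled("scheduler")
def schedule():
    return "scheduled"

@trace_if_enabled("triggerer")
def trigger():
    return "triggered"
```

Note that, as in the original, the enabled/disabled decision happens per call, so configuration changes take effect without re-decorating.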
cacheIfAbsent
|
boolean cacheIfAbsent(boolean useCaches, URL jarFileUrl, JarFile jarFile) {
if (!useCaches) {
return false;
}
return this.cache.putIfAbsent(jarFileUrl, jarFile);
}
|
Cache the given {@link JarFile} if caching can be used and there is no existing
entry.
@param useCaches if caches can be used
@param jarFileUrl the jar file URL
@param jarFile the jar file
@return {@code true} if that file was added to the cache
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/net/protocol/jar/UrlJarFiles.java
| 92
|
[
"useCaches",
"jarFileUrl",
"jarFile"
] | true
| 2
| 7.76
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
compareNodeCoreModuleSpecifiers
|
function compareNodeCoreModuleSpecifiers(a: string, b: string, importingFile: SourceFile | FutureSourceFile, program: Program): Comparison {
if (startsWith(a, "node:") && !startsWith(b, "node:")) return shouldUseUriStyleNodeCoreModules(importingFile, program) ? Comparison.LessThan : Comparison.GreaterThan;
if (startsWith(b, "node:") && !startsWith(a, "node:")) return shouldUseUriStyleNodeCoreModules(importingFile, program) ? Comparison.GreaterThan : Comparison.LessThan;
return Comparison.EqualTo;
}
|
@returns `Comparison.LessThan` if `a` is better than `b`.
|
typescript
|
src/services/codefixes/importFixes.ts
| 1,469
|
[
"a",
"b",
"importingFile",
"program"
] | true
| 7
| 6.24
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
_get_trimming_maximums
|
def _get_trimming_maximums(
rn,
cn,
max_elements,
max_rows=None,
max_cols=None,
scaling_factor: float = 0.8,
) -> tuple[int, int]:
"""
Recursively reduce the number of rows and columns to satisfy max elements.
Parameters
----------
rn, cn : int
The number of input rows / columns
max_elements : int
The number of allowable elements
max_rows, max_cols : int, optional
Directly specify an initial maximum rows or columns before compression.
scaling_factor : float
Factor at which to reduce the number of rows / columns to fit.
Returns
-------
rn, cn : tuple
New rn and cn values that satisfy the max_elements constraint
"""
def scale_down(rn, cn):
if cn >= rn:
return rn, int(cn * scaling_factor)
else:
return int(rn * scaling_factor), cn
if max_rows:
rn = max_rows if rn > max_rows else rn
if max_cols:
cn = max_cols if cn > max_cols else cn
while rn * cn > max_elements:
rn, cn = scale_down(rn, cn)
return rn, cn
|
Recursively reduce the number of rows and columns to satisfy max elements.
Parameters
----------
rn, cn : int
The number of input rows / columns
max_elements : int
The number of allowable elements
max_rows, max_cols : int, optional
Directly specify an initial maximum rows or columns before compression.
scaling_factor : float
Factor at which to reduce the number of rows / columns to fit.
Returns
-------
rn, cn : tuple
New rn and cn values that satisfy the max_elements constraint
|
python
|
pandas/io/formats/style_render.py
| 1,738
|
[
"rn",
"cn",
"max_elements",
"max_rows",
"max_cols",
"scaling_factor"
] |
tuple[int, int]
| true
| 8
| 6.48
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
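The scale-down loop in `_get_trimming_maximums` above is easy to verify in isolation: iteratively shrink whichever dimension is larger by the scaling factor until the element budget is met. A stripped-down re-implementation (a sketch of the same loop, not pandas' code):

```python
def trim_maximums(rn, cn, max_elements, scaling_factor=0.8):
    """Shrink (rn, cn) until rn * cn <= max_elements, scaling the larger side."""
    while rn * cn > max_elements:
        if cn >= rn:
            cn = int(cn * scaling_factor)  # columns dominate: shrink columns
        else:
            rn = int(rn * scaling_factor)  # rows dominate: shrink rows
    return rn, cn

rows, cols = trim_maximums(1000, 1000, 250_000)
```

Because the two dimensions are reduced alternately, the result stays roughly proportional to the input aspect ratio rather than collapsing one axis.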
maybeAutoCommitSyncBeforeRebalance
|
public CompletableFuture<Void> maybeAutoCommitSyncBeforeRebalance(final long deadlineMs) {
if (!autoCommitEnabled()) {
return CompletableFuture.completedFuture(null);
}
CompletableFuture<Void> result = new CompletableFuture<>();
OffsetCommitRequestState requestState =
createOffsetCommitRequest(subscriptions.allConsumed(), deadlineMs);
autoCommitSyncBeforeRebalanceWithRetries(requestState, result);
return result;
}
|
Commit consumed offsets if auto-commit is enabled, regardless of the auto-commit interval.
This is used for committing offsets before rebalance. This will retry committing
the latest offsets until the request succeeds, fails with a fatal error, or the timeout
expires. Note that:
<ul>
<li>Considers {@link Errors#STALE_MEMBER_EPOCH} as a retriable error, and will retry it
including the member ID and latest member epoch received from the broker.</li>
<li>Considers {@link Errors#UNKNOWN_TOPIC_OR_PARTITION} as a fatal error, and will not
retry it although the error extends RetriableException. The reason is that if a topic
or partition is deleted, rebalance would not finish in time since the auto commit would keep retrying.</li>
</ul>
Also note that this will generate a commit request even if there is another one in-flight,
generated by the auto-commit on the interval logic, to ensure that the latest offsets are
committed before rebalance.
@return Future that will complete when the offsets are successfully committed. It will
complete exceptionally if the commit fails with a non-retriable error, or if the retry
timeout expires.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java
| 330
|
[
"deadlineMs"
] | true
| 2
| 6.56
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
CursorBase
|
CursorBase(CursorBase&&) = default;
|
Defaulted move constructor.
|
cpp
|
folly/io/Cursor.h
| 844
|
[] | true
| 2
| 6.64
|
facebook/folly
| 30,157
|
doxygen
| false
|
|
validateAddress
|
protected void validateAddress(InternetAddress address) throws AddressException {
if (isValidateAddresses()) {
address.validate();
}
}
|
Validate the given mail address.
Called by all of MimeMessageHelper's address setters and adders.
<p>The default implementation invokes {@link InternetAddress#validate()},
provided that address validation is activated for the helper instance.
@param address the address to validate
@throws AddressException if validation failed
@see #isValidateAddresses()
@see jakarta.mail.internet.InternetAddress#validate()
|
java
|
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
| 539
|
[
"address"
] |
void
| true
| 2
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
tigger_workflow
|
def tigger_workflow(workflow_name: str, repo: str, branch: str = "main", **kwargs):
"""
Trigger a GitHub Actions workflow using the `gh` CLI.
:param workflow_name: The name of the workflow to trigger.
:param repo: Workflow repository example: 'apache/airflow'
:param branch: The branch to run the workflow on.
:param kwargs: Additional parameters to pass to the workflow.
"""
command = ["gh", "workflow", "run", workflow_name, "--ref", branch, "--repo", repo]
# These are the input parameters to workflow
for key, value_raw in kwargs.items():
# GH cli requires bool inputs to be converted to string format
if isinstance(value_raw, bool):
value = "true" if value_raw else "false"
else:
value = value_raw
command.extend(["-f", f"{key}={value}"])
get_console().print(f"[blue]Running command: {' '.join(command)}[/blue]")
result = run_command(command, capture_output=True, check=False)
if result.returncode != 0:
get_console().print(f"[red]Error running workflow: {result.stderr}[/red]")
sys.exit(1)
# Wait for a few seconds to start the workflow run
time.sleep(5)
|
Trigger a GitHub Actions workflow using the `gh` CLI.
:param workflow_name: The name of the workflow to trigger.
:param repo: Workflow repository example: 'apache/airflow'
:param branch: The branch to run the workflow on.
:param kwargs: Additional parameters to pass to the workflow.
|
python
|
dev/breeze/src/airflow_breeze/utils/gh_workflow_utils.py
| 31
|
[
"workflow_name",
"repo",
"branch"
] | true
| 6
| 7.04
|
apache/airflow
| 43,597
|
sphinx
| false
|
|
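The bool-to-string conversion in the record above matters because `gh workflow run -f key=value` passes raw strings, and Python's `str(True)` yields `"True"` rather than the lowercase `"true"` GitHub expects. A small helper isolating just the command-building step (the argv shape mirrors the function above; nothing is executed):

```python
def build_workflow_command(workflow_name, repo, branch="main", **kwargs):
    """Build the `gh workflow run` argv, lower-casing boolean inputs."""
    command = ["gh", "workflow", "run", workflow_name,
               "--ref", branch, "--repo", repo]
    for key, value in kwargs.items():
        if isinstance(value, bool):
            value = "true" if value else "false"  # gh expects lowercase booleans
        command.extend(["-f", f"{key}={value}"])
    return command

cmd = build_workflow_command("ci.yml", "apache/airflow", dry_run=True, tag="v1")
```

Separating command construction from execution also makes the conversion trivially testable, unlike the original which shells out directly.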
moduleNameIsEqualTo
|
function moduleNameIsEqualTo(a: StringLiteralLike | Identifier, b: StringLiteralLike | Identifier): boolean {
return a.kind === SyntaxKind.Identifier
? b.kind === SyntaxKind.Identifier && a.escapedText === b.escapedText
: b.kind === SyntaxKind.StringLiteral && a.text === b.text;
}
|
@returns Whether the two module names are equal: both identifiers with the same escaped text, or both string literals with the same text.
|
typescript
|
src/compiler/program.ts
| 3,294
|
[
"a",
"b"
] | true
| 4
| 7.04
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
setupIndex
|
private void setupIndex() throws IOException {
directory = new ByteBuffersDirectory();
FieldType keywordFieldType = new FieldType(KeywordFieldMapper.Defaults.FIELD_TYPE);
keywordFieldType.setStored(true);
keywordFieldType.freeze();
try (IndexWriter iw = new IndexWriter(directory, new IndexWriterConfig().setMergePolicy(NoMergePolicy.INSTANCE))) {
for (int i = 0; i < INDEX_SIZE; i++) {
String c = Character.toString('a' - ((i % 1000) % 26) + 26);
iw.addDocument(
List.of(
new NumericDocValuesField("long", i),
new StoredField("long", i),
new NumericDocValuesField("int", i),
new StoredField("int", i),
new NumericDocValuesField("double", NumericUtils.doubleToSortableLong(i)),
new StoredField("double", (double) i),
new KeywordFieldMapper.KeywordField("keyword_1", new BytesRef(c + i % 1000), keywordFieldType),
new KeywordFieldMapper.KeywordField("keyword_2", new BytesRef(c + i % 1000), keywordFieldType),
new KeywordFieldMapper.KeywordField("keyword_3", new BytesRef(c + i % 1000), keywordFieldType),
new KeywordFieldMapper.KeywordField("keyword_mv", new BytesRef(c + i % 1000), keywordFieldType),
new KeywordFieldMapper.KeywordField("keyword_mv", new BytesRef(c + i % 500), keywordFieldType)
)
);
if (i % COMMIT_INTERVAL == 0) {
iw.commit();
}
}
}
reader = DirectoryReader.open(directory);
}
|
Sets up a small in-memory Lucene index with long, int and double doc values plus several keyword fields, committing every {@code COMMIT_INTERVAL} documents so the reader sees multiple segments.
|
java
|
benchmarks/src/main/java/org/elasticsearch/benchmark/_nightly/esql/ValuesSourceReaderBenchmark.java
| 499
|
[] |
void
| true
| 3
| 6.72
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
standardHashCode
|
protected int standardHashCode() {
K k = getKey();
V v = getValue();
return ((k == null) ? 0 : k.hashCode()) ^ ((v == null) ? 0 : v.hashCode());
}
|
A sensible definition of {@link #hashCode()} in terms of {@link #getKey()} and {@link
#getValue()}. If you override either of these methods, you may wish to override {@link
#hashCode()} to forward to this implementation.
@since 7.0
|
java
|
android/guava/src/com/google/common/collect/ForwardingMapEntry.java
| 112
|
[] | true
| 3
| 6.72
|
google/guava
| 51,352
|
javadoc
| false
|
|
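The XOR-of-key-and-value hash in the record above is the contract `java.util.Map.Entry.hashCode()` specifies, which makes an entry's hash independent of the map implementation that produced it. The same definition in Python, treating `None` as hash 0 exactly as the Java code does:

```python
def entry_hash(key, value):
    """Map.Entry-style hash: (hash of key) XOR (hash of value), None -> 0."""
    kh = 0 if key is None else hash(key)
    vh = 0 if value is None else hash(value)
    return kh ^ vh

h = entry_hash("a", "b")
```

XOR is used rather than addition so that the null/zero cases compose cleanly and equal entries always hash equally.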
keySize
|
public int keySize() {
if (magic() == RecordBatch.MAGIC_VALUE_V0)
return buffer.getInt(KEY_SIZE_OFFSET_V0);
else
return buffer.getInt(KEY_SIZE_OFFSET_V1);
}
|
The length of the key in bytes
@return the size in bytes of the key (0 if the key is null)
|
java
|
clients/src/main/java/org/apache/kafka/common/record/LegacyRecord.java
| 152
|
[] | true
| 2
| 7.44
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
get
|
static SecurityInfo get(ZipContent content) {
if (!content.hasJarSignatureFile()) {
return NONE;
}
try {
return load(content);
}
catch (IOException ex) {
throw new UncheckedIOException(ex);
}
}
|
Get the {@link SecurityInfo} for the given {@link ZipContent}.
@param content the zip content
@return the security info
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/jar/SecurityInfo.java
| 64
|
[
"content"
] |
SecurityInfo
| true
| 3
| 7.76
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
abort
|
public void abort(RuntimeException exception) {
if (!finalState.compareAndSet(null, FinalState.ABORTED))
throw new IllegalStateException("Batch has already been completed in final state " + finalState.get());
log.trace("Aborting batch for partition {}", topicPartition, exception);
completeFutureAndFireCallbacks(ProduceResponse.INVALID_OFFSET, RecordBatch.NO_TIMESTAMP, index -> exception);
}
|
Abort the batch and complete the future and callbacks.
@param exception The exception to use to complete the future and awaiting callbacks.
|
java
|
clients/src/main/java/org/apache/kafka/clients/producer/internals/ProducerBatch.java
| 195
|
[
"exception"
] |
void
| true
| 2
| 7.04
|
apache/kafka
| 31,560
|
javadoc
| false
|
readArgumentIndex
|
private int readArgumentIndex(final String pattern, final ParsePosition pos) {
final int start = pos.getIndex();
seekNonWs(pattern, pos);
final StringBuilder result = new StringBuilder();
boolean error = false;
for (; !error && pos.getIndex() < pattern.length(); next(pos)) {
char c = pattern.charAt(pos.getIndex());
if (Character.isWhitespace(c)) {
seekNonWs(pattern, pos);
c = pattern.charAt(pos.getIndex());
if (c != START_FMT && c != END_FE) {
error = true;
continue;
}
}
if ((c == START_FMT || c == END_FE) && result.length() > 0) {
try {
return Integer.parseInt(result.toString());
} catch (final NumberFormatException ignored) {
// we've already ensured only digits, so unless something
// outlandishly large was specified we should be okay.
}
}
error = !Character.isDigit(c);
result.append(c);
}
if (error) {
throw new IllegalArgumentException(
"Invalid format argument index at position " + start + ": "
+ pattern.substring(start, pos.getIndex()));
}
throw new IllegalArgumentException(
"Unterminated format element at position " + start);
}
|
Reads the argument index from the current format element
@param pattern pattern to parse
@param pos current parse position
@return argument index
|
java
|
src/main/java/org/apache/commons/lang3/text/ExtendedMessageFormat.java
| 408
|
[
"pattern",
"pos"
] | true
| 11
| 7.28
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
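The parser in the record above scans digits up to a `,` or `}` delimiter, tolerating interior whitespace, and rejects anything else. A condensed Python sketch of the same scan — simplified in that it takes the text after the opening brace and does not track a `ParsePosition`, and it skips whitespace anywhere between digits:

```python
def read_argument_index(segment):
    """Parse the leading integer of a format element like '12,number}'."""
    digits = []
    for ch in segment:
        if ch.isspace():
            continue  # whitespace between the digits and the delimiter is allowed
        if ch in ",}" and digits:
            return int("".join(digits))  # delimiter reached with digits in hand
        if not ch.isdigit():
            raise ValueError(f"invalid format argument index: {segment!r}")
        digits.append(ch)
    raise ValueError(f"unterminated format element: {segment!r}")
```

As in the original, a non-digit before the delimiter or a missing delimiter is an error rather than a silent zero.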
replaceEach
|
public static String replaceEach(final String text, final String[] searchList, final String[] replacementList) {
return replaceEach(text, searchList, replacementList, false, 0);
}
|
Replaces all occurrences of Strings within another String.
<p>
A {@code null} reference passed to this method is a no-op, or if any "search string" or "string to replace" is null, that replace will be ignored. This
will not repeat. For repeating replaces, call the overloaded method.
</p>
<pre>
StringUtils.replaceEach(null, *, *) = null
StringUtils.replaceEach("", *, *) = ""
StringUtils.replaceEach("aba", null, null) = "aba"
StringUtils.replaceEach("aba", new String[0], null) = "aba"
StringUtils.replaceEach("aba", null, new String[0]) = "aba"
StringUtils.replaceEach("aba", new String[]{"a"}, null) = "aba"
StringUtils.replaceEach("aba", new String[]{"a"}, new String[]{""}) = "b"
StringUtils.replaceEach("aba", new String[]{null}, new String[]{"a"}) = "aba"
StringUtils.replaceEach("abcde", new String[]{"ab", "d"}, new String[]{"w", "t"}) = "wcte"
(example of how it does not repeat)
StringUtils.replaceEach("abcde", new String[]{"ab", "d"}, new String[]{"d", "t"}) = "dcte"
</pre>
@param text text to search and replace in, no-op if null.
@param searchList the Strings to search for, no-op if null.
@param replacementList the Strings to replace them with, no-op if null.
@return the text with any replacements processed, {@code null} if null String input.
@throws IllegalArgumentException if the lengths of the arrays are not the same (null is ok, and/or size 0).
@since 2.4
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 6,362
|
[
"text",
"searchList",
"replacementList"
] |
String
| true
| 1
| 6.48
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
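A key property documented in the `replaceEach` record above is that replacements do not cascade: replacing "ab" with "d" must not let the newly produced "d" be matched by a "d" to "t" rule. A left-to-right single-pass sketch in Python preserves that property; this illustrates the documented semantics and is not a port of the Commons implementation:

```python
def replace_each(text, search_list, replacement_list):
    """Single-pass multi-replace: earlier output is never re-scanned."""
    if text is None or not search_list:
        return text
    out = []
    i = 0
    while i < len(text):
        for search, repl in zip(search_list, replacement_list):
            if search and text.startswith(search, i):
                out.append(repl)       # emit replacement into the output buffer
                i += len(search)       # advance past the matched search string
                break
        else:  # no pattern matched at position i: copy one character
            out.append(text[i])
            i += 1
    return "".join(out)
```

Because replacements go into a separate output buffer, the "(example of how it does not repeat)" case from the javadoc holds by construction.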
checkCircularity
|
function checkCircularity(stackIndex: number, nodeStack: BinaryExpression[], node: BinaryExpression) {
if (Debug.shouldAssert(AssertionLevel.Aggressive)) {
while (stackIndex >= 0) {
Debug.assert(nodeStack[stackIndex] !== node, "Circular traversal detected.");
stackIndex--;
}
}
}
|
Asserts, when aggressive assertions are enabled, that `node` does not already appear on the traversal stack (circular traversal check).
|
typescript
|
src/compiler/factory/utilities.ts
| 1,400
|
[
"stackIndex",
"nodeStack",
"node"
] | false
| 3
| 6.08
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
get_standard_colors
|
def get_standard_colors(
num_colors: int,
colormap: Colormap | None = None,
color_type: str = "default",
*,
color: dict[str, Color] | Color | Sequence[Color] | None = None,
) -> dict[str, Color] | list[Color]:
"""
Get standard colors based on `colormap`, `color_type` or `color` inputs.
Parameters
----------
num_colors : int
Minimum number of colors to be returned.
Ignored if `color` is a dictionary.
colormap : :py:class:`matplotlib.colors.Colormap`, optional
Matplotlib colormap.
When provided, the resulting colors will be derived from the colormap.
color_type : {"default", "random"}, optional
Type of colors to derive. Used if provided `color` and `colormap` are None.
Ignored if either `color` or `colormap` are not None.
color : dict or str or sequence, optional
Color(s) to be used for deriving sequence of colors.
Can either be a dictionary, or a single color (single color string,
or sequence of floats representing a single color),
or a sequence of colors.
Returns
-------
dict or list
Standard colors. Can either be a mapping if `color` was a dictionary,
or a list of colors with a length of `num_colors` or more.
Warns
-----
UserWarning
If both `colormap` and `color` are provided.
Parameter `color` will override.
"""
if isinstance(color, dict):
return color
colors = _derive_colors(
color=color,
colormap=colormap,
color_type=color_type,
num_colors=num_colors,
)
return list(_cycle_colors(colors, num_colors=num_colors))
|
Get standard colors based on `colormap`, `color_type` or `color` inputs.
Parameters
----------
num_colors : int
Minimum number of colors to be returned.
Ignored if `color` is a dictionary.
colormap : :py:class:`matplotlib.colors.Colormap`, optional
Matplotlib colormap.
When provided, the resulting colors will be derived from the colormap.
color_type : {"default", "random"}, optional
Type of colors to derive. Used if provided `color` and `colormap` are None.
Ignored if either `color` or `colormap` are not None.
color : dict or str or sequence, optional
Color(s) to be used for deriving sequence of colors.
Can either be a dictionary, or a single color (single color string,
or sequence of floats representing a single color),
or a sequence of colors.
Returns
-------
dict or list
Standard colors. Can either be a mapping if `color` was a dictionary,
or a list of colors with a length of `num_colors` or more.
Warns
-----
UserWarning
If both `colormap` and `color` are provided.
Parameter `color` will override.
|
python
|
pandas/plotting/_matplotlib/style.py
| 59
|
[
"num_colors",
"colormap",
"color_type",
"color"
] |
dict[str, Color] | list[Color]
| true
| 2
| 6.72
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
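The final `_cycle_colors` step in `get_standard_colors` above guarantees at least `num_colors` entries by repeating the derived list; that step is pure `itertools`. The helper name `cycle_colors` below is a stand-in for pandas' private `_cycle_colors`, and this is a sketch of the behavior, not pandas' code:

```python
from itertools import cycle, islice

def cycle_colors(colors, num_colors):
    """Repeat `colors` cyclically until at least num_colors entries exist."""
    if len(colors) >= num_colors:
        return list(colors)  # already long enough: keep the full list
    return list(islice(cycle(colors), num_colors))

palette = cycle_colors(["r", "g", "b"], 7)
```

Note the asymmetry: a too-short list is padded by cycling, but a too-long list is returned whole rather than truncated, matching the "length of `num_colors` or more" wording in the docstring.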
iterator
|
@Override
public Iterator<T> iterator() {
return this.services.iterator();
}
|
Return an iterator over the loaded AOT services.
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/aot/AotServices.java
| 144
|
[] | true
| 1
| 6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
doLongValue
|
@Override
public long doLongValue() throws IOException {
try {
return parser.getLongValue();
} catch (IOException e) {
throw handleParserException(e);
}
}
|
Return the current token as a {@code long}, converting any thrown {@link IOException} via {@code handleParserException}.
|
java
|
libs/x-content/impl/src/main/java/org/elasticsearch/xcontent/provider/json/JsonXContentParser.java
| 272
|
[] | true
| 2
| 6.08
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
output_layout
|
def output_layout(self, flexible: bool = True) -> Layout:
"""
Handle output layout generation for matrix multiplication.
Args:
out_dtype: Optional output dtype. If not provided, infer from inputs
flexible: If True, return FlexibleLayout, otherwise FixedLayout
"""
mat1, mat2 = self.mat1mat2()
out_dtype = self.out_dtype()
# NOTE: taken from mm_common.mm_args
*b1, m, k1 = mat1.get_size()
*b2, k2, n = mat2.get_size()
b = [V.graph.sizevars.check_equals_and_simplify(a, b) for a, b in zip(b1, b2)]
size = [*b, m, n]
if flexible:
return FlexibleLayout(self.device(), out_dtype, size)
else:
return FixedLayout(self.device(), out_dtype, size)
|
Handle output layout generation for matrix multiplication.
Args:
out_dtype: Optional output dtype. If not provided, infer from inputs
flexible: If True, return FlexibleLayout, otherwise FixedLayout
|
python
|
torch/_inductor/kernel_inputs.py
| 286
|
[
"self",
"flexible"
] |
Layout
| true
| 3
| 6.24
|
pytorch/pytorch
| 96,034
|
google
| false
|
nextPrint
|
public String nextPrint(final int count) {
return next(count, 32, 126, false, false);
}
|
Creates a random string whose length is the number of characters specified.
<p>
Characters will be chosen from the set of characters which match the POSIX [:print:] regular expression character
class. This class includes all visible ASCII characters and spaces (i.e. anything except control characters).
</p>
@param count the length of random string to create.
@return the random string.
@throws IllegalArgumentException if {@code count} < 0.
@since 3.5
@since 3.16.0
|
java
|
src/main/java/org/apache/commons/lang3/RandomStringUtils.java
| 971
|
[
"count"
] |
String
| true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
appendIndex
|
private ConfigurationPropertyName appendIndex(ConfigurationPropertyName root, int i) {
return root.append((i < INDEXES.length) ? INDEXES[i] : "[" + i + "]");
}
|
Append the index {@code i} to the given root name, using a cached {@code [i]} string for small indexes.
@param root the root configuration property name
@param i the index to append
@return the indexed name
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/IndexedElementsBinder.java
| 159
|
[
"root",
"i"
] |
ConfigurationPropertyName
| true
| 2
| 6.32
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
addYears
|
public static Date addYears(final Date date, final int amount) {
return add(date, Calendar.YEAR, amount);
}
|
Adds a number of years to a date returning a new object.
The original {@link Date} is unchanged.
@param date the date, not null.
@param amount the amount to add, may be negative.
@return the new {@link Date} with the amount added.
@throws NullPointerException if the date is null.
|
java
|
src/main/java/org/apache/commons/lang3/time/DateUtils.java
| 328
|
[
"date",
"amount"
] |
Date
| true
| 1
| 6.64
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
ifUnique
|
default void ifUnique(Consumer<T> dependencyConsumer) throws BeansException {
T dependency = getIfUnique();
if (dependency != null) {
dependencyConsumer.accept(dependency);
}
}
|
Consume an instance (possibly shared or independent) of the object
managed by this factory, if unique.
@param dependencyConsumer a callback for processing the target object
if unique (not called otherwise)
@throws BeansException in case of creation errors
@since 5.0
@see #getIfUnique()
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/ObjectProvider.java
| 207
|
[
"dependencyConsumer"
] |
void
| true
| 2
| 6.24
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
hermcompanion
|
def hermcompanion(c):
"""Return the scaled companion matrix of c.
The basis polynomials are scaled so that the companion matrix is
symmetric when `c` is a Hermite basis polynomial. This provides
better eigenvalue estimates than the unscaled case and for basis
polynomials the eigenvalues are guaranteed to be real if
`numpy.linalg.eigvalsh` is used to obtain them.
Parameters
----------
c : array_like
1-D array of Hermite series coefficients ordered from low to high
degree.
Returns
-------
mat : ndarray
Scaled companion matrix of dimensions (deg, deg).
Examples
--------
>>> from numpy.polynomial.hermite import hermcompanion
>>> hermcompanion([1, 0, 1])
array([[0. , 0.35355339],
[0.70710678, 0. ]])
"""
# c is a trimmed copy
[c] = pu.as_series([c])
if len(c) < 2:
raise ValueError('Series must have maximum degree of at least 1.')
if len(c) == 2:
return np.array([[-.5 * c[0] / c[1]]])
n = len(c) - 1
mat = np.zeros((n, n), dtype=c.dtype)
scl = np.hstack((1., 1. / np.sqrt(2. * np.arange(n - 1, 0, -1))))
scl = np.multiply.accumulate(scl)[::-1]
top = mat.reshape(-1)[1::n + 1]
bot = mat.reshape(-1)[n::n + 1]
top[...] = np.sqrt(.5 * np.arange(1, n))
bot[...] = top
mat[:, -1] -= scl * c[:-1] / (2.0 * c[-1])
return mat
|
Return the scaled companion matrix of c.
The basis polynomials are scaled so that the companion matrix is
symmetric when `c` is a Hermite basis polynomial. This provides
better eigenvalue estimates than the unscaled case and for basis
polynomials the eigenvalues are guaranteed to be real if
`numpy.linalg.eigvalsh` is used to obtain them.
Parameters
----------
c : array_like
1-D array of Hermite series coefficients ordered from low to high
degree.
Returns
-------
mat : ndarray
Scaled companion matrix of dimensions (deg, deg).
Examples
--------
>>> from numpy.polynomial.hermite import hermcompanion
>>> hermcompanion([1, 0, 1])
array([[0. , 0.35355339],
[0.70710678, 0. ]])
language: python | file: numpy/polynomial/hermite.py | line: 1,438 | params: [c] | type hints: false | complexity: 3 | quality: 7.68 | repo: numpy/numpy (31,054 stars) | style: numpy | async: false
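The `hermcompanion` docstring above notes that `numpy.linalg.eigvalsh` yields real eigenvalues because the scaled companion matrix is symmetric. A quick check (assuming NumPy is installed) confirms those eigenvalues are the roots of the Hermite basis polynomial H_2, which lie at plus/minus 1/sqrt(2):

```python
import numpy as np
from numpy.polynomial.hermite import hermcompanion

# H_2 has coefficients [0, 0, 1] in the Hermite basis; its roots are
# +/- 1/sqrt(2). The companion matrix is symmetric, so eigvalsh applies.
mat = hermcompanion([0, 0, 1])
roots = np.sort(np.linalg.eigvalsh(mat))
print(roots)  # approximately [-0.70710678, 0.70710678]
```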
getOrigin
|
@Override
public @Nullable Origin getOrigin(String name) {
Object value = super.getProperty(name);
if (value instanceof OriginTrackedValue originTrackedValue) {
return originTrackedValue.getOrigin();
}
return null;
}
|
Return the {@link Origin} of the property with the given name, if the
underlying value is an {@link OriginTrackedValue}.
@param name the name of the property
@return the origin, or {@code null} if the value is not origin-tracked
language: java | file: core/spring-boot/src/main/java/org/springframework/boot/env/OriginTrackedMapPropertySource.java | line: 74 | params: [name] | return: Origin | type hints: true | complexity: 2 | quality: 6.4 | repo: spring-projects/spring-boot (79,428 stars) | style: javadoc | async: false
dropna
|
def dropna(self, how: AnyAll = "any") -> Self:
"""
Return Index without NA/NaN values.
Parameters
----------
how : {'any', 'all'}, default 'any'
If the Index is a MultiIndex, drop the value when any or all levels
are NaN.
Returns
-------
Index
Returns an Index object after removing NA/NaN values.
See Also
--------
Index.fillna : Fill NA/NaN values with the specified value.
Index.isna : Detect missing values.
Examples
--------
>>> idx = pd.Index([1, np.nan, 3])
>>> idx.dropna()
Index([1.0, 3.0], dtype='float64')
"""
if how not in ("any", "all"):
raise ValueError(f"invalid how option: {how}")
if self.hasnans:
res_values = self._values[~self._isnan]
return type(self)._simple_new(res_values, name=self.name)
return self._view()
|
Return Index without NA/NaN values.
Parameters
----------
how : {'any', 'all'}, default 'any'
If the Index is a MultiIndex, drop the value when any or all levels
are NaN.
Returns
-------
Index
Returns an Index object after removing NA/NaN values.
See Also
--------
Index.fillna : Fill NA/NaN values with the specified value.
Index.isna : Detect missing values.
Examples
--------
>>> idx = pd.Index([1, np.nan, 3])
>>> idx.dropna()
Index([1.0, 3.0], dtype='float64')
language: python | file: pandas/core/indexes/base.py | line: 2,781 | params: [self, how] | return: Self | type hints: true | complexity: 3 | quality: 7.12 | repo: pandas-dev/pandas (47,362 stars) | style: numpy | async: false
withAliases
|
default ConfigurationPropertySource withAliases(ConfigurationPropertyNameAliases aliases) {
return new AliasedConfigurationPropertySource(this, aliases);
}
|
Return a variant of this source that supports name aliases.
@param aliases a function that returns a stream of aliases for any given name
@return a {@link ConfigurationPropertySource} instance supporting name aliases
language: java | file: core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertySource.java | line: 76 | params: [aliases] | return: ConfigurationPropertySource | type hints: true | complexity: 1 | quality: 6.32 | repo: spring-projects/spring-boot (79,428 stars) | style: javadoc | async: false
create_processing_job
|
def create_processing_job(
self,
config: dict,
wait_for_completion: bool = True,
check_interval: int = 30,
max_ingestion_time: int | None = None,
):
"""
Use Amazon SageMaker Processing to analyze data and evaluate models.
With Processing, you can use a simplified, managed experience on
SageMaker to run your data processing workloads, such as feature
engineering, data validation, model evaluation, and model
interpretation.
.. seealso::
- :external+boto3:py:meth:`SageMaker.Client.create_processing_job`
:param config: the config for processing job
:param wait_for_completion: if the program should keep running until job finishes
:param check_interval: the time interval in seconds which the operator
will check the status of any SageMaker job
:param max_ingestion_time: the maximum ingestion time in seconds. Any
SageMaker jobs that run longer than this will fail. Setting this to
None implies no timeout for any SageMaker job.
:return: A response to transform job creation
"""
response = self.get_conn().create_processing_job(**config)
if wait_for_completion:
self.check_status(
config["ProcessingJobName"],
"ProcessingJobStatus",
self.describe_processing_job,
check_interval,
max_ingestion_time,
)
return response
|
Use Amazon SageMaker Processing to analyze data and evaluate models.
With Processing, you can use a simplified, managed experience on
SageMaker to run your data processing workloads, such as feature
engineering, data validation, model evaluation, and model
interpretation.
.. seealso::
- :external+boto3:py:meth:`SageMaker.Client.create_processing_job`
:param config: the config for processing job
:param wait_for_completion: if the program should keep running until job finishes
:param check_interval: the time interval in seconds which the operator
will check the status of any SageMaker job
:param max_ingestion_time: the maximum ingestion time in seconds. Any
SageMaker jobs that run longer than this will fail. Setting this to
None implies no timeout for any SageMaker job.
:return: A response to transform job creation
language: python | file: providers/amazon/src/airflow/providers/amazon/aws/hooks/sagemaker.py | line: 419 | params: [self, config, wait_for_completion, check_interval, max_ingestion_time] | type hints: true | complexity: 2 | quality: 7.44 | repo: apache/airflow (43,597 stars) | style: sphinx | async: false
poll
|
public void poll(final long timeoutMs, final long currentTimeMs, boolean onClose) {
trySend(currentTimeMs);
long pollTimeoutMs = timeoutMs;
if (!unsentRequests.isEmpty()) {
pollTimeoutMs = Math.min(retryBackoffMs, pollTimeoutMs);
}
this.client.poll(pollTimeoutMs, currentTimeMs);
maybePropagateMetadataError();
checkDisconnects(currentTimeMs, onClose);
asyncConsumerMetrics.recordUnsentRequestsQueueSize(unsentRequests.size(), currentTimeMs);
}
|
This method will try to send the unsent requests, poll for responses,
and check the disconnected nodes.
@param timeoutMs timeout time
@param currentTimeMs current time
@param onClose True when the network thread is closing.
language: java | file: clients/src/main/java/org/apache/kafka/clients/consumer/internals/NetworkClientDelegate.java | line: 159 | params: [timeoutMs, currentTimeMs, onClose] | return: void | type hints: true | complexity: 2 | quality: 6.72 | repo: apache/kafka (31,560 stars) | style: javadoc | async: false
canConnect
|
boolean canConnect(Node node, long now) {
return connectionStates.canConnect(node.idString(), now);
}
|
Begin connecting to the given node, return true if we are already connected and ready to send to that node.
@param node The node to check
@param now The current timestamp
@return True if we are ready to send to the given node
language: java | file: clients/src/main/java/org/apache/kafka/clients/NetworkClient.java | line: 374 | params: [node, now] | type hints: true | complexity: 1 | quality: 6.96 | repo: apache/kafka (31,560 stars) | style: javadoc | async: false
map
|
def map(self, mapper, na_action: Literal["ignore"] | None = None):
"""
Map values using an input mapping or function.
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping correspondence.
Returns
-------
Union[Index, MultiIndex]
The output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
See Also
--------
Index.where : Replace values where the condition is False.
Examples
--------
>>> idx = pd.Index([1, 2, 3])
>>> idx.map({1: "a", 2: "b", 3: "c"})
Index(['a', 'b', 'c'], dtype='object')
Using `map` with a function:
>>> idx = pd.Index([1, 2, 3])
>>> idx.map("I am a {}".format)
Index(['I am a 1', 'I am a 2', 'I am a 3'], dtype='object')
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.map(lambda x: x.upper())
Index(['A', 'B', 'C'], dtype='object')
"""
from pandas.core.indexes.multi import MultiIndex
new_values = self._map_values(mapper, na_action=na_action)
# we can return a MultiIndex
if new_values.size and isinstance(new_values[0], tuple):
if isinstance(self, MultiIndex):
names = self.names
elif self.name:
names = [self.name] * len(new_values[0])
else:
names = None
return MultiIndex.from_tuples(new_values, names=names)
dtype = None
if not new_values.size:
# empty
dtype = self.dtype
elif isinstance(new_values, Categorical):
# cast_pointwise_result is unnecessary
dtype = new_values.dtype
else:
if isinstance(self, MultiIndex):
arr = self[:0].to_flat_index().array
else:
arr = self[:0].array
# e.g. if we are floating and new_values is all ints, then we
# don't want to cast back to floating. But if we are UInt64
# and new_values is all ints, we want to try.
new_values = arr._cast_pointwise_result(new_values)
dtype = new_values.dtype
return Index(new_values, dtype=dtype, copy=False, name=self.name)
|
Map values using an input mapping or function.
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping correspondence.
Returns
-------
Union[Index, MultiIndex]
The output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
See Also
--------
Index.where : Replace values where the condition is False.
Examples
--------
>>> idx = pd.Index([1, 2, 3])
>>> idx.map({1: "a", 2: "b", 3: "c"})
Index(['a', 'b', 'c'], dtype='object')
Using `map` with a function:
>>> idx = pd.Index([1, 2, 3])
>>> idx.map("I am a {}".format)
Index(['I am a 1', 'I am a 2', 'I am a 3'], dtype='object')
>>> idx = pd.Index(["a", "b", "c"])
>>> idx.map(lambda x: x.upper())
Index(['A', 'B', 'C'], dtype='object')
language: python | file: pandas/core/indexes/base.py | line: 6,489 | params: [self, mapper, na_action] | type hints: true | complexity: 11 | quality: 8.56 | repo: pandas-dev/pandas (47,362 stars) | style: numpy | async: false
parseInt
|
function parseInt(string, radix, guard) {
if (guard || radix == null) {
radix = 0;
} else if (radix) {
radix = +radix;
}
return nativeParseInt(toString(string).replace(reTrimStart, ''), radix || 0);
}
|
Converts `string` to an integer of the specified radix. If `radix` is
`undefined` or `0`, a `radix` of `10` is used unless `value` is a
hexadecimal, in which case a `radix` of `16` is used.
**Note:** This method aligns with the
[ES5 implementation](https://es5.github.io/#x15.1.2.2) of `parseInt`.
@static
@memberOf _
@since 1.1.0
@category String
@param {string} string The string to convert.
@param {number} [radix=10] The radix to interpret `value` by.
@param- {Object} [guard] Enables use as an iteratee for methods like `_.map`.
@returns {number} Returns the converted integer.
@example
_.parseInt('08');
// => 8
_.map(['6', '08', '10'], _.parseInt);
// => [6, 8, 10]
language: javascript | file: lodash.js | line: 14,584 | params: [string, radix, guard] | type hints: false | complexity: 6 | quality: 7.68 | repo: lodash/lodash (61,490 stars) | style: jsdoc | async: false
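The lodash entry's radix handling (treat a radix of 0 or `undefined` as "auto-detect") can be sketched in Python. The `parse_int` helper and its hex-prefix rule here are illustrative analogues, not lodash's exact internals:

```python
# Rough Python analogue of lodash's _.parseInt: a radix of 0 means
# "auto-detect" -- hexadecimal if the string has an 0x prefix, else
# decimal (so "08" parses as 8, with no legacy-octal surprise).
def parse_int(s: str, radix: int = 0) -> int:
    s = s.lstrip()
    if radix == 0:
        radix = 16 if s.lower().startswith(("0x", "-0x", "+0x")) else 10
    return int(s, radix)

print(parse_int("08"))                            # 8
print(parse_int("0x10"))                          # 16
print([parse_int(v) for v in ["6", "08", "10"]])  # [6, 8, 10]
```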
record
|
public void record() {
if (shouldRecord()) {
recordInternal(1.0d, time.milliseconds(), true);
}
}
|
Record an occurrence, this is just short-hand for {@link #record(double) record(1.0)}
language: java | file: clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java | line: 183 | params: [] | return: void | type hints: true | complexity: 2 | quality: 6.4 | repo: apache/kafka (31,560 stars) | style: javadoc | async: false
formatTimeStamp
|
private static WritableJson formatTimeStamp(long timeStamp) {
return (out) -> out.append(new BigDecimal(timeStamp).movePointLeft(3).toPlainString());
}
|
GELF requires "seconds since UNIX epoch with optional <b>decimal places for
milliseconds</b>". To comply with this requirement, we format a POSIX timestamp
with millisecond precision as e.g. "1725459730385" -> "1725459730.385"
@param timeStamp the timestamp of the log message
@return the timestamp formatted as string with millisecond precision
language: java | file: core/spring-boot/src/main/java/org/springframework/boot/logging/logback/GraylogExtendedLogFormatStructuredLogFormatter.java | line: 123 | params: [timeStamp] | return: WritableJson | type hints: true | complexity: 1 | quality: 6 | repo: spring-projects/spring-boot (79,428 stars) | style: javadoc | async: false
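The same millisecond-to-decimal-seconds formatting can be reproduced in Python with the stdlib `decimal` module, which, like `BigDecimal.movePointLeft(3)`, shifts the decimal point exactly instead of dividing floats. The helper name is illustrative:

```python
from decimal import Decimal

# GELF wants POSIX seconds with millisecond decimals, e.g.
# 1725459730385 -> "1725459730.385". scaleb(-3) shifts the point
# left by three places with no float rounding.
def format_timestamp(millis: int) -> str:
    return str(Decimal(millis).scaleb(-3))

print(format_timestamp(1725459730385))  # 1725459730.385
```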
Operator
|
Operator(Operator&&) noexcept = default;
|
compose() - Must be implemented by child class to compose a new Generator
out of a given generator. This function left intentionally unimplemented.
language: cpp | file: folly/gen/Core-inl.h | line: 94 | params: [] | type hints: true | complexity: 2 | quality: 6.48 | repo: facebook/folly (30,157 stars) | style: doxygen | async: false
notEmpty
|
public static <T> T[] notEmpty(final T[] array) {
return notEmpty(array, DEFAULT_NOT_EMPTY_ARRAY_EX_MESSAGE);
}
|
<p>Validates that the specified argument array is neither {@code null}
nor a length of zero (no elements); otherwise throwing an exception.
<pre>Validate.notEmpty(myArray);</pre>
<p>The message in the exception is "The validated array is
empty".
@param <T> the array type.
@param array the array to check, validated not null by this method.
@return the validated array (never {@code null} method for chaining).
@throws NullPointerException if the array is {@code null}.
@throws IllegalArgumentException if the array is empty.
@see #notEmpty(Object[], String, Object...)
language: java | file: src/main/java/org/apache/commons/lang3/Validate.java | line: 960 | params: [array] | type hints: true | complexity: 1 | quality: 6.32 | repo: apache/commons-lang (2,896 stars) | style: javadoc | async: false
append
|
public StrBuilder append(final String format, final Object... objs) {
return append(String.format(format, objs));
}
|
Calls {@link String#format(String, Object...)} and appends the result.
@param format the format string
@param objs the objects to use in the format string
@return {@code this} to enable chaining
@see String#format(String, Object...)
@since 3.2
language: java | file: src/main/java/org/apache/commons/lang3/text/StrBuilder.java | line: 675 | params: [format] | return: StrBuilder | type hints: true | complexity: 1 | quality: 6.8 | repo: apache/commons-lang (2,896 stars) | style: javadoc | async: false
triggerUpdate
|
function triggerUpdate() {
const hooks = getHooksContextOrNull();
// Rerun storyFn if updates were triggered synchronously, force rerender otherwise
if (hooks != null && hooks.currentPhase !== 'NONE') {
hooks.hasUpdates = true;
} else {
try {
addons.getChannel().emit(FORCE_RE_RENDER);
} catch (e) {
logger.warn('State updates of Storybook preview hooks work only in browser');
}
}
}
|
Returns a mutable ref object.
@example
```ts
const ref = useRef(0);
ref.current = 1;
```
@template T The type of the ref object.
@param {T} initialValue The initial value of the ref object.
@returns {{ current: T }} The mutable ref object.
language: typescript | file: code/core/src/preview-api/modules/addons/hooks.ts | line: 368 | params: [] | type hints: false | complexity: 5 | quality: 8.88 | repo: storybookjs/storybook (88,865 stars) | style: jsdoc | async: false
|
headers
|
Iterable<Header> headers(String key);
|
Returns all headers for the given key, in the order they were added in, if present.
The iterator does not support {@link java.util.Iterator#remove()}.
@param key to return the headers for; must not be null.
@return all headers for the given key, in the order they were added in, if NO headers are present an empty iterable is returned.
language: java | file: clients/src/main/java/org/apache/kafka/common/header/Headers.java | line: 71 | params: [key] | type hints: true | complexity: 1 | quality: 6.8 | repo: apache/kafka (31,560 stars) | style: javadoc | async: false
destroy
|
@Override
public void destroy() {
if (this.cacheManager != null) {
this.cacheManager.close();
}
}
|
Close the underlying {@code CacheManager}, if any, when this factory
bean is destroyed.
@see javax.cache.CacheManager#close()
language: java | file: spring-context-support/src/main/java/org/springframework/cache/jcache/JCacheManagerFactoryBean.java | line: 99 | params: [] | return: void | type hints: true | complexity: 2 | quality: 6.08 | repo: spring-projects/spring-framework (59,386 stars) | style: javadoc | async: false
open_instance_resource
|
def open_instance_resource(
self, resource: str, mode: str = "rb", encoding: str | None = "utf-8"
) -> t.IO[t.AnyStr]:
"""Open a resource file relative to the application's instance folder
:attr:`instance_path`. Unlike :meth:`open_resource`, files in the
instance folder can be opened for writing.
:param resource: Path to the resource relative to :attr:`instance_path`.
:param mode: Open the file in this mode.
:param encoding: Open the file with this encoding when opening in text
mode. This is ignored when opening in binary mode.
.. versionchanged:: 3.1
Added the ``encoding`` parameter.
"""
path = os.path.join(self.instance_path, resource)
if "b" in mode:
return open(path, mode)
return open(path, mode, encoding=encoding)
|
Open a resource file relative to the application's instance folder
:attr:`instance_path`. Unlike :meth:`open_resource`, files in the
instance folder can be opened for writing.
:param resource: Path to the resource relative to :attr:`instance_path`.
:param mode: Open the file in this mode.
:param encoding: Open the file with this encoding when opening in text
mode. This is ignored when opening in binary mode.
.. versionchanged:: 3.1
Added the ``encoding`` parameter.
language: python | file: src/flask/app.py | line: 446 | params: [self, resource, mode, encoding] | return: t.IO[t.AnyStr] | type hints: true | complexity: 2 | quality: 6.72 | repo: pallets/flask (70,946 stars) | style: sphinx | async: false
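The mode dispatch in `open_instance_resource` exists because the built-in `open()` rejects an `encoding` argument in binary mode. A standalone sketch of the same dispatch, with a hypothetical `open_resource` helper:

```python
import os
import tempfile

# Only pass `encoding` to open() for text modes; binary modes must
# not receive one -- the same branch Flask's method takes.
def open_resource(path: str, mode: str = "rb", encoding: str = "utf-8"):
    if "b" in mode:
        return open(path, mode)
    return open(path, mode, encoding=encoding)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "cfg.txt")
    with open_resource(p, "w") as f:
        f.write("hello")
    with open_resource(p) as f:  # default "rb" -> bytes
        print(f.read())          # b'hello'
```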
unsubscribe
|
public void unsubscribe() {
acquireAndEnsureOpen();
try {
fetcher.clearBufferedDataForUnassignedPartitions(Collections.emptySet());
if (this.coordinator != null) {
this.coordinator.onLeavePrepare();
this.coordinator.maybeLeaveGroup(CloseOptions.GroupMembershipOperation.DEFAULT, "the consumer unsubscribed from all topics");
}
this.subscriptions.unsubscribe();
log.info("Unsubscribed all topics or patterns and assigned partitions");
} finally {
release();
}
}
|
Internal helper method for {@link #subscribe(Pattern)} and
{@link #subscribe(Pattern, ConsumerRebalanceListener)}
<p>
Subscribe to all topics matching specified pattern to get dynamically assigned partitions.
The pattern matching will be done periodically against all topics existing at the time of check.
This can be controlled through the {@code metadata.max.age.ms} configuration: by lowering
the max metadata age, the consumer will refresh metadata more often and check for matching topics.
<p>
See {@link #subscribe(Collection, ConsumerRebalanceListener)} for details on the
use of the {@link ConsumerRebalanceListener}. Generally rebalances are triggered when there
is a change to the topics matching the provided pattern and when consumer group membership changes.
Group rebalances only take place during an active call to {@link #poll(Duration)}.
@param pattern Pattern to subscribe to
@param listener {@link Optional} listener instance to get notifications on partition assignment/revocation
for the subscribed topics
@throws IllegalArgumentException If pattern or listener is null
@throws IllegalStateException If {@code subscribe()} is called previously with topics, or assign is called
previously (without a subsequent call to {@link #unsubscribe()}), or if not
configured at-least one partition assignment strategy
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ClassicKafkaConsumer.java
| 577
|
[] |
void
| true
| 2
| 6.24
|
apache/kafka
| 31,560
|
javadoc
| false
|
computeChecksum
|
private long computeChecksum() {
return Crc32C.compute(buffer, ATTRIBUTES_OFFSET, buffer.limit() - ATTRIBUTES_OFFSET);
}
|
Gets the base timestamp of the batch which is used to calculate the record timestamps from the deltas.
@return The base timestamp
|
java
|
clients/src/main/java/org/apache/kafka/common/record/DefaultRecordBatch.java
| 398
|
[] | true
| 1
| 6.8
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
startInlineUnsafe
|
startInlineUnsafe() && {
folly::Promise<lift_unit_t<StorageType>> p;
auto sf = p.getSemiFuture();
std::move(*this).startInlineImpl(
[promise = std::move(p)](Try<StorageType>&& result) mutable {
promise.setTry(std::move(result));
},
folly::CancellationToken{},
FOLLY_ASYNC_STACK_RETURN_ADDRESS());
return sf;
}
|
Task. Refer to TaskWithExecutor::start() for more information.
|
cpp
|
folly/coro/Task.h
| 389
|
[] | true
| 3
| 6.08
|
facebook/folly
| 30,157
|
doxygen
| false
|
|
getResolvableConstructor
|
@SuppressWarnings("unchecked")
public static <T> Constructor<T> getResolvableConstructor(Class<T> clazz) {
Constructor<T> ctor = findPrimaryConstructor(clazz);
if (ctor != null) {
return ctor;
}
Constructor<?>[] ctors = clazz.getConstructors();
if (ctors.length == 1) {
// A single public constructor
return (Constructor<T>) ctors[0];
}
else if (ctors.length == 0) {
// No public constructors -> check non-public
ctors = clazz.getDeclaredConstructors();
if (ctors.length == 1) {
// A single non-public constructor, for example, from a non-public record type
return (Constructor<T>) ctors[0];
}
}
// Several constructors -> let's try to take the default constructor
try {
return clazz.getDeclaredConstructor();
}
catch (NoSuchMethodException ex) {
// Giving up...
}
// No unique constructor at all
throw new IllegalStateException("No primary or single unique constructor found for " + clazz);
}
|
Return a resolvable constructor for the provided class, either a primary or single
public constructor with arguments, a single non-public constructor with arguments
or simply a default constructor.
<p>Callers have to be prepared to resolve arguments for the returned constructor's
parameters, if any.
@param clazz the class to check
@throws IllegalStateException in case of no unique constructor found at all
@since 5.3
@see #findPrimaryConstructor
|
java
|
spring-beans/src/main/java/org/springframework/beans/BeanUtils.java
| 235
|
[
"clazz"
] | true
| 6
| 6.4
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
buildInternalBeanFactory
|
protected DefaultListableBeanFactory buildInternalBeanFactory(ConfigurableBeanFactory containingFactory) {
// Set parent so that references (up container hierarchies) are correctly resolved.
DefaultListableBeanFactory internalBeanFactory = new DefaultListableBeanFactory(containingFactory);
// Required so that all BeanPostProcessors, Scopes, etc become available.
internalBeanFactory.copyConfigurationFrom(containingFactory);
// Filter out BeanPostProcessors that are part of the AOP infrastructure,
// since those are only meant to apply to beans defined in the original factory.
internalBeanFactory.getBeanPostProcessors().removeIf(AopInfrastructureBean.class::isInstance);
return internalBeanFactory;
}
|
Build an internal BeanFactory for resolving target beans.
@param containingFactory the containing BeanFactory that originally defines the beans
@return an independent internal BeanFactory to hold copies of some target beans
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/target/AbstractBeanFactoryBasedTargetSourceCreator.java
| 142
|
[
"containingFactory"
] |
DefaultListableBeanFactory
| true
| 1
| 6.24
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
allocatedSizeInBytes
|
@Override
public OptionalLong allocatedSizeInBytes(Path path) {
assert Files.isRegularFile(path) : path;
String fileName = "\\\\?\\" + path;
AtomicInteger lpFileSizeHigh = new AtomicInteger();
final int lpFileSizeLow = kernel.GetCompressedFileSizeW(fileName, lpFileSizeHigh::set);
if (lpFileSizeLow == INVALID_FILE_SIZE) {
logger.warn("Unable to get allocated size of file [{}]. Error code {}", path, kernel.GetLastError());
return OptionalLong.empty();
}
// convert lpFileSizeLow to unsigned long and combine with signed/shifted lpFileSizeHigh
final long allocatedSize = (((long) lpFileSizeHigh.get()) << Integer.SIZE) | Integer.toUnsignedLong(lpFileSizeLow);
if (logger.isTraceEnabled()) {
logger.trace(
"executing native method GetCompressedFileSizeW returned [high={}, low={}, allocated={}] for file [{}]",
lpFileSizeHigh.get(),
lpFileSizeLow,
allocatedSize,
path
);
}
return OptionalLong.of(allocatedSize);
}
|
Install exec system call filtering on Windows.
<p>
Process creation is restricted with {@code SetInformationJobObject/ActiveProcessLimit}.
<p>
Note: This is not intended as a real sandbox. It is another level of security, mostly intended to annoy
security researchers and make their lives more difficult in achieving "remote execution" exploits.
|
java
|
libs/native/src/main/java/org/elasticsearch/nativeaccess/WindowsNativeAccess.java
| 129
|
[
"path"
] |
OptionalLong
| true
| 3
| 6.4
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
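The bit manipulation in `allocatedSizeInBytes` (recombining the signed 32-bit low word and the high word that `GetCompressedFileSizeW` returns into one 64-bit size) can be illustrated with a small sketch. `combine_file_size` is a hypothetical helper, not Elasticsearch code:

```python
# The low DWORD arrives as a signed 32-bit int, so it must be masked
# to its unsigned value before OR-ing it with the shifted high DWORD.
def combine_file_size(high: int, low: int) -> int:
    return (high << 32) | (low & 0xFFFFFFFF)

print(combine_file_size(1, 0))   # 4294967296 (exactly 4 GiB)
print(combine_file_size(0, -4))  # 4294967292 (signed low word wraps around)
```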
getMonthDisplayNames
|
String[] getMonthDisplayNames(final int style) {
// Unfortunately standalone month names are not available in DateFormatSymbols,
// so we have to extract them.
final Map<String, Integer> displayNames = calendar.getDisplayNames(Calendar.MONTH, style, locale);
if (displayNames == null) {
return null;
}
final String[] monthNames = new String[displayNames.size()];
displayNames.forEach((k, v) -> monthNames[v] = k);
return monthNames;
}
|
Gets month names in the requested style.
@param style Must be a valid {@link Calendar#getDisplayNames(int, int, Locale)} month style.
@return Styled names of months
|
java
|
src/main/java/org/apache/commons/lang3/time/CalendarUtils.java
| 160
|
[
"style"
] | true
| 2
| 8.08
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
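For comparison with the `DateFormatSymbols` workaround above, Python's stdlib `calendar` module exposes month display names directly (English names under the default C locale), with index 0 reserved as an empty placeholder:

```python
import calendar

# month_name / month_abbr are 13-element sequences; slot 0 is empty
# so that month numbers 1..12 index directly.
full = list(calendar.month_name)[1:]
short = list(calendar.month_abbr)[1:]
print(full[0], short[0])  # January Jan
print(len(full))          # 12
```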
check_axis_name_return_reason
|
def check_axis_name_return_reason(
name: str, allow_underscore: bool = False
) -> tuple[bool, str]:
"""Check if the given axis name is valid, and a message explaining why if not.
Valid axes names are python identifiers except keywords, and should not start or end with an underscore.
Args:
name (str): the axis name to check
allow_underscore (bool): whether axis names are allowed to start with an underscore
Returns:
tuple[bool, str]: whether the axis name is valid, a message explaining why if not
"""
if not str.isidentifier(name):
return False, "not a valid python identifier"
elif name[0] == "_" or name[-1] == "_":
if name == "_" and allow_underscore:
return True, ""
return False, "axis name should not start or end with underscore"
else:
if keyword.iskeyword(name):
warnings.warn(
f"It is discouraged to use axes names that are keywords: {name}",
RuntimeWarning,
)
if name in ["axis"]:
warnings.warn(
"It is discouraged to use 'axis' as an axis name and will raise an error in future",
FutureWarning,
)
return True, ""
|
Check if the given axis name is valid, and a message explaining why if not.
Valid axes names are python identifiers except keywords, and should not start or end with an underscore.
Args:
name (str): the axis name to check
allow_underscore (bool): whether axis names are allowed to start with an underscore
Returns:
tuple[bool, str]: whether the axis name is valid, a message explaining why if not
|
python
|
functorch/einops/_parsing.py
| 165
|
[
"name",
"allow_underscore"
] |
tuple[bool, str]
| true
| 9
| 7.76
|
pytorch/pytorch
| 96,034
|
google
| false
|
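The validation rules above boil down to three stdlib checks. A minimal sketch with a hypothetical `is_valid_axis_name` helper, dropping the warning and reason-message machinery:

```python
import keyword

# Valid axis name: a Python identifier, not a keyword, and not
# starting or ending with an underscore.
def is_valid_axis_name(name: str) -> bool:
    if not name.isidentifier() or keyword.iskeyword(name):
        return False
    return not (name.startswith("_") or name.endswith("_"))

print(is_valid_axis_name("batch"))   # True
print(is_valid_axis_name("_batch"))  # False (leading underscore)
print(is_valid_axis_name("class"))   # False (keyword)
print(is_valid_axis_name("2d"))      # False (not an identifier)
```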
parseTemplateHead
|
function parseTemplateHead(isTaggedTemplate: boolean): TemplateHead {
if (!isTaggedTemplate && scanner.getTokenFlags() & TokenFlags.IsInvalid) {
reScanTemplateToken(/*isTaggedTemplate*/ false);
}
const fragment = parseLiteralLikeNode(token());
Debug.assert(fragment.kind === SyntaxKind.TemplateHead, "Template head has wrong token kind");
return fragment as TemplateHead;
}
|
Reports a diagnostic error for the current token being an invalid name.
@param blankDiagnostic Diagnostic to report for the case of the name being blank (matched tokenIfBlankName).
@param nameDiagnostic Diagnostic to report for all other cases.
@param tokenIfBlankName Current token if the name was invalid for being blank (not provided / skipped).
language: typescript | file: src/compiler/parser.ts | line: 3,739 | params: [isTaggedTemplate] | type hints: true | complexity: 3 | quality: 6.72 | repo: microsoft/TypeScript (107,154 stars) | style: jsdoc | async: false
clone
|
@Override
public Object clone() {
try {
return cloneReset();
} catch (final CloneNotSupportedException ex) {
return null;
}
}
|
Creates a new instance of this Tokenizer. The new instance is reset so
that it will be at the start of the token list.
If a {@link CloneNotSupportedException} is caught, return {@code null}.
@return a new instance of this Tokenizer which has been reset.
language: java | file: src/main/java/org/apache/commons/lang3/text/StrTokenizer.java | line: 450 | params: [] | return: Object | type hints: true | complexity: 2 | quality: 8.08 | repo: apache/commons-lang (2,896 stars) | style: javadoc | async: false
targetTypeNecessary
|
private boolean targetTypeNecessary(ResolvableType beanType, @Nullable Class<?> beanClass) {
if (beanType.hasGenerics()) {
return true;
}
if (beanClass != null && this.registeredBean.getMergedBeanDefinition().getFactoryMethodName() != null) {
return true;
}
return (beanClass != null && !beanType.toClass().equals(ClassUtils.getUserClass(beanClass)));
}
|
Extract the target class of a public {@link FactoryBean} based on its
constructor. If the implementation does not resolve the target class
because it itself uses a generic, attempt to extract it from the bean type.
@param factoryBeanType the factory bean type
@param beanType the bean type
@return the target class to use
language: java | file: spring-beans/src/main/java/org/springframework/beans/factory/aot/DefaultBeanRegistrationCodeFragments.java | line: 155 | params: [beanType, beanClass] | type hints: true | complexity: 5 | quality: 7.76 | repo: spring-projects/spring-framework (59,386 stars) | style: javadoc | async: false
run
|
def run(self, header, body, partial_args, app=None, interval=None,
countdown=1, max_retries=None, eager=False,
task_id=None, kwargs=None, **options):
"""Execute the chord.
Executing the chord means executing the header and sending the
result to the body. In case of an empty header, the body is
executed immediately.
Arguments:
header (group): The header to execute.
body (Signature): The body to execute.
partial_args (tuple): Arguments to pass to the header.
app (Celery): The Celery app instance.
interval (float): The interval between retries.
countdown (int): The countdown between retries.
max_retries (int): The maximum number of retries.
task_id (str): The task id to use for the body.
kwargs (dict): Keyword arguments to pass to the header.
options (dict): Options to pass to the header.
Returns:
AsyncResult: The result of the body (with the result of the header in the parent of the body).
"""
app = app or self._get_app(body)
group_id = header.options.get('task_id') or uuid()
root_id = body.options.get('root_id')
options = dict(self.options, **options) if options else self.options
if options:
options.pop('task_id', None)
body.options.update(options)
body_task_id = task_id or uuid()
bodyres = body.freeze(body_task_id, group_id=group_id, root_id=root_id)
# Chains should not be passed to the header tasks. See #3771
options.pop('chain', None)
# Neither should chords, for deeply nested chords to work
options.pop('chord', None)
options.pop('task_id', None)
header_result_args = header._freeze_group_tasks(group_id=group_id, chord=body, root_id=root_id)
if header.tasks:
app.backend.apply_chord(
header_result_args,
body,
interval=interval,
countdown=countdown,
max_retries=max_retries,
)
header_result = header.apply_async(partial_args, kwargs, task_id=group_id, **options)
# The execution of a chord body is normally triggered by its header's
# tasks completing. If the header is empty this will never happen, so
# we execute the body manually here.
else:
body.delay([])
header_result = self.app.GroupResult(*header_result_args)
bodyres.parent = header_result
return bodyres
|
Execute the chord.
Executing the chord means executing the header and sending the
result to the body. In case of an empty header, the body is
executed immediately.
Arguments:
header (group): The header to execute.
body (Signature): The body to execute.
partial_args (tuple): Arguments to pass to the header.
app (Celery): The Celery app instance.
interval (float): The interval between retries.
countdown (int): The countdown between retries.
max_retries (int): The maximum number of retries.
task_id (str): The task id to use for the body.
kwargs (dict): Keyword arguments to pass to the header.
options (dict): Options to pass to the header.
Returns:
AsyncResult: The result of the body (with the result of the header in the parent of the body).
|
python
|
celery/canvas.py
| 2,205
|
[
"self",
"header",
"body",
"partial_args",
"app",
"interval",
"countdown",
"max_retries",
"eager",
"task_id",
"kwargs"
] | false
| 8
| 7.2
|
celery/celery
| 27,741
|
google
| false
|
|
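The `run` record above implements Celery's chord primitive: execute every header task, hand the collected results to the body, and — per the empty-header branch — invoke the body immediately with an empty result list when there are no header tasks. A minimal synchronous sketch of that contract, using plain callables instead of Celery signatures (no broker, no backend; all names here are illustrative):

```python
def run_chord(header, body, *args):
    """Toy chord: call every header task with *args, then pass the list of
    results to the body. Mirrors the documented edge case: an empty header
    executes the body immediately with an empty result list."""
    results = [task(*args) for task in header]  # empty header -> empty list
    return body(results)

# usage: three "header tasks" feeding a summing "body"
total = run_chord([lambda: 1, lambda: 2, lambda: 3], sum)
empty = run_chord([], list)  # body runs right away on []
```

This deliberately omits everything the real implementation handles (result freezing, group ids, retries); it only demonstrates the header-then-body data flow.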
get_waiter
|
def get_waiter(self, waiterName: str) -> botocore.waiter.Waiter:
"""
Get an AWS Batch service waiter.
:param waiterName: The name of the waiter. The name should match
the name (including the casing) of the key name in the waiter
model file (typically this is CamelCasing).
:return: a waiter object for the named AWS Batch service
.. note::
AWS Batch might not have any waiters (until botocore PR-1307 is released).
.. code-block:: python
import boto3
boto3.client("batch").waiter_names == []
.. seealso::
- https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html#waiters
- https://github.com/boto/botocore/pull/1307
"""
...
|
Get an AWS Batch service waiter.
:param waiterName: The name of the waiter. The name should match
the name (including the casing) of the key name in the waiter
model file (typically this is CamelCasing).
:return: a waiter object for the named AWS Batch service
.. note::
AWS Batch might not have any waiters (until botocore PR-1307 is released).
.. code-block:: python
import boto3
boto3.client("batch").waiter_names == []
.. seealso::
- https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html#waiters
- https://github.com/boto/botocore/pull/1307
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/batch_client.py
| 71
|
[
"self",
"waiterName"
] |
botocore.waiter.Waiter
| true
| 1
| 6.4
|
apache/airflow
| 43,597
|
sphinx
| false
|
removeExactly
|
@CanIgnoreReturnValue
public boolean removeExactly(@Nullable Object element, int occurrences) {
if (occurrences == 0) {
return true;
}
CollectPreconditions.checkPositive(occurrences, "occurrences");
AtomicInteger existingCounter = safeGet(countMap, element);
if (existingCounter == null) {
return false;
}
while (true) {
int oldValue = existingCounter.get();
if (oldValue < occurrences) {
return false;
}
int newValue = oldValue - occurrences;
if (existingCounter.compareAndSet(oldValue, newValue)) {
if (newValue == 0) {
// Just CASed to 0; remove the entry to clean up the map. If the removal fails,
// another thread has already replaced it with a new counter, which is fine.
countMap.remove(element, existingCounter);
}
return true;
}
}
}
|
Removes exactly the specified number of occurrences of {@code element}, or makes no change if
this is not possible.
<p>This method, in contrast to {@link #remove(Object, int)}, has no effect when the element
count is smaller than {@code occurrences}.
@param element the element to remove
@param occurrences the number of occurrences of {@code element} to remove
@return {@code true} if the removal was possible (including if {@code occurrences} is zero)
@throws IllegalArgumentException if {@code occurrences} is negative
|
java
|
android/guava/src/com/google/common/collect/ConcurrentHashMultiset.java
| 334
|
[
"element",
"occurrences"
] | true
| 7
| 7.76
|
google/guava
| 51,352
|
javadoc
| false
|
|
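The `removeExactly` contract above — remove exactly N occurrences or change nothing, with zero always succeeding — can be expressed on a plain `collections.Counter`. This is a single-threaded sketch (no compare-and-set loop), kept term-for-term close to the documented behavior, including cleaning up the entry when the count reaches zero:

```python
from collections import Counter

def remove_exactly(multiset: Counter, element, occurrences: int) -> bool:
    """Remove exactly `occurrences` copies of `element`, or do nothing.

    Returns False and leaves the multiset unchanged when the current count
    is smaller than `occurrences`; zero occurrences always succeeds.
    """
    if occurrences < 0:
        raise ValueError("occurrences must be non-negative")
    if occurrences == 0:
        return True
    if multiset[element] < occurrences:  # Counter returns 0 for missing keys
        return False
    multiset[element] -= occurrences
    if multiset[element] == 0:
        del multiset[element]  # clean up, like removing the counter entry
    return True
```

The concurrent version in Guava needs the CAS retry loop precisely because another thread may change the count between the read and the write; the decision logic is the same.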
loadDefaults
|
@Override
protected void loadDefaults(LoggingInitializationContext initializationContext, @Nullable LogFile logFile) {
String location = getPackagedConfigFile((logFile != null) ? "log4j2-file.xml" : "log4j2.xml");
load(initializationContext, location, logFile);
}
|
Return the configuration location. The result may be:
<ul>
<li>{@code null}: if DefaultConfiguration is used (no explicit config loaded)</li>
<li>A file path: if provided explicitly by the user</li>
<li>A URI: if loaded from the classpath default or a custom location</li>
</ul>
@param configuration the source configuration
@return the config location or {@code null}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/log4j2/Log4J2LoggingSystem.java
| 258
|
[
"initializationContext",
"logFile"
] |
void
| true
| 2
| 7.28
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
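The selection in `loadDefaults` is a one-liner: pick the file-appending packaged config when a log file is configured, else the console-only default. A sketch of just that branch (file names copied from the record above; the classpath-resolution part is omitted):

```python
def default_config_location(log_file) -> str:
    """Choose the packaged Log4j2 default config the way loadDefaults does:
    the file-appending variant when a log file is configured, otherwise the
    console-only default."""
    return "log4j2-file.xml" if log_file is not None else "log4j2.xml"
```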
get_tensor_storages
|
def get_tensor_storages(tensor: torch.Tensor) -> set[StorageWeakRef]:
"""
Get storage references from a tensor.
Handles regular tensors. Raises NotImplementedError for sparse tensors
and traceable wrapper subclasses.
Args:
tensor: The tensor to extract storages from
Returns:
Set of StorageWeakRef objects for the tensor's storage(s)
"""
from torch.multiprocessing.reductions import StorageWeakRef
from torch.utils._python_dispatch import is_traceable_wrapper_subclass
storages: set[StorageWeakRef] = set()
if not isinstance(tensor, torch.Tensor):
return storages
if tensor.is_sparse or tensor.is_sparse_csr:
raise NotImplementedError("get_tensor_storages does not support sparse tensors")
if is_traceable_wrapper_subclass(tensor):
raise NotImplementedError(
"get_tensor_storages does not support traceable wrapper subclasses"
)
else:
storages.add(StorageWeakRef(tensor._typed_storage()))
return storages
|
Get storage references from a tensor.
Handles regular tensors. Raises NotImplementedError for sparse tensors
and traceable wrapper subclasses.
Args:
tensor: The tensor to extract storages from
Returns:
Set of StorageWeakRef objects for the tensor's storage(s)
|
python
|
torch/_dynamo/variables/higher_order_ops.py
| 432
|
[
"tensor"
] |
set[StorageWeakRef]
| true
| 6
| 7.6
|
pytorch/pytorch
| 96,034
|
google
| false
|
refreshForAotProcessing
|
public void refreshForAotProcessing(RuntimeHints runtimeHints) {
if (logger.isDebugEnabled()) {
logger.debug("Preparing bean factory for AOT processing");
}
prepareRefresh();
obtainFreshBeanFactory();
prepareBeanFactory(this.beanFactory);
postProcessBeanFactory(this.beanFactory);
invokeBeanFactoryPostProcessors(this.beanFactory);
this.beanFactory.freezeConfiguration();
PostProcessorRegistrationDelegate.invokeMergedBeanDefinitionPostProcessors(this.beanFactory);
preDetermineBeanTypes(runtimeHints);
}
|
Load or refresh the persistent representation of the configuration up to
a point where the underlying bean factory is ready to create bean
instances.
<p>This variant of {@link #refresh()} is used by Ahead of Time (AOT)
processing that optimizes the application context, typically at build time.
<p>In this mode, only {@link BeanDefinitionRegistryPostProcessor} and
{@link MergedBeanDefinitionPostProcessor} are invoked.
@param runtimeHints the runtime hints
@throws BeansException if the bean factory could not be initialized
@throws IllegalStateException if already initialized and multiple refresh
attempts are not supported
@since 6.0
|
java
|
spring-context/src/main/java/org/springframework/context/support/GenericApplicationContext.java
| 406
|
[
"runtimeHints"
] |
void
| true
| 2
| 6.24
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
formatPeriod
|
public static String formatPeriod(final long startMillis, final long endMillis, final String format, final boolean padWithZeros,
final TimeZone timezone) {
Validate.isTrue(startMillis <= endMillis, "startMillis must not be greater than endMillis");
// Used to optimize for differences under 28 days and
// called formatDuration(millis, format); however this did not work
// over leap years.
// TODO: Compare performance to see if anything was lost by
// losing this optimization.
final Token[] tokens = lexx(format);
// time zones get funky around 0, so normalizing everything to GMT
// stops the hours being off
final Calendar start = Calendar.getInstance(timezone);
start.setTime(new Date(startMillis));
final Calendar end = Calendar.getInstance(timezone);
end.setTime(new Date(endMillis));
// initial estimates
long milliseconds = end.get(Calendar.MILLISECOND) - start.get(Calendar.MILLISECOND);
int seconds = end.get(Calendar.SECOND) - start.get(Calendar.SECOND);
int minutes = end.get(Calendar.MINUTE) - start.get(Calendar.MINUTE);
int hours = end.get(Calendar.HOUR_OF_DAY) - start.get(Calendar.HOUR_OF_DAY);
int days = end.get(Calendar.DAY_OF_MONTH) - start.get(Calendar.DAY_OF_MONTH);
int months = end.get(Calendar.MONTH) - start.get(Calendar.MONTH);
int years = end.get(Calendar.YEAR) - start.get(Calendar.YEAR);
// each initial estimate is adjusted in case it is under 0
while (milliseconds < 0) {
milliseconds += DateUtils.MILLIS_PER_SECOND;
seconds -= 1;
}
while (seconds < 0) {
seconds += SECONDS_PER_MINUTES;
minutes -= 1;
}
while (minutes < 0) {
minutes += MINUTES_PER_HOUR;
hours -= 1;
}
while (hours < 0) {
hours += HOURS_PER_DAY;
days -= 1;
}
if (Token.containsTokenWithValue(tokens, M)) {
while (days < 0) {
days += start.getActualMaximum(Calendar.DAY_OF_MONTH);
months -= 1;
start.add(Calendar.MONTH, 1);
}
while (months < 0) {
months += 12;
years -= 1;
}
if (!Token.containsTokenWithValue(tokens, y) && years != 0) {
while (years != 0) {
months += 12 * years;
years = 0;
}
}
} else {
// there are no M's in the format string
if (!Token.containsTokenWithValue(tokens, y)) {
int target = end.get(Calendar.YEAR);
if (months < 0) {
// target is end-year -1
target -= 1;
}
while (start.get(Calendar.YEAR) != target) {
days += start.getActualMaximum(Calendar.DAY_OF_YEAR) - start.get(Calendar.DAY_OF_YEAR);
// Not sure I grok why this is needed, but the brutal tests show it is
if (start instanceof GregorianCalendar &&
start.get(Calendar.MONTH) == Calendar.FEBRUARY &&
start.get(Calendar.DAY_OF_MONTH) == 29) {
days += 1;
}
start.add(Calendar.YEAR, 1);
days += start.get(Calendar.DAY_OF_YEAR);
}
years = 0;
}
while (start.get(Calendar.MONTH) != end.get(Calendar.MONTH)) {
days += start.getActualMaximum(Calendar.DAY_OF_MONTH);
start.add(Calendar.MONTH, 1);
}
months = 0;
while (days < 0) {
days += start.getActualMaximum(Calendar.DAY_OF_MONTH);
months -= 1;
start.add(Calendar.MONTH, 1);
}
}
// The rest of this code adds in values that
// aren't requested. This allows the user to ask for the
// number of months and get the real count and not just 0->11.
if (!Token.containsTokenWithValue(tokens, d)) {
hours += HOURS_PER_DAY * days;
days = 0;
}
if (!Token.containsTokenWithValue(tokens, H)) {
minutes += MINUTES_PER_HOUR * hours;
hours = 0;
}
if (!Token.containsTokenWithValue(tokens, m)) {
seconds += SECONDS_PER_MINUTES * minutes;
minutes = 0;
}
if (!Token.containsTokenWithValue(tokens, s)) {
milliseconds += DateUtils.MILLIS_PER_SECOND * seconds;
seconds = 0;
}
return format(tokens, years, months, days, hours, minutes, seconds, milliseconds, padWithZeros);
}
|
<p>Formats the time gap as a string, using the specified format.
Padding the left-hand side side of numbers with zeroes is optional and
the time zone may be specified.
<p>When calculating the difference between months/days, it chooses to
calculate months first. So when working out the number of months and
days between January 15th and March 10th, it choose 1 month and
23 days gained by choosing January->February = 1 month and then
calculating days forwards, and not the 1 month and 26 days gained by
choosing March -> February = 1 month and then calculating days
backwards.</p>
<p>For more control, the <a href="https://www.joda.org/joda-time/">Joda-Time</a>
library is recommended.</p>
@param startMillis the start of the duration
@param endMillis the end of the duration
@param format the way in which to format the duration, not null
@param padWithZeros whether to pad the left-hand side side of numbers with 0's
@param timezone the millis are defined in
@return the formatted duration, not null
@throws IllegalArgumentException if startMillis is greater than endMillis
|
java
|
src/main/java/org/apache/commons/lang3/time/DurationFormatUtils.java
| 529
|
[
"startMillis",
"endMillis",
"format",
"padWithZeros",
"timezone"
] |
String
| true
| 23
| 6.64
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
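The core trick in `formatPeriod` is the run of `while` loops that normalize field-wise differences: each field that went negative borrows from the next-larger unit. A sketch of that normalization for the time fields only (the month/day handling needs calendar lengths and is omitted):

```python
def normalize_time_diff(ms, s, m, h):
    """Borrow from the next-larger unit until every field is non-negative,
    exactly like the while-loops in formatPeriod (time fields only)."""
    while ms < 0:
        ms += 1000  # one second's worth of milliseconds
        s -= 1
    while s < 0:
        s += 60
        m -= 1
    while m < 0:
        m += 60
        h -= 1
    return ms, s, m, h

# 10:30:00.000 minus 09:45:30.500 gives raw field diffs (-500, -30, -15, 1),
# which normalize to 0h 44m 29s 500ms
fields = normalize_time_diff(-500, -30, -15, 1)
```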
getMessage
|
private static String getMessage(PropertySource<?> propertySource, @Nullable ConfigDataResource location,
String propertyName, @Nullable Origin origin) {
StringBuilder message = new StringBuilder("Inactive property source '");
message.append(propertySource.getName());
if (location != null) {
message.append("' imported from location '");
message.append(location);
}
message.append("' cannot contain property '");
message.append(propertyName);
message.append("'");
if (origin != null) {
message.append(" [origin: ");
message.append(origin);
message.append("]");
}
return message.toString();
}
|
Create a new {@link InactiveConfigDataAccessException} instance.
@param propertySource the inactive property source
@param location the {@link ConfigDataResource} of the property source or
{@code null} if the source was not loaded from {@link ConfigData}.
@param propertyName the name of the property
@param origin the origin or the property or {@code null}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/config/InactiveConfigDataAccessException.java
| 64
|
[
"propertySource",
"location",
"propertyName",
"origin"
] |
String
| true
| 3
| 6.08
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
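The `getMessage` builder above assembles its diagnostic from optional parts. A direct, dependency-free transliteration to Python, producing byte-for-byte the same strings:

```python
def inactive_property_message(source_name, location, property_name, origin):
    """Build the same diagnostic string as the Java method above: the
    location and origin segments are only appended when present."""
    parts = [f"Inactive property source '{source_name}"]
    if location is not None:
        parts.append(f"' imported from location '{location}")
    parts.append(f"' cannot contain property '{property_name}'")
    if origin is not None:
        parts.append(f" [origin: {origin}]")
    return "".join(parts)
```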
add
|
private static Date add(final Date date, final int calendarField, final int amount) {
validateDateNotNull(date);
final Calendar c = Calendar.getInstance();
c.setTime(date);
c.add(calendarField, amount);
return c.getTime();
}
|
Adds to a date returning a new object.
The original {@link Date} is unchanged.
@param date the date, not null.
@param calendarField the calendar field to add to.
@param amount the amount to add, may be negative.
@return the new {@link Date} with the amount added.
@throws NullPointerException if the date is null.
|
java
|
src/main/java/org/apache/commons/lang3/time/DateUtils.java
| 220
|
[
"date",
"calendarField",
"amount"
] |
Date
| true
| 1
| 6.88
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
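The key point of `DateUtils.add` is that it returns a new object and leaves the input unchanged. Python's `datetime` gives that for free because it is immutable; a sketch covering only the day field (the Java version is generic over calendar fields):

```python
from datetime import datetime, timedelta

def add_days(date: datetime, amount: int) -> datetime:
    """Return a new datetime with `amount` days added, leaving the input
    unchanged -- the same contract as DateUtils.add, for DAY_OF_MONTH only.
    Negative amounts step backwards."""
    return date + timedelta(days=amount)
```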
_compute_n_patches
|
def _compute_n_patches(i_h, i_w, p_h, p_w, max_patches=None):
"""Compute the number of patches that will be extracted in an image.
Read more in the :ref:`User Guide <image_feature_extraction>`.
Parameters
----------
i_h : int
The image height
i_w : int
The image with
p_h : int
The height of a patch
p_w : int
The width of a patch
max_patches : int or float, default=None
The maximum number of patches to extract. If `max_patches` is a float
between 0 and 1, it is taken to be a proportion of the total number
of patches. If `max_patches` is None, all possible patches are extracted.
"""
n_h = i_h - p_h + 1
n_w = i_w - p_w + 1
all_patches = n_h * n_w
if max_patches:
if isinstance(max_patches, (Integral)) and max_patches < all_patches:
return max_patches
elif isinstance(max_patches, (Integral)) and max_patches >= all_patches:
return all_patches
elif isinstance(max_patches, (Real)) and 0 < max_patches < 1:
return int(max_patches * all_patches)
else:
raise ValueError("Invalid value for max_patches: %r" % max_patches)
else:
return all_patches
|
Compute the number of patches that will be extracted in an image.
Read more in the :ref:`User Guide <image_feature_extraction>`.
Parameters
----------
i_h : int
The image height
i_w : int
The image with
p_h : int
The height of a patch
p_w : int
The width of a patch
max_patches : int or float, default=None
The maximum number of patches to extract. If `max_patches` is a float
between 0 and 1, it is taken to be a proportion of the total number
of patches. If `max_patches` is None, all possible patches are extracted.
|
python
|
sklearn/feature_extraction/image.py
| 259
|
[
"i_h",
"i_w",
"p_h",
"p_w",
"max_patches"
] | false
| 10
| 6.08
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
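The arithmetic behind `_compute_n_patches` is that a `(p_h, p_w)` window fits at `(i_h - p_h + 1) * (i_w - p_w + 1)` positions, and `max_patches` then caps that count either absolutely or as a proportion. A compact restatement of the same logic (using plain `int`/`float` checks instead of `numbers.Integral`/`Real`):

```python
def n_patches(i_h, i_w, p_h, p_w, max_patches=None):
    """Count sliding-window patches, mirroring _compute_n_patches above:
    every (p_h, p_w) window position in an (i_h, i_w) image, optionally
    capped by an absolute count or a 0-1 proportion."""
    total = (i_h - p_h + 1) * (i_w - p_w + 1)
    if max_patches is None:
        return total
    if isinstance(max_patches, int):
        return min(max_patches, total)
    if 0 < max_patches < 1:
        return int(max_patches * total)
    raise ValueError(f"Invalid value for max_patches: {max_patches!r}")
```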
ordinalIndexOf
|
private static int ordinalIndexOf(final CharSequence str, final CharSequence searchStr, final int ordinal, final boolean lastIndex) {
if (str == null || searchStr == null || ordinal <= 0) {
return INDEX_NOT_FOUND;
}
if (searchStr.length() == 0) {
return lastIndex ? str.length() : 0;
}
int found = 0;
// set the initial index beyond the end of the string
// this is to allow for the initial index decrement/increment
int index = lastIndex ? str.length() : INDEX_NOT_FOUND;
do {
if (lastIndex) {
index = CharSequenceUtils.lastIndexOf(str, searchStr, index - 1); // step backwards through string
} else {
index = CharSequenceUtils.indexOf(str, searchStr, index + 1); // step forwards through string
}
if (index < 0) {
return index;
}
found++;
} while (found < ordinal);
return index;
}
|
Finds the n-th index within a String, handling {@code null}. This method uses {@link String#indexOf(String)} if possible.
<p>
Note that matches may overlap.
<p>
<p>
A {@code null} CharSequence will return {@code -1}.
</p>
@param str the CharSequence to check, may be null.
@param searchStr the CharSequence to find, may be null.
@param ordinal the n-th {@code searchStr} to find, overlapping matches are allowed.
@param lastIndex true if lastOrdinalIndexOf() otherwise false if ordinalIndexOf().
@return the n-th index of the search CharSequence, {@code -1} ({@code INDEX_NOT_FOUND}) if no match or {@code null} string input.
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 5,489
|
[
"str",
"searchStr",
"ordinal",
"lastIndex"
] | true
| 9
| 8.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
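The `ordinalIndexOf` helper above scans for the n-th occurrence — overlapping matches allowed — stepping forwards or backwards through the string. A Python port of the same loop (null handling omitted; `str.find`/`str.rfind` stand in for `CharSequenceUtils`, with the `rfind` end bound adjusted so overlapping matches near the scan point still count):

```python
def ordinal_index_of(s, search, ordinal, last=False):
    """Index of the n-th (possibly overlapping) occurrence of `search` in
    `s`, scanning forwards, or backwards when `last` is True. Returns -1
    when there are fewer than `ordinal` occurrences."""
    if ordinal <= 0:
        return -1
    if search == "":
        return len(s) if last else 0
    # start beyond the string so the first step lands on a valid position
    index = len(s) if last else -1
    for _ in range(ordinal):
        if last:
            # accept occurrences that *start* before `index`
            index = s.rfind(search, 0, index - 1 + len(search))
        else:
            index = s.find(search, index + 1)
        if index < 0:
            return -1
    return index
```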
set_task_instance_state
|
def set_task_instance_state(
self,
*,
task_id: str,
map_indexes: Collection[int] | None = None,
run_id: str | None = None,
state: TaskInstanceState,
upstream: bool = False,
downstream: bool = False,
future: bool = False,
past: bool = False,
commit: bool = True,
session=NEW_SESSION,
) -> list[TaskInstance]:
"""
Set the state of a TaskInstance and clear downstream tasks in failed or upstream_failed state.
:param task_id: Task ID of the TaskInstance
:param map_indexes: Only set TaskInstance if its map_index matches.
If None (default), all mapped TaskInstances of the task are set.
:param run_id: The run_id of the TaskInstance
:param state: State to set the TaskInstance to
:param upstream: Include all upstream tasks of the given task_id
:param downstream: Include all downstream tasks of the given task_id
:param future: Include all future TaskInstances of the given task_id
:param commit: Commit changes
:param past: Include all past TaskInstances of the given task_id
"""
from airflow.api.common.mark_tasks import set_state
task = self.get_task(task_id)
task.dag = self
tasks_to_set_state: list[SerializedOperator | tuple[SerializedOperator, int]]
if map_indexes is None:
tasks_to_set_state = [task]
else:
tasks_to_set_state = [(task, map_index) for map_index in map_indexes]
altered = set_state(
tasks=tasks_to_set_state,
run_id=run_id,
upstream=upstream,
downstream=downstream,
future=future,
past=past,
state=state,
commit=commit,
session=session,
)
if not commit:
return altered
# Clear downstream tasks that are in failed/upstream_failed state to resume them.
# Flush the session so that the tasks marked success are reflected in the db.
session.flush()
subset = self.partial_subset(
task_ids={task_id},
include_downstream=True,
include_upstream=False,
)
# Raises an error if not found
dr_id, logical_date = session.execute(
select(DagRun.id, DagRun.logical_date).where(
DagRun.run_id == run_id, DagRun.dag_id == self.dag_id
)
).one()
# Now we want to clear downstreams of tasks that had their state set...
clear_kwargs = {
"only_failed": True,
"session": session,
# Exclude the task itself from being cleared.
"exclude_task_ids": frozenset((task_id,)),
}
if not future and not past: # Simple case 1: we're only dealing with exactly one run.
clear_kwargs["run_id"] = run_id
subset.clear(**clear_kwargs)
elif future and past: # Simple case 2: we're clearing ALL runs.
subset.clear(**clear_kwargs)
else: # Complex cases: we may have more than one run, based on a date range.
# Make 'future' and 'past' make some sense when multiple runs exist
# for the same logical date. We order runs by their id and only
# clear runs have larger/smaller ids.
exclude_run_id_stmt = select(DagRun.run_id).where(DagRun.logical_date == logical_date)
if future:
clear_kwargs["start_date"] = logical_date
exclude_run_id_stmt = exclude_run_id_stmt.where(DagRun.id > dr_id)
else:
clear_kwargs["end_date"] = logical_date
exclude_run_id_stmt = exclude_run_id_stmt.where(DagRun.id < dr_id)
subset.clear(exclude_run_ids=frozenset(session.scalars(exclude_run_id_stmt)), **clear_kwargs)
return altered
|
Set the state of a TaskInstance and clear downstream tasks in failed or upstream_failed state.
:param task_id: Task ID of the TaskInstance
:param map_indexes: Only set TaskInstance if its map_index matches.
If None (default), all mapped TaskInstances of the task are set.
:param run_id: The run_id of the TaskInstance
:param state: State to set the TaskInstance to
:param upstream: Include all upstream tasks of the given task_id
:param downstream: Include all downstream tasks of the given task_id
:param future: Include all future TaskInstances of the given task_id
:param commit: Commit changes
:param past: Include all past TaskInstances of the given task_id
|
python
|
airflow-core/src/airflow/serialization/serialized_objects.py
| 3,220
|
[
"self",
"task_id",
"map_indexes",
"run_id",
"state",
"upstream",
"downstream",
"future",
"past",
"commit",
"session"
] |
list[TaskInstance]
| true
| 11
| 6.96
|
apache/airflow
| 43,597
|
sphinx
| false
|
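After setting the state, `set_task_instance_state` decides which DAG runs get their failed downstream tasks cleared, based on the `future`/`past` flags. A heavily simplified sketch of just that four-way branch, over run ids given in chronological order (this ignores the same-logical-date tie-breaking the real code does with run-row ids; all names are illustrative):

```python
def runs_to_clear(run_ids, current, future, past):
    """Select which runs to clear, following the four branches above:
    only the current run, all runs, or the current run plus everything
    after/before it."""
    i = run_ids.index(current)
    if future and past:
        return list(run_ids)       # clear ALL runs
    if future:
        return list(run_ids[i:])   # current and later
    if past:
        return list(run_ids[: i + 1])  # earlier and current
    return [current]               # simple case: exactly one run
```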
getFieldOrDefault
|
private Object getFieldOrDefault(BoundField field) {
Object value = this.values[field.index];
if (value != null)
return value;
else if (field.def.hasDefaultValue)
return field.def.defaultValue;
else if (field.def.type.isNullable())
return null;
else
throw new SchemaException("Missing value for field '" + field.def.name + "' which has no default value.");
}
|
Return the value of the given pre-validated field, or if the value is missing return the default value.
@param field The field for which to get the default value
@throws SchemaException if the field has no value and has no default.
|
java
|
clients/src/main/java/org/apache/kafka/common/protocol/types/Struct.java
| 53
|
[
"field"
] |
Object
| true
| 4
| 6.72
|
apache/kafka
| 31,560
|
javadoc
| false
|
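The `getFieldOrDefault` resolution order — explicit value, then schema default, then `None` for nullable fields, otherwise an error — transliterates directly. A sketch with the field definition flattened into parameters (names illustrative):

```python
class SchemaError(Exception):
    """Stands in for Kafka's SchemaException in this sketch."""

def field_or_default(value, has_default, default, nullable, name):
    """Resolve a struct field the way getFieldOrDefault does: explicit
    value first, then the schema default, then None for nullable fields,
    otherwise raise."""
    if value is not None:
        return value
    if has_default:
        return default
    if nullable:
        return None
    raise SchemaError(f"Missing value for field '{name}' which has no default value.")
```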
replaceImpl
|
private void replaceImpl(final int startIndex, final int endIndex, final int removeLen, final String insertStr, final int insertLen) {
final int newSize = size - removeLen + insertLen;
if (insertLen != removeLen) {
ensureCapacity(newSize);
System.arraycopy(buffer, endIndex, buffer, startIndex + insertLen, size - endIndex);
size = newSize;
}
if (insertLen > 0) {
insertStr.getChars(0, insertLen, buffer, startIndex);
}
}
|
Internal method to delete a range without validation.
@param startIndex the start index, must be valid
@param endIndex the end index (exclusive), must be valid
@param removeLen the length to remove (endIndex - startIndex), must be valid
@param insertStr the string to replace with, null means delete range
@param insertLen the length of the insert string, must be valid
@throws IndexOutOfBoundsException if any index is invalid
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 2,684
|
[
"startIndex",
"endIndex",
"removeLen",
"insertStr",
"insertLen"
] |
void
| true
| 3
| 6.4
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
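What `replaceImpl` does with `ensureCapacity` and `System.arraycopy` — shift the tail to make room, then copy the replacement in — is exactly what Python's slice assignment on a list does in one step. A sketch on a character list:

```python
def replace_range(buffer: list, start: int, end: int, insert: str) -> None:
    """Replace buffer[start:end] with the characters of `insert`, in place.
    Slice assignment grows or shrinks the list as needed, which is the
    manual arraycopy/resize work replaceImpl performs; an empty `insert`
    deletes the range."""
    buffer[start:end] = list(insert)
```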
median
|
@SafeVarargs
public static <T extends Comparable<? super T>> T median(final T... items) {
Validate.notEmpty(items);
Validate.noNullElements(items);
final TreeSet<T> sort = new TreeSet<>();
Collections.addAll(sort, items);
return (T) sort.toArray()[(sort.size() - 1) / 2];
}
|
Finds the "best guess" middle value among comparables. If there is an even number of total values, the lower of the two middle values will be returned.
@param <T> type of values processed by this method.
@param items to compare.
@return T at middle position.
@throws NullPointerException if items is {@code null}.
@throws IllegalArgumentException if items is empty or contains {@code null} values.
@since 3.0.1
|
java
|
src/main/java/org/apache/commons/lang3/ObjectUtils.java
| 1,076
|
[] |
T
| true
| 1
| 6.88
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
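Note a quirk of `ObjectUtils.median` above: it sorts the items into a `TreeSet`, so duplicates collapse before the lower-middle element is picked. A Python port that reproduces both the lower-middle rule and that deduplication:

```python
def median(*items):
    """Lower-middle element among the *distinct* items, mirroring
    ObjectUtils.median above -- including its quirk that duplicates
    collapse (the Java code sorts into a TreeSet)."""
    if not items or any(x is None for x in items):
        raise ValueError("items must be non-empty and contain no None values")
    distinct = sorted(set(items))
    return distinct[(len(distinct) - 1) // 2]
```

With an even number of distinct values the lower of the two middle values is returned, matching the javadoc.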
_generate_env_for_docker_compose_file_if_needed
|
def _generate_env_for_docker_compose_file_if_needed(env: dict[str, str]):
"""
Generates docker-compose env file if needed.
:param env: dictionary of env variables to use for docker-compose and docker env files.
Writes env files for docker and docker compose to make sure the envs will be passed
to docker-compose/docker when running commands.
The list of variables might change over time, and we want to keep the list updated only in
one place (above env_variables_for_docker_commands method). So we need to regenerate the env
files automatically when new variable is added to the list or removed.
Docker-Compose based tests can start in parallel, so we want to make sure we generate it once
per invocation of breeze command otherwise there could be nasty race condition that
the file would be empty while another compose tries to use it when starting.
Also, it means that we need to pass the values through environment rather than writing
them to the file directly, because they might differ between different parallel runs
of compose or docker.
Unfortunately docker and docker-compose do not share the same env files any more as of Compose V2
format for passing variables from the environment, so we need to generate both files.
Documentation is a bit vague about this.
The docker file contain simply list of all variables that should be passed to docker.
See https://docs.docker.com/engine/reference/commandline/run/#env
> When running the command, the Docker CLI client checks the value the variable has in
> your local environment and passes it to the container. If no = is provided and that
> variable is not exported in your local environment, the variable isn't set in the container.
The docker-compose file should instead contain VARIABLE=${VARIABLE} for each variable
that should be passed to docker compose.
From https://docs.docker.com/compose/compose-file/05-services/#env_file
> VAL may be omitted, in such cases the variable value is an empty string. =VAL may be omitted,
> in such cases the variable is unset.
"""
from filelock import FileLock
with FileLock(GENERATED_DOCKER_LOCK_PATH):
if GENERATED_DOCKER_ENV_PATH.exists():
generated_keys = GENERATED_DOCKER_ENV_PATH.read_text().splitlines()
if set(env.keys()) == set(generated_keys):
# we check if the set of env variables had not changed since last run
# if so - cool, we do not need to do anything else
return
if get_verbose():
get_console().print(
f"[info]The keys has changed vs last run. Regenerating[/]: "
f"{GENERATED_DOCKER_ENV_PATH} and {GENERATED_DOCKER_COMPOSE_ENV_PATH}"
)
if get_verbose():
get_console().print(f"[info]Generating new docker env file [/]: {GENERATED_DOCKER_ENV_PATH}")
GENERATED_DOCKER_ENV_PATH.write_text("\n".join(sorted(env.keys())))
if get_verbose():
get_console().print(
f"[info]Generating new docker compose env file [/]: {GENERATED_DOCKER_COMPOSE_ENV_PATH}"
)
GENERATED_DOCKER_COMPOSE_ENV_PATH.write_text(
"\n".join([f"{k}=${{{k}}}" for k in sorted(env.keys())])
)
|
Generates docker-compose env file if needed.
:param env: dictionary of env variables to use for docker-compose and docker env files.
Writes env files for docker and docker compose to make sure the envs will be passed
to docker-compose/docker when running commands.
The list of variables might change over time, and we want to keep the list updated only in
one place (above env_variables_for_docker_commands method). So we need to regenerate the env
files automatically when new variable is added to the list or removed.
Docker-Compose based tests can start in parallel, so we want to make sure we generate it once
per invocation of breeze command otherwise there could be nasty race condition that
the file would be empty while another compose tries to use it when starting.
Also, it means that we need to pass the values through environment rather than writing
them to the file directly, because they might differ between different parallel runs
of compose or docker.
Unfortunately docker and docker-compose do not share the same env files any more as of Compose V2
format for passing variables from the environment, so we need to generate both files.
Documentation is a bit vague about this.
The docker file contain simply list of all variables that should be passed to docker.
See https://docs.docker.com/engine/reference/commandline/run/#env
> When running the command, the Docker CLI client checks the value the variable has in
> your local environment and passes it to the container. If no = is provided and that
> variable is not exported in your local environment, the variable isn't set in the container.
The docker-compose file should instead contain VARIABLE=${VARIABLE} for each variable
that should be passed to docker compose.
From https://docs.docker.com/compose/compose-file/05-services/#env_file
> VAL may be omitted, in such cases the variable value is an empty string. =VAL may be omitted,
> in such cases the variable is unset.
|
python
|
dev/breeze/src/airflow_breeze/params/shell_params.py
| 732
|
[
"env"
] | true
| 6
| 7.12
|
apache/airflow
| 43,597
|
sphinx
| false
|
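The docstring above describes two distinct env-file formats: the plain docker file lists only the sorted variable names (values are resolved from the caller's environment), while the compose file maps each `VAR` to `${VAR}`. The rendering itself is small enough to sketch:

```python
def docker_env_files(env: dict) -> tuple:
    """Render the two env-file formats described above: names only for
    `docker run --env-file`, and VAR=${VAR} lines for docker compose."""
    names = sorted(env)
    docker = "\n".join(names)
    compose = "\n".join(f"{k}=${{{k}}}" for k in names)
    return docker, compose
```

The real function additionally guards the write with a file lock and skips regeneration when the key set is unchanged; this sketch covers only the file contents.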