Columns — function_name: string (1–57 chars) · function_code: string (20–4.99k chars) · documentation: string (50–2k chars) · language: 5 classes · file_path: string (8–166 chars) · line_number: int32 (4–16.7k) · parameters: list (0–20 items) · return_type: string (0–131 chars) · has_type_hints: bool (2 classes) · complexity: int32 (1–51) · quality_score: float32 (6–9.68) · repo_name: 34 classes · repo_stars: int32 (2.9k–242k) · docstring_style: 7 classes · is_async: bool (2 classes)

| function_name | function_code | documentation | language | file_path | line_number | parameters | return_type | has_type_hints | complexity | quality_score | repo_name | repo_stars | docstring_style | is_async |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
performRequest
|
private Response performRequest(final NodeTuple<Iterator<Node>> tuple, final InternalRequest request, Exception previousException)
throws IOException {
RequestContext context = request.createContextForNextAttempt(tuple.nodes.next(), tuple.authCache);
HttpResponse httpResponse;
try {
httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
} catch (Exception e) {
RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
onFailure(context.node);
Exception cause = extractAndWrapCause(e);
addSuppressedException(previousException, cause);
if (isRetryableException(e) && tuple.nodes.hasNext()) {
return performRequest(tuple, request, cause);
}
if (cause instanceof IOException) {
throw (IOException) cause;
}
if (cause instanceof RuntimeException) {
throw (RuntimeException) cause;
}
throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
}
ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
if (responseOrResponseException.responseException == null) {
return responseOrResponseException.response;
}
addSuppressedException(previousException, responseOrResponseException.responseException);
if (tuple.nodes.hasNext()) {
return performRequest(tuple, request, responseOrResponseException.responseException);
}
throw responseOrResponseException.responseException;
}
|
Sends a request to the Elasticsearch cluster that the client points to.
Blocks until the request is completed and returns its response or fails
by throwing an exception. Selects a host out of the provided ones in a
round-robin fashion. Failing hosts are marked dead and retried after a
certain amount of time (minimum 1 minute, maximum 30 minutes), depending
on how many times they previously failed (the more failures, the later
they will be retried). In case of failures all of the alive nodes (or
dead nodes that deserve a retry) are retried until one responds or none
of them does, in which case an {@link IOException} will be thrown.
This method works by performing an asynchronous call and waiting
for the result. If the asynchronous call throws an exception we wrap
it and rethrow it so that the stack trace attached to the exception
contains the call site. While we attempt to preserve the original
exception this isn't always possible and likely haven't covered all of
the cases. You can get the original exception from
{@link Exception#getCause()}.
@param request the request to perform
@return the response returned by Elasticsearch
@throws IOException in case of a problem or the connection was aborted
@throws ClientProtocolException in case of an http protocol error
@throws ResponseException in case Elasticsearch responded with a status code that indicated an error
|
java
|
client/rest/src/main/java/org/elasticsearch/client/RestClient.java
| 295
|
[
"tuple",
"request",
"previousException"
] |
Response
| true
| 8
| 7.92
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
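The `performRequest` record above illustrates round-robin retry with suppressed-exception chaining: each failed node is marked dead, the failure is attached to the eventual error, and the next node is tried. A minimal Python sketch of the same control flow (the `send` callable and node names are hypothetical, not part of the Elasticsearch client):

```python
def perform_request(nodes, send, is_retryable=lambda exc: True):
    """Try each node in turn; chain earlier failures onto the final error."""
    last_exc = None
    for node in nodes:
        try:
            return send(node)
        except Exception as exc:
            exc.__context__ = last_exc  # analogous to addSuppressedException
            last_exc = exc
            if not is_retryable(exc):
                raise
    if last_exc is None:
        raise ValueError("no nodes to try")
    # Node iterator exhausted without a success: rethrow the last cause.
    raise last_exc
```

As in the Java version, a retryable failure moves on to the next node, while a non-retryable one propagates immediately with earlier failures preserved on the exception chain.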
provide_api_client
|
def provide_api_client(
kind: Literal[ClientKind.CLI, ClientKind.AUTH] = ClientKind.CLI,
) -> Callable[[Callable[PS, RT]], Callable[PS, RT]]:
"""
Provide a CLI API Client to the decorated function.
CLI API Client shouldn't be passed to the function when this wrapper is used
if the purpose is not mocking or testing.
If you want to reuse a CLI API Client or run the function as part of
an API call, pass it to the function; otherwise this wrapper
will create one and close it for you.
"""
def decorator(func: Callable[PS, RT]) -> Callable[PS, RT]:
@wraps(func)
def wrapper(*args, **kwargs) -> RT:
if "api_client" not in kwargs:
with get_client(kind=kind) as api_client:
return func(*args, api_client=api_client, **kwargs)
# The CLI API Client should be only passed for Mocking and Testing
return func(*args, **kwargs)
return wrapper
return decorator
|
Provide a CLI API Client to the decorated function.
CLI API Client shouldn't be passed to the function when this wrapper is used
if the purpose is not mocking or testing.
If you want to reuse a CLI API Client or run the function as part of
an API call, pass it to the function; otherwise this wrapper
will create one and close it for you.
|
python
|
airflow-ctl/src/airflowctl/api/client.py
| 331
|
[
"kind"
] |
Callable[[Callable[PS, RT]], Callable[PS, RT]]
| true
| 2
| 6
|
apache/airflow
| 43,597
|
unknown
| false
|
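The decorator in the `provide_api_client` record can be reproduced with nothing but the standard library. The `get_client` stand-in below is hypothetical and only mimics the shape of the real context manager:

```python
import functools
from contextlib import contextmanager

@contextmanager
def get_client(kind="CLI"):
    # Hypothetical stand-in for airflowctl's get_client context manager.
    yield f"client:{kind}"

def provide_api_client(kind="CLI"):
    """Inject an api_client kwarg unless the caller already supplied one."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if "api_client" not in kwargs:
                with get_client(kind=kind) as api_client:
                    return func(*args, api_client=api_client, **kwargs)
            # Caller supplied a client, e.g. a mock in tests.
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

Checking `kwargs` rather than inspecting the signature keeps the wrapper cheap, but it also means the injection is bypassed only when `api_client` is passed by keyword.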
scoreMode
|
@Override
public ScoreMode scoreMode() {
return (valuesSources != null && valuesSources.needsScores()) ? ScoreMode.COMPLETE : ScoreMode.COMPLETE_NO_SCORES;
}
|
array of descriptive stats, per shard, needed to compute the correlation
|
java
|
modules/aggregations/src/main/java/org/elasticsearch/aggregations/metric/MatrixStatsAggregator.java
| 55
|
[] |
ScoreMode
| true
| 3
| 6.32
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
nativeExtracter
|
public <T> T nativeExtracter(NativeExtracterMap<T> map) {
return type.nativeExtracter(backRefs, map);
}
|
Build an extract that has access to the "native" type of the extracter
match. This means that patterns like {@code %{NUMBER:bytes:float}} have
access to an actual {@link float}. Extracters returned from this method
should be stateless and can be reused. Pathological implementations
of the {@code map} parameter could violate this, but the caller should
take care to stay sane.
<p>
While the goal is to produce a {@link GrokCaptureExtracter} that provides
a primitive, the caller can produce whatever type-safe constructs it
needs and return them from this method. Thus, the {@code <T>} in the type
signature.
@param <T> The type of the result.
@param map Collection of handlers for each native type. Only one method
will be called but well-behaved implementers are stateless.
@return whatever was returned by the handler.
|
java
|
libs/grok/src/main/java/org/elasticsearch/grok/GrokCaptureConfig.java
| 109
|
[
"map"
] |
T
| true
| 1
| 6.8
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
toPrimitive
|
public static short[] toPrimitive(final Short[] array) {
if (array == null) {
return null;
}
if (array.length == 0) {
return EMPTY_SHORT_ARRAY;
}
final short[] result = new short[array.length];
for (int i = 0; i < array.length; i++) {
result[i] = array[i].shortValue();
}
return result;
}
|
Converts an array of object Shorts to primitives.
<p>
This method returns {@code null} for a {@code null} input array.
</p>
@param array a {@link Short} array, may be {@code null}.
@return a {@code short} array, {@code null} if null array input.
@throws NullPointerException if an array element is {@code null}.
|
java
|
src/main/java/org/apache/commons/lang3/ArrayUtils.java
| 9,196
|
[
"array"
] | true
| 4
| 8.08
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
get_seq_lens
|
def get_seq_lens(self, input_length):
"""
Given a 1D Tensor or Variable containing integer sequence lengths, return a 1D tensor or variable
containing the size sequences that will be output by the network.
:param input_length: 1D Tensor
:return: 1D Tensor scaled by model
"""
seq_len = input_length
for m in self.conv.modules():
if type(m) is nn.modules.conv.Conv2d:
seq_len = (
seq_len
+ 2 * m.padding[1]
- m.dilation[1] * (m.kernel_size[1] - 1)
- 1
)
seq_len = seq_len.true_divide(m.stride[1]) + 1
return seq_len.int()
|
Given a 1D Tensor or Variable containing integer sequence lengths, return a 1D tensor or variable
containing the size sequences that will be output by the network.
:param input_length: 1D Tensor
:return: 1D Tensor scaled by model
|
python
|
benchmarks/functional_autograd_benchmark/torchaudio_models.py
| 361
|
[
"self",
"input_length"
] | false
| 3
| 7.28
|
pytorch/pytorch
| 96,034
|
sphinx
| false
|
|
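`get_seq_lens` applies the standard convolution output-size formula once per `Conv2d` layer. As a sanity check, the formula in isolation (plain Python, no torch dependency; the function name is ours, not pytorch's):

```python
def conv_out_len(length, kernel_size, stride=1, padding=0, dilation=1):
    """Output length of a 1D convolution along one axis:
    floor((L + 2p - d*(k-1) - 1) / s) + 1."""
    return (length + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
```

Chaining this per layer, as the loop in `get_seq_lens` does, maps raw input lengths to the lengths of the network's output sequences.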
tolerateRaceConditionDueToBeingParallelCapable
|
private void tolerateRaceConditionDueToBeingParallelCapable(IllegalArgumentException ex, String packageName)
throws AssertionError {
if (getDefinedPackage(packageName) == null) {
// This should never happen as the IllegalArgumentException indicates that the
// package has already been defined and, therefore, getDefinedPackage(name)
// should not have returned null.
throw new AssertionError(
"Package %s has already been defined but it could not be found".formatted(packageName), ex);
}
}
|
Define a package before a {@code findClass} call is made. This is necessary to
ensure that the appropriate manifest for nested JARs is associated with the
package.
@param className the class name being found
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/net/protocol/jar/JarUrlClassLoader.java
| 162
|
[
"ex",
"packageName"
] |
void
| true
| 2
| 6.72
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
match
|
public static ConditionOutcome match(ConditionMessage message) {
return new ConditionOutcome(true, message);
}
|
Create a new {@link ConditionOutcome} instance for 'match'.
@param message the message
@return the {@link ConditionOutcome}
|
java
|
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionOutcome.java
| 81
|
[
"message"
] |
ConditionOutcome
| true
| 1
| 6.16
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
createExportStatement
|
function createExportStatement(name: ModuleExportName, value: Expression, location?: TextRange, allowComments?: boolean, liveBinding?: boolean) {
const statement = setTextRange(factory.createExpressionStatement(createExportExpression(name, value, /*location*/ undefined, liveBinding)), location);
startOnNewLine(statement);
if (!allowComments) {
setEmitFlags(statement, EmitFlags.NoComments);
}
return statement;
}
|
Creates a call to the current file's export function to export a value.
@param name The bound name of the export.
@param value The exported value.
@param location The location to use for source maps and comments for the export.
@param allowComments An optional value indicating whether to emit comments for the statement.
|
typescript
|
src/compiler/transformers/module/module.ts
| 2,168
|
[
"name",
"value",
"location?",
"allowComments?",
"liveBinding?"
] | false
| 2
| 6.08
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
createTrustManager
|
@Override
public X509ExtendedTrustManager createTrustManager() {
final List<Path> paths = resolveFiles();
try {
final List<Certificate> certificates = readCertificates(paths);
final KeyStore store = KeyStoreUtil.buildTrustStore(certificates);
return KeyStoreUtil.createTrustManager(store, TrustManagerFactory.getDefaultAlgorithm());
} catch (GeneralSecurityException e) {
throw new SslConfigException("cannot create trust using PEM certificates [" + SslFileUtil.pathsToString(paths) + "]", e);
}
}
|
Construct a new trust config for the provided paths (which will be resolved relative to the basePath).
The paths are stored as-is, and are not read until {@link #createTrustManager()} is called.
This means that
<ol>
<li>validation of the file (contents and accessibility) is deferred, and this constructor will <em>not fail</em> on missing
or invalid files.</li>
<li>
if the contents of the files are modified, then subsequent calls to {@link #createTrustManager()} will return a new trust
manager that trusts a different set of CAs.
</li>
</ol>
|
java
|
libs/ssl-config/src/main/java/org/elasticsearch/common/ssl/PemTrustConfig.java
| 76
|
[] |
X509ExtendedTrustManager
| true
| 2
| 6.72
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
load_executor
|
def load_executor(cls, executor_name: ExecutorName | str | None) -> BaseExecutor:
"""
Load the executor.
This supports the following formats:
* by executor name for core executor
* by import path
* by class name of the Executor
* by ExecutorName object specification
:return: an instance of executor class via executor_name
"""
if not executor_name:
_executor_name = cls.get_default_executor_name()
elif isinstance(executor_name, str):
_executor_name = cls.lookup_executor_name_by_str(executor_name)
else:
_executor_name = executor_name
try:
executor_cls, import_source = cls.import_executor_cls(_executor_name)
log.debug("Loading executor %s from %s", _executor_name, import_source.value)
if _executor_name.team_name:
executor = executor_cls(team_name=_executor_name.team_name)
else:
executor = executor_cls()
except ImportError as e:
log.error(e)
raise AirflowConfigException(
f'The module/attribute could not be loaded. Please check "executor" key in "core" section. '
f'Current value: "{_executor_name}".'
)
log.info("Loaded executor: %s", _executor_name)
# Store the executor name we've built for this executor in the
# instance. This makes it easier for the Scheduler, Backfill, etc to
# know how we refer to this executor.
executor.name = _executor_name
return executor
|
Load the executor.
This supports the following formats:
* by executor name for core executor
* by import path
* by class name of the Executor
* by ExecutorName object specification
:return: an instance of executor class via executor_name
|
python
|
airflow-core/src/airflow/executors/executor_loader.py
| 325
|
[
"cls",
"executor_name"
] |
BaseExecutor
| true
| 6
| 6.56
|
apache/airflow
| 43,597
|
unknown
| false
|
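Loading an executor "by import path", as the `load_executor` record supports, boils down to resolving a dotted string to a class, which `importlib` handles directly. This sketch omits Airflow's name normalization and team handling:

```python
import importlib

def import_from_path(path: str):
    """Resolve 'pkg.module.ClassName' to the class object."""
    module_name, _, attr = path.rpartition(".")
    if not module_name:
        raise ImportError(f"not a dotted path: {path!r}")
    module = importlib.import_module(module_name)
    return getattr(module, attr)
```

Wrapping the resolution in a `try`/`except ImportError` with a pointer at the offending config key, as the record does, turns an opaque import failure into an actionable message.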
preferredReadReplica
|
public synchronized Optional<Integer> preferredReadReplica(TopicPartition tp, long timeMs) {
final TopicPartitionState topicPartitionState = assignedStateOrNull(tp);
if (topicPartitionState == null) {
return Optional.empty();
} else {
return topicPartitionState.preferredReadReplica(timeMs);
}
}
|
Get the preferred read replica
@param tp The topic partition
@param timeMs The current time
@return Returns the current preferred read replica, if it has been set and if it has not expired.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
| 751
|
[
"tp",
"timeMs"
] | true
| 2
| 7.76
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
pattern
|
public String pattern() {
return this.pattern;
}
|
@return Regular expression pattern compatible with RE2/J.
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/SubscriptionPattern.java
| 40
|
[] |
String
| true
| 1
| 6.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
isPrototype
|
function isPrototype(value) {
var Ctor = value && value.constructor,
proto = (typeof Ctor == 'function' && Ctor.prototype) || objectProto;
return value === proto;
}
|
Checks if `value` is likely a prototype object.
@private
@param {*} value The value to check.
@returns {boolean} Returns `true` if `value` is a prototype, else `false`.
|
javascript
|
lodash.js
| 6,481
|
[
"value"
] | false
| 4
| 6.24
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
from_prefixed_env
|
def from_prefixed_env(
self, prefix: str = "FLASK", *, loads: t.Callable[[str], t.Any] = json.loads
) -> bool:
"""Load any environment variables that start with ``FLASK_``,
dropping the prefix from the env key for the config key. Values
are passed through a loading function to attempt to convert them
to more specific types than strings.
Keys are loaded in :func:`sorted` order.
The default loading function attempts to parse values as any
valid JSON type, including dicts and lists.
Specific items in nested dicts can be set by separating the
keys with double underscores (``__``). If an intermediate key
doesn't exist, it will be initialized to an empty dict.
:param prefix: Load env vars that start with this prefix,
separated with an underscore (``_``).
:param loads: Pass each string value to this function and use
the returned value as the config value. If any error is
raised it is ignored and the value remains a string. The
default is :func:`json.loads`.
.. versionadded:: 2.1
"""
prefix = f"{prefix}_"
for key in sorted(os.environ):
if not key.startswith(prefix):
continue
value = os.environ[key]
key = key.removeprefix(prefix)
try:
value = loads(value)
except Exception:
# Keep the value as a string if loading failed.
pass
if "__" not in key:
# A non-nested key, set directly.
self[key] = value
continue
# Traverse nested dictionaries with keys separated by "__".
current = self
*parts, tail = key.split("__")
for part in parts:
# If an intermediate dict does not exist, create it.
if part not in current:
current[part] = {}
current = current[part]
current[tail] = value
return True
|
Load any environment variables that start with ``FLASK_``,
dropping the prefix from the env key for the config key. Values
are passed through a loading function to attempt to convert them
to more specific types than strings.
Keys are loaded in :func:`sorted` order.
The default loading function attempts to parse values as any
valid JSON type, including dicts and lists.
Specific items in nested dicts can be set by separating the
keys with double underscores (``__``). If an intermediate key
doesn't exist, it will be initialized to an empty dict.
:param prefix: Load env vars that start with this prefix,
separated with an underscore (``_``).
:param loads: Pass each string value to this function and use
the returned value as the config value. If any error is
raised it is ignored and the value remains a string. The
default is :func:`json.loads`.
.. versionadded:: 2.1
|
python
|
src/flask/config.py
| 126
|
[
"self",
"prefix",
"loads"
] |
bool
| true
| 6
| 7.04
|
pallets/flask
| 70,946
|
sphinx
| false
|
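The `from_prefixed_env` record combines three steps: prefix filtering, best-effort JSON decoding, and `__`-separated nesting. A condensed sketch over an explicit environment dict, so it is testable without touching `os.environ` (the function name is ours, not Flask's):

```python
import json

def load_prefixed(environ, prefix="FLASK", loads=json.loads):
    """Collect PREFIX_* variables into a (possibly nested) config dict."""
    config = {}
    prefix = f"{prefix}_"
    for key in sorted(environ):
        if not key.startswith(prefix):
            continue
        value = environ[key]
        try:
            value = loads(value)
        except Exception:
            pass  # keep the raw string when it is not valid JSON
        key = key[len(prefix):]
        # "DB__PORT" -> config["DB"]["PORT"], creating dicts as needed.
        *parts, tail = key.split("__")
        current = config
        for part in parts:
            current = current.setdefault(part, {})
        current[tail] = value
    return config
```

Because keys are processed in `sorted` order, a parent key like `FLASK_DB` set to a JSON object is loaded before `FLASK_DB__PORT` overwrites one of its entries, which matches the documented behavior.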
forOwnRight
|
function forOwnRight(object, iteratee) {
return object && baseForOwnRight(object, getIteratee(iteratee, 3));
}
|
This method is like `_.forOwn` except that it iterates over properties of
`object` in the opposite order.
@static
@memberOf _
@since 2.0.0
@category Object
@param {Object} object The object to iterate over.
@param {Function} [iteratee=_.identity] The function invoked per iteration.
@returns {Object} Returns `object`.
@see _.forOwn
@example
function Foo() {
this.a = 1;
this.b = 2;
}
Foo.prototype.c = 3;
_.forOwnRight(new Foo, function(value, key) {
console.log(key);
});
// => Logs 'b' then 'a' assuming `_.forOwn` logs 'a' then 'b'.
|
javascript
|
lodash.js
| 13,150
|
[
"object",
"iteratee"
] | false
| 2
| 7.44
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
value_counts
|
def value_counts(self, dropna: bool = True) -> Series:
"""
Returns a Series containing counts of each unique value.
Parameters
----------
dropna : bool, default True
Don't include counts of missing values.
Returns
-------
counts : Series
See Also
--------
Series.value_counts
"""
from pandas import (
Index,
Series,
)
from pandas.arrays import IntegerArray
keys, value_counts, na_counter = algos.value_counts_arraylike(
self._data, dropna=dropna, mask=self._mask
)
mask_index = np.zeros((len(value_counts),), dtype=np.bool_)
mask = mask_index.copy()
if na_counter > 0:
mask_index[-1] = True
arr = IntegerArray(value_counts, mask)
index = Index(
self.dtype.construct_array_type()(
keys, # type: ignore[arg-type]
mask_index,
)
)
return Series(arr, index=index, name="count", copy=False)
|
Returns a Series containing counts of each unique value.
Parameters
----------
dropna : bool, default True
Don't include counts of missing values.
Returns
-------
counts : Series
See Also
--------
Series.value_counts
|
python
|
pandas/core/arrays/masked.py
| 1,388
|
[
"self",
"dropna"
] |
Series
| true
| 2
| 6.4
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
substringAfterLast
|
public static String substringAfterLast(final String str, final int find) {
if (isEmpty(str)) {
return str;
}
final int pos = str.lastIndexOf(find);
if (pos == INDEX_NOT_FOUND || pos == str.length() - 1) {
return EMPTY;
}
return str.substring(pos + 1);
}
|
Gets the substring after the last occurrence of a separator. The separator is not returned.
<p>
A {@code null} string input will return {@code null}. An empty ("") string input will return the empty string.
</p>
<p>
If nothing is found, the empty string is returned.
</p>
<pre>
StringUtils.substringAfterLast(null, *) = null
StringUtils.substringAfterLast("", *) = ""
StringUtils.substringAfterLast("abc", 'a') = "bc"
StringUtils.substringAfterLast(" bc", 32) = "bc"
StringUtils.substringAfterLast("abcba", 'b') = "a"
StringUtils.substringAfterLast("abc", 'c') = ""
StringUtils.substringAfterLast("a", 'a') = ""
StringUtils.substringAfterLast("a", 'z') = ""
</pre>
@param str the String to get a substring from, may be null.
@param find the character (Unicode code point) to find.
@return the substring after the last occurrence of the specified character, {@code null} if null String input.
@since 3.11
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 8,284
|
[
"str",
"find"
] |
String
| true
| 4
| 7.76
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
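`substringAfterLast` translates almost mechanically to Python via `str.rfind`. This sketch skips the `null` cases, which have no direct Python analogue:

```python
def substring_after_last(s: str, ch: str) -> str:
    """Substring after the last occurrence of ch; empty string when ch is
    absent or is the final character (mirrors the javadoc examples)."""
    if not s:
        return s
    pos = s.rfind(ch)
    if pos == -1 or pos == len(s) - 1:
        return ""
    return s[pos + 1:]
```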
value
|
public JSONStringer value(boolean value) throws JSONException {
if (this.stack.isEmpty()) {
throw new JSONException("Nesting problem");
}
beforeValue();
this.out.append(value);
return this;
}
|
Encodes {@code value} to this stringer.
@param value the value to encode
@return this stringer.
@throws JSONException if processing of json failed
|
java
|
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONStringer.java
| 273
|
[
"value"
] |
JSONStringer
| true
| 2
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
freshTargetSource
|
private TargetSource freshTargetSource() {
if (this.targetName == null) {
// Not refreshing target: bean name not specified in 'interceptorNames'
return this.targetSource;
}
else {
if (this.beanFactory == null) {
throw new IllegalStateException("No BeanFactory available anymore (probably due to serialization) " +
"- cannot resolve target with name '" + this.targetName + "'");
}
if (logger.isDebugEnabled()) {
logger.debug("Refreshing target with name '" + this.targetName + "'");
}
Object target = this.beanFactory.getBean(this.targetName);
return (target instanceof TargetSource targetSource ? targetSource : new SingletonTargetSource(target));
}
}
|
Return a TargetSource to use when creating a proxy. If the target was not
specified at the end of the interceptorNames list, the TargetSource will be
this class's TargetSource member. Otherwise, we get the target bean and wrap
it in a TargetSource if necessary.
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/ProxyFactoryBean.java
| 529
|
[] |
TargetSource
| true
| 5
| 6.88
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
getAllInterfaces
|
public static List<Class<?>> getAllInterfaces(final Class<?> cls) {
if (cls == null) {
return null;
}
final LinkedHashSet<Class<?>> interfacesFound = new LinkedHashSet<>();
getAllInterfaces(cls, interfacesFound);
return new ArrayList<>(interfacesFound);
}
|
Gets a {@link List} of all interfaces implemented by the given class and its superclasses.
<p>
The order is determined by looking through each interface in turn as declared in the source file and following its
hierarchy up. Then each superclass is considered in the same way. Later duplicates are ignored, so the order is
maintained.
</p>
@param cls the class to look up, may be {@code null}.
@return the {@link List} of interfaces in order, {@code null} if null input.
|
java
|
src/main/java/org/apache/commons/lang3/ClassUtils.java
| 371
|
[
"cls"
] | true
| 2
| 8.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
removeIfFromRandomAccessList
|
private static <T extends @Nullable Object> boolean removeIfFromRandomAccessList(
List<T> list, Predicate<? super T> predicate) {
// Note: Not all random access lists support set(). Additionally, it's possible
// for a list to reject setting an element, such as when the list does not permit
// duplicate elements. For both of those cases, we need to fall back to a slower
// implementation.
int from = 0;
int to = 0;
for (; from < list.size(); from++) {
T element = list.get(from);
if (!predicate.apply(element)) {
if (from > to) {
try {
list.set(to, element);
} catch (UnsupportedOperationException e) {
slowRemoveIfForRemainingElements(list, predicate, to, from);
return true;
} catch (IllegalArgumentException e) {
slowRemoveIfForRemainingElements(list, predicate, to, from);
return true;
}
}
to++;
}
}
// Clear the tail of any remaining items
list.subList(to, list.size()).clear();
return from != to;
}
|
Removes, from an iterable, every element that satisfies the provided predicate.
<p>Removals may or may not happen immediately as each element is tested against the predicate.
The behavior of this method is not specified if {@code predicate} is dependent on {@code
removeFrom}.
<p><b>Java 8+ users:</b> if {@code removeFrom} is a {@link Collection}, use {@code
removeFrom.removeIf(predicate)} instead.
@param removeFrom the iterable to (potentially) remove elements from
@param predicate a predicate that determines whether an element should be removed
@return {@code true} if any elements were removed from the iterable
@throws UnsupportedOperationException if the iterable does not support {@code remove()}.
@since 2.0
|
java
|
android/guava/src/com/google/common/collect/Iterables.java
| 196
|
[
"list",
"predicate"
] | true
| 6
| 7.6
|
google/guava
| 51,352
|
javadoc
| false
|
|
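The two-index compaction in `removeIfFromRandomAccessList` (write survivors forward, then clear the tail) works the same way on a Python list. The slow `set()`-failure fallback is omitted here, since Python lists always support item assignment:

```python
def remove_if_in_place(items: list, predicate) -> bool:
    """Remove every element satisfying predicate, in place, in O(n).
    Returns True if anything was removed."""
    to = 0
    for frm in range(len(items)):
        element = items[frm]
        if not predicate(element):
            items[to] = element  # compact survivors toward the front
            to += 1
    changed = to != len(items)
    del items[to:]  # clear the tail of stale entries
    return changed
```

The point of the pattern is the same as in the Java version: a naive remove-while-iterating is quadratic on array-backed lists, while a single compacting sweep is linear.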
escapeXml10
|
public static String escapeXml10(final String input) {
return ESCAPE_XML10.translate(input);
}
|
Escapes the characters in a {@link String} using XML entities.
<p>
For example:
</p>
<pre>{@code
"bread" & "butter"
}</pre>
<p>
converts to:
</p>
<pre>
{@code
"bread" & "butter"
}
</pre>
<p>
Note that XML 1.0 is a text-only format: it cannot represent control characters or unpaired Unicode surrogate code points, even after escaping. The
method {@code escapeXml10} will remove characters that do not fit in the following ranges:
</p>
<p>
{@code #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]}
</p>
<p>
Though not strictly necessary, {@code escapeXml10} will escape characters in the following ranges:
</p>
<p>
{@code [#x7F-#x84] | [#x86-#x9F]}
</p>
<p>
The returned string can be inserted into a valid XML 1.0 or XML 1.1 document. If you want to allow more non-text characters in an XML 1.1 document, use
{@link #escapeXml11(String)}.
</p>
@param input the {@link String} to escape, may be null
@return a new escaped {@link String}, {@code null} if null string input
@see #unescapeXml(String)
@since 3.3
|
java
|
src/main/java/org/apache/commons/lang3/StringEscapeUtils.java
| 625
|
[
"input"
] |
String
| true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
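`escapeXml10` does two distinct things: it drops code points that XML 1.0 cannot represent at all, and it escapes the predefined entities. A simplified Python sketch of that split (it does not reproduce the extra [#x7F-#x84] and [#x86-#x9F] escaping the commons-lang version performs, nor its `&apos;` handling):

```python
ENTITIES = {"&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;"}

def escape_xml10(text):
    """Remove characters invalid in XML 1.0, escape predefined entities."""
    if text is None:
        return None
    def valid(cp):
        # The XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF]
        # | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
        return (cp in (0x9, 0xA, 0xD) or 0x20 <= cp <= 0xD7FF
                or 0xE000 <= cp <= 0xFFFD or 0x10000 <= cp <= 0x10FFFF)
    out = []
    for ch in text:
        if not valid(ord(ch)):
            continue  # unrepresentable even when escaped; drop it
        out.append(ENTITIES.get(ch, ch))
    return "".join(out)
```

The removal step is the part that surprises people: a NUL or other control character is silently deleted, because no amount of escaping makes it legal XML 1.0 text.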
to_period
|
def to_period(
self,
freq: str | None = None,
copy: bool | lib.NoDefault = lib.no_default,
) -> Series:
"""
Convert Series from DatetimeIndex to PeriodIndex.
Parameters
----------
freq : str, default None
Frequency associated with the PeriodIndex.
copy : bool, default False
This keyword is now ignored; changing its value will have no
impact on the method.
.. deprecated:: 3.0.0
This keyword is ignored and will be removed in pandas 4.0. Since
pandas 3.0, this method always returns a new object using a lazy
copy mechanism that defers copies until necessary
(Copy-on-Write). See the `user guide on Copy-on-Write
<https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
for more details.
Returns
-------
Series
Series with index converted to PeriodIndex.
See Also
--------
DataFrame.to_period: Equivalent method for DataFrame.
Series.dt.to_period: Convert DateTime column values.
Examples
--------
>>> idx = pd.DatetimeIndex(["2023", "2024", "2025"])
>>> s = pd.Series([1, 2, 3], index=idx)
>>> s = s.to_period()
>>> s
2023 1
2024 2
2025 3
Freq: Y-DEC, dtype: int64
Viewing the index
>>> s.index
PeriodIndex(['2023', '2024', '2025'], dtype='period[Y-DEC]')
"""
self._check_copy_deprecation(copy)
if not isinstance(self.index, DatetimeIndex):
raise TypeError(f"unsupported Type {type(self.index).__name__}")
new_obj = self.copy(deep=False)
new_index = self.index.to_period(freq=freq)
setattr(new_obj, "index", new_index)
return new_obj
|
Convert Series from DatetimeIndex to PeriodIndex.
Parameters
----------
freq : str, default None
Frequency associated with the PeriodIndex.
copy : bool, default False
This keyword is now ignored; changing its value will have no
impact on the method.
.. deprecated:: 3.0.0
This keyword is ignored and will be removed in pandas 4.0. Since
pandas 3.0, this method always returns a new object using a lazy
copy mechanism that defers copies until necessary
(Copy-on-Write). See the `user guide on Copy-on-Write
<https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
for more details.
Returns
-------
Series
Series with index converted to PeriodIndex.
See Also
--------
DataFrame.to_period: Equivalent method for DataFrame.
Series.dt.to_period: Convert DateTime column values.
Examples
--------
>>> idx = pd.DatetimeIndex(["2023", "2024", "2025"])
>>> s = pd.Series([1, 2, 3], index=idx)
>>> s = s.to_period()
>>> s
2023 1
2024 2
2025 3
Freq: Y-DEC, dtype: int64
Viewing the index
>>> s.index
PeriodIndex(['2023', '2024', '2025'], dtype='period[Y-DEC]')
|
python
|
pandas/core/series.py
| 6,524
|
[
"self",
"freq",
"copy"
] |
Series
| true
| 2
| 8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
parameterDocComments
|
function parameterDocComments(parameters: readonly ParameterDeclaration[], isJavaScriptFile: boolean, indentationStr: string, newLine: string): string {
return parameters.map(({ name, dotDotDotToken }, i) => {
const paramName = name.kind === SyntaxKind.Identifier ? name.text : "param" + i;
const type = isJavaScriptFile ? (dotDotDotToken ? "{...any} " : "{any} ") : "";
return `${indentationStr} * @param ${type}${paramName}${newLine}`;
}).join("");
}
|
Checks if position points to a valid position to add JSDoc comments, and if so,
returns the appropriate template. Otherwise returns an empty string.
Valid positions are
- outside of comments, statements, and expressions, and
- preceding a:
- function/constructor/method declaration
- class declarations
- variable statements
- namespace declarations
- interface declarations
- method signatures
- type alias declarations
Hosts should ideally check that:
- The line is all whitespace up to 'position' before performing the insertion.
- If the keystroke sequence "/\*\*" induced the call, we also check that the next
non-whitespace character is '*', which (approximately) indicates whether we added
the second '*' to complete an existing (JSDoc) comment.
@param fileName The file in which to perform the check.
@param position The (character-indexed) position in the file where the check should
be performed.
@internal
|
typescript
|
src/services/jsDoc.ts
| 541
|
[
"parameters",
"isJavaScriptFile",
"indentationStr",
"newLine"
] | true
| 4
| 6.24
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
fnv64_BROKEN
|
constexpr uint64_t fnv64_BROKEN(
const char* buf, uint64_t hash = fnv64_hash_start) noexcept {
for (; *buf; ++buf) {
hash = fnv64_append_byte_BROKEN(hash, static_cast<uint8_t>(*buf));
}
return hash;
}
|
FNV hash of a c-str.
Continues hashing until a null byte is reached.
@param hash The initial hash seed.
@see fnv32
@methodset fnv
|
cpp
|
folly/hash/FnvHash.h
| 376
|
[] | true
| 2
| 7.04
|
facebook/folly
| 30,157
|
doxygen
| false
|
|
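The record above is folly's deliberately `_BROKEN` FNV variant, kept for compatibility with hashes already in the wild. For contrast, standard FNV-1a over 64 bits is only a few lines; the constants below are the published FNV-1a offset basis and prime, and this sketch is not folly's broken byte-append:

```python
FNV64_OFFSET = 0xcbf29ce484222325  # FNV-1a 64-bit offset basis
FNV64_PRIME = 0x100000001b3        # FNV 64-bit prime

def fnv1a_64(data: bytes, h: int = FNV64_OFFSET) -> int:
    """FNV-1a: XOR each byte in, then multiply by the prime (mod 2**64)."""
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h
```

FNV-1 (and folly's broken variant) differs in the order of the XOR and multiply steps, which is why the two families produce different digests for the same input.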
shouldReturn
|
function shouldReturn(expression: Expression, transformer: Transformer): boolean {
return !!expression.original && transformer.setOfExpressionsToReturn.has(getNodeId(expression.original));
}
|
@param hasContinuation Whether another `then`, `catch`, or `finally` continuation follows the continuation to which this statement belongs.
@param continuationArgName The argument name for the continuation that follows this call.
|
typescript
|
src/services/codefixes/convertToAsyncFunction.ts
| 936
|
[
"expression",
"transformer"
] | true
| 2
| 6
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
toString
|
@Override
public String toString() {
return "LevelConfiguration [name=" + this.name + ", logLevel=" + this.logLevel + "]";
}
|
Return if this is a custom level and cannot be represented by {@link LogLevel}.
@return if this is a custom level
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/logging/LoggerConfiguration.java
| 226
|
[] |
String
| true
| 1
| 6.64
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
remove
|
public String remove(final String str, final String remove) {
return replace(str, remove, StringUtils.EMPTY, -1);
}
|
Removes all occurrences of a substring from within the source string.
<p>
A {@code null} source string will return {@code null}. An empty ("") source string will return the empty string. A {@code null} remove string will return
the source string. An empty ("") remove string will return the source string.
</p>
<p>
Case-sensitive examples
</p>
<pre>
Strings.CS.remove(null, *) = null
Strings.CS.remove("", *) = ""
Strings.CS.remove(*, null) = *
Strings.CS.remove(*, "") = *
Strings.CS.remove("queued", "ue") = "qd"
Strings.CS.remove("queued", "zz") = "queued"
</pre>
<p>
Case-insensitive examples
</p>
<pre>
Strings.CI.remove(null, *) = null
Strings.CI.remove("", *) = ""
Strings.CI.remove(*, null) = *
Strings.CI.remove(*, "") = *
Strings.CI.remove("queued", "ue") = "qd"
Strings.CI.remove("queued", "zz") = "queued"
Strings.CI.remove("quEUed", "UE") = "qd"
Strings.CI.remove("queued", "zZ") = "queued"
</pre>
@param str the source String to search, may be null
@param remove the String to search for and remove, may be null
@return the substring with the string removed if found, {@code null} if null String input
|
java
|
src/main/java/org/apache/commons/lang3/Strings.java
| 1,093
|
[
"str",
"remove"
] |
String
| true
| 1
| 6.48
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
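The null/empty contract documented for `Strings.CS.remove` can be mirrored in a few lines of Python (a sketch; `remove` here is an illustrative name, not a library API):

```python
def remove(s, sub):
    """Remove all occurrences of sub from s.

    Mirrors the Java contract: None or empty source, and None or empty
    remove string, all pass the source through unchanged.
    """
    if s is None or not s or sub is None or not sub:
        return s
    return s.replace(sub, "")
```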
beansOfTypeIncludingAncestors
|
public static <T> Map<String, T> beansOfTypeIncludingAncestors(
ListableBeanFactory lbf, Class<T> type, boolean includeNonSingletons, boolean allowEagerInit)
throws BeansException {
Assert.notNull(lbf, "ListableBeanFactory must not be null");
Map<String, T> result = new LinkedHashMap<>(4);
result.putAll(lbf.getBeansOfType(type, includeNonSingletons, allowEagerInit));
if (lbf instanceof HierarchicalBeanFactory hbf) {
if (hbf.getParentBeanFactory() instanceof ListableBeanFactory pbf) {
Map<String, T> parentResult = beansOfTypeIncludingAncestors(pbf, type, includeNonSingletons, allowEagerInit);
parentResult.forEach((beanName, beanInstance) -> {
if (!result.containsKey(beanName) && !hbf.containsLocalBean(beanName)) {
result.put(beanName, beanInstance);
}
});
}
}
return result;
}
|
Return all beans of the given type or subtypes, also picking up beans defined in
ancestor bean factories if the current bean factory is a HierarchicalBeanFactory.
The returned Map will only contain beans of this type.
<p>Does consider objects created by FactoryBeans if the "allowEagerInit" flag is set,
which means that FactoryBeans will get initialized. If the object created by the
FactoryBean doesn't match, the raw FactoryBean itself will be matched against the
type. If "allowEagerInit" is not set, only raw FactoryBeans will be checked
(which doesn't require initialization of each FactoryBean).
<p><b>Note: Beans of the same name will take precedence at the 'lowest' factory level,
i.e. such beans will be returned from the lowest factory that they are being found in,
hiding corresponding beans in ancestor factories.</b> This feature allows for
'replacing' beans by explicitly choosing the same bean name in a child factory;
the bean in the ancestor factory won't be visible then, not even for by-type lookups.
@param lbf the bean factory
@param type the type of bean to match
@param includeNonSingletons whether to include prototype or scoped beans too
or just singletons (also applies to FactoryBeans)
@param allowEagerInit whether to initialize <i>lazy-init singletons</i> and
<i>objects created by FactoryBeans</i> (or by factory methods with a
"factory-bean" reference) for the type check. Note that FactoryBeans need to be
eagerly initialized to determine their type: So be aware that passing in "true"
for this flag will initialize FactoryBeans and "factory-bean" references.
@return the Map of matching bean instances, or an empty Map if none
@throws BeansException if a bean could not be created
@see ListableBeanFactory#getBeansOfType(Class, boolean, boolean)
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/BeanFactoryUtils.java
| 367
|
[
"lbf",
"type",
"includeNonSingletons",
"allowEagerInit"
] | true
| 5
| 7.76
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
recursivelyUncacheFiberNode
|
function recursivelyUncacheFiberNode(node: Instance | TextInstance) {
if (typeof node === 'number') {
// Leaf node (eg text)
uncacheFiberNode(node);
} else {
uncacheFiberNode((node: any)._nativeTag);
(node: any)._children.forEach(recursivelyUncacheFiberNode);
}
}
|
Copyright (c) Meta Platforms, Inc. and affiliates.
This source code is licensed under the MIT license found in the
LICENSE file in the root directory of this source tree.
@flow
|
javascript
|
packages/react-native-renderer/src/ReactFiberConfigNative.js
| 104
|
[] | false
| 3
| 6.4
|
facebook/react
| 241,750
|
jsdoc
| false
|
|
Mutex
|
Mutex(Mutex&&) = delete;
|
Construct a new async mutex that is initially unlocked.
|
cpp
|
folly/coro/Mutex.h
| 102
|
[] | true
| 2
| 6.64
|
facebook/folly
| 30,157
|
doxygen
| false
|
|
loadBeanDefinitions
|
public int loadBeanDefinitions(EncodedResource encodedResource) throws BeanDefinitionStoreException {
// Check for XML files and redirect them to the "standard" XmlBeanDefinitionReader
String filename = encodedResource.getResource().getFilename();
if (StringUtils.endsWithIgnoreCase(filename, ".xml")) {
return this.standardXmlBeanDefinitionReader.loadBeanDefinitions(encodedResource);
}
if (logger.isTraceEnabled()) {
logger.trace("Loading Groovy bean definitions from " + encodedResource);
}
@SuppressWarnings("serial")
Closure<Object> beans = new Closure<>(this) {
@Override
public @Nullable Object call(Object... args) {
invokeBeanDefiningClosure((Closure<?>) args[0]);
return null;
}
};
Binding binding = new Binding() {
@Override
public void setVariable(String name, Object value) {
if (currentBeanDefinition != null) {
applyPropertyToBeanDefinition(name, value);
}
else {
super.setVariable(name, value);
}
}
};
binding.setVariable("beans", beans);
int countBefore = getRegistry().getBeanDefinitionCount();
try {
GroovyShell shell = new GroovyShell(getBeanClassLoader(), binding);
shell.evaluate(encodedResource.getReader(), "beans");
}
catch (Throwable ex) {
throw new BeanDefinitionParsingException(new Problem("Error evaluating Groovy script: " + ex.getMessage(),
new Location(encodedResource.getResource()), null, ex));
}
int count = getRegistry().getBeanDefinitionCount() - countBefore;
if (logger.isDebugEnabled()) {
logger.debug("Loaded " + count + " bean definitions from " + encodedResource);
}
return count;
}
|
Load bean definitions from the specified Groovy script or XML file.
<p>Note that {@code ".xml"} files will be parsed as XML content; all other kinds
of resources will be parsed as Groovy scripts.
@param encodedResource the resource descriptor for the Groovy script or XML file,
allowing specification of an encoding to use for parsing the file
@return the number of bean definitions found
@throws BeanDefinitionStoreException in case of loading or parsing errors
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/groovy/GroovyBeanDefinitionReader.java
| 237
|
[
"encodedResource"
] | true
| 6
| 8.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
create_cluster
|
def create_cluster(
self,
name: str,
roleArn: str,
resourcesVpcConfig: dict,
**kwargs,
) -> dict:
"""
Create an Amazon EKS control plane.
.. seealso::
- :external+boto3:py:meth:`EKS.Client.create_cluster`
:param name: The unique name to give to your Amazon EKS Cluster.
:param roleArn: The Amazon Resource Name (ARN) of the IAM role that provides permissions
for the Kubernetes control plane to make calls to AWS API operations on your behalf.
:param resourcesVpcConfig: The VPC configuration used by the cluster control plane.
:return: Returns descriptive information about the created EKS Cluster.
"""
eks_client = self.conn
response = eks_client.create_cluster(
name=name, roleArn=roleArn, resourcesVpcConfig=resourcesVpcConfig, **kwargs
)
self.log.info("Created Amazon EKS cluster with the name %s.", response.get("cluster").get("name"))
return response
|
Create an Amazon EKS control plane.
.. seealso::
- :external+boto3:py:meth:`EKS.Client.create_cluster`
:param name: The unique name to give to your Amazon EKS Cluster.
:param roleArn: The Amazon Resource Name (ARN) of the IAM role that provides permissions
for the Kubernetes control plane to make calls to AWS API operations on your behalf.
:param resourcesVpcConfig: The VPC configuration used by the cluster control plane.
:return: Returns descriptive information about the created EKS Cluster.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/eks.py
| 133
|
[
"self",
"name",
"roleArn",
"resourcesVpcConfig"
] |
dict
| true
| 1
| 6.4
|
apache/airflow
| 43,597
|
sphinx
| false
|
visitImportDeclaration
|
function visitImportDeclaration(node: ImportDeclaration): VisitResult<Statement | undefined> {
if (!node.importClause) {
// Do not elide a side-effect only import declaration.
// import "foo";
return node;
}
if (node.importClause.isTypeOnly) {
// Always elide type-only imports
return undefined;
}
// Elide the declaration if the import clause was elided.
const importClause = visitNode(node.importClause, visitImportClause, isImportClause);
return importClause
? factory.updateImportDeclaration(
node,
/*modifiers*/ undefined,
importClause,
node.moduleSpecifier,
node.attributes,
)
: undefined;
}
|
Visits an import declaration, eliding it if it is type-only or if it has an import clause that may be elided.
@param node The import declaration node.
|
typescript
|
src/compiler/transformers/ts.ts
| 2,255
|
[
"node"
] | true
| 4
| 6.88
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
noMatch
|
public static ConditionOutcome noMatch(String message) {
return new ConditionOutcome(false, message);
}
|
Create a new {@link ConditionOutcome} instance for 'no match'. For more consistent
messages consider using {@link #noMatch(ConditionMessage)}.
@param message the message
@return the {@link ConditionOutcome}
|
java
|
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionOutcome.java
| 91
|
[
"message"
] |
ConditionOutcome
| true
| 1
| 6
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
shouldSendLeaveGroupRequest
|
private boolean shouldSendLeaveGroupRequest(CloseOptions.GroupMembershipOperation membershipOperation) {
if (!coordinatorUnknown() && state != MemberState.UNJOINED && generation.hasMemberId()) {
return membershipOperation == LEAVE_GROUP || (isDynamicMember() && membershipOperation == DEFAULT);
} else {
return false;
}
}
|
Sends LeaveGroupRequest and logs the {@code leaveReason}, unless this member is using static membership
with the default consumer group membership operation, or is already not part of the group (i.e., does not have a
valid member ID, is in the UNJOINED state, or the coordinator is unknown).
@param membershipOperation the operation on consumer group membership that the consumer will perform when closing
@param leaveReason the reason to leave the group for logging
@throws KafkaException if the rebalance callback throws exception
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
| 1,190
|
[
"membershipOperation"
] | true
| 6
| 6.56
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
_validate_numeric_only
|
def _validate_numeric_only(self, name: str, numeric_only: bool) -> None:
"""
Validate numeric_only argument, raising if invalid for the input.
Parameters
----------
name : str
Name of the operator (kernel).
numeric_only : bool
Value passed by user.
"""
if (
self._selected_obj.ndim == 1
and numeric_only
and not is_numeric_dtype(self._selected_obj.dtype)
):
raise NotImplementedError(
f"{type(self).__name__}.{name} does not implement numeric_only"
)
|
Validate numeric_only argument, raising if invalid for the input.
Parameters
----------
name : str
Name of the operator (kernel).
numeric_only : bool
Value passed by user.
|
python
|
pandas/core/window/rolling.py
| 230
|
[
"self",
"name",
"numeric_only"
] |
None
| true
| 4
| 6.56
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
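The guard above has a small core: a 1-D non-numeric selection cannot honor `numeric_only=True`. A standalone sketch of that check (parameter names here are illustrative stand-ins for the pandas internals):

```python
def validate_numeric_only(name, ndim, is_numeric, numeric_only):
    """Raise if numeric_only=True is requested for a 1-D non-numeric selection."""
    if ndim == 1 and numeric_only and not is_numeric:
        raise NotImplementedError(f"Rolling.{name} does not implement numeric_only")
```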
_generate_inputs_for_submodules
|
def _generate_inputs_for_submodules(
model: torch.nn.Module,
target_submodules: Iterable[str],
args: tuple[Any, ...],
kwargs: Optional[dict[str, Any]] = None,
) -> dict[str, tuple[Any, Any]]:
"""
Generate inputs for targeting submodules in the given model. Note that if two submodules refer to the same object, this
function doesn't work.
Args:
model: root model.
args: positional inputs to the root model (keyword inputs go through kwargs).
target_submodules: submodules that we want to generate inputs for.
Returns:
A dict that maps from submodule name to its inputs.
"""
kwargs = kwargs or {}
handles = []
results = {}
submodule_to_names = {mod: name for name, mod in model.named_modules()}
def pre_forward(module, module_args, module_kwargs):
results[submodule_to_names[module]] = (module_args, module_kwargs)
try:
for name, mod in model.named_modules():
if name in target_submodules:
handles.append(
mod.register_forward_pre_hook(pre_forward, with_kwargs=True)
)
model(*args, **kwargs)
except Exception as e:
warnings.warn(
f"Failed to generate submodule inputs because of the following error:\n{e}",
stacklevel=2,
)
finally:
for h in handles:
h.remove()
return results
|
Generate inputs for targeting submodules in the given model. Note that if two submodules refer to the same object, this
function doesn't work.
Args:
model: root model.
args: positional inputs to the root model (keyword inputs go through kwargs).
target_submodules: submodules that we want to generate inputs for.
Returns:
A dict that maps from submodule name to its inputs.
|
python
|
torch/_export/tools.py
| 18
|
[
"model",
"target_submodules",
"args",
"kwargs"
] |
dict[str, tuple[Any, Any]]
| true
| 5
| 8.08
|
pytorch/pytorch
| 96,034
|
google
| false
|
decorateBeanDefinitionIfRequired
|
public BeanDefinitionHolder decorateBeanDefinitionIfRequired(
Element ele, BeanDefinitionHolder originalDef, @Nullable BeanDefinition containingBd) {
BeanDefinitionHolder finalDefinition = originalDef;
// Decorate based on custom attributes first.
NamedNodeMap attributes = ele.getAttributes();
for (int i = 0; i < attributes.getLength(); i++) {
Node node = attributes.item(i);
finalDefinition = decorateIfRequired(node, finalDefinition, containingBd);
}
// Decorate based on custom nested elements.
NodeList children = ele.getChildNodes();
for (int i = 0; i < children.getLength(); i++) {
Node node = children.item(i);
if (node.getNodeType() == Node.ELEMENT_NODE) {
finalDefinition = decorateIfRequired(node, finalDefinition, containingBd);
}
}
return finalDefinition;
}
|
Decorate the given bean definition through a namespace handler, if applicable.
@param ele the current element
@param originalDef the current bean definition
@param containingBd the containing bean definition (if any)
@return the decorated bean definition
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/xml/BeanDefinitionParserDelegate.java
| 1,399
|
[
"ele",
"originalDef",
"containingBd"
] |
BeanDefinitionHolder
| true
| 4
| 7.44
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
withShortcut
|
public AutowiredMethodArgumentsResolver withShortcut(String... beanNames) {
return new AutowiredMethodArgumentsResolver(this.methodName, this.parameterTypes, this.required, beanNames);
}
|
Return a new {@link AutowiredMethodArgumentsResolver} instance
that uses direct bean name injection shortcuts for specific parameters.
@param beanNames the bean names to use as shortcuts (aligned with the
method parameters)
@return a new {@link AutowiredMethodArgumentsResolver} instance that uses
the given shortcut bean names
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/aot/AutowiredMethodArgumentsResolver.java
| 110
|
[] |
AutowiredMethodArgumentsResolver
| true
| 1
| 6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
conn
|
def conn(self) -> BaseAwsConnection:
"""
Get the underlying boto3 client/resource (cached).
:return: boto3.client or boto3.resource
"""
if self.client_type:
return self.get_client_type(region_name=self.region_name)
return self.get_resource_type(region_name=self.region_name)
|
Get the underlying boto3 client/resource (cached).
:return: boto3.client or boto3.resource
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/hooks/base_aws.py
| 759
|
[
"self"
] |
BaseAwsConnection
| true
| 2
| 6.72
|
apache/airflow
| 43,597
|
unknown
| false
|
transformFunctionLikeToExpression
|
function transformFunctionLikeToExpression(node: FunctionLikeDeclaration, location: TextRange | undefined, name: Identifier | undefined, container: Node | undefined): FunctionExpression {
const savedConvertedLoopState = convertedLoopState;
convertedLoopState = undefined;
const ancestorFacts = container && isClassLike(container) && !isStatic(node)
? enterSubtree(HierarchyFacts.FunctionExcludes, HierarchyFacts.FunctionIncludes | HierarchyFacts.NonStaticClassElement)
: enterSubtree(HierarchyFacts.FunctionExcludes, HierarchyFacts.FunctionIncludes);
const parameters = visitParameterList(node.parameters, visitor, context);
const body = transformFunctionBody(node);
if (hierarchyFacts & HierarchyFacts.NewTarget && !name && (node.kind === SyntaxKind.FunctionDeclaration || node.kind === SyntaxKind.FunctionExpression)) {
name = factory.getGeneratedNameForNode(node);
}
exitSubtree(ancestorFacts, HierarchyFacts.FunctionSubtreeExcludes, HierarchyFacts.None);
convertedLoopState = savedConvertedLoopState;
return setOriginalNode(
setTextRange(
factory.createFunctionExpression(
/*modifiers*/ undefined,
node.asteriskToken,
name,
/*typeParameters*/ undefined,
parameters,
/*type*/ undefined,
body,
),
location,
),
/*original*/ node,
);
}
|
Transforms a function-like node into a FunctionExpression.
@param node The function-like node to transform.
@param location The source-map location for the new FunctionExpression.
@param name The name of the new FunctionExpression.
|
typescript
|
src/compiler/transformers/es2015.ts
| 2,512
|
[
"node",
"location",
"name",
"container"
] | true
| 8
| 6.56
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
toString
|
public static String toString(byte x, int radix) {
checkArgument(
radix >= Character.MIN_RADIX && radix <= Character.MAX_RADIX,
"radix (%s) must be between Character.MIN_RADIX and Character.MAX_RADIX",
radix);
// Benchmarks indicate this is probably not worth optimizing.
return Integer.toString(toUnsignedInt(x), radix);
}
|
Returns a string representation of {@code x} for the given radix, where {@code x} is treated as
unsigned.
@param x the value to convert to a string.
@param radix the radix to use while working with {@code x}
@throws IllegalArgumentException if {@code radix} is not between {@link Character#MIN_RADIX}
and {@link Character#MAX_RADIX}.
@since 13.0
|
java
|
android/guava/src/com/google/common/primitives/UnsignedBytes.java
| 193
|
[
"x",
"radix"
] |
String
| true
| 2
| 7.04
|
google/guava
| 51,352
|
javadoc
| false
|
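The unsigned-byte rendering above can be reproduced in Python, where `x & 0xFF` plays the role of `toUnsignedInt` and a small loop handles arbitrary radixes (a sketch; Python's built-in formatting only covers bases 2, 8, 10, and 16):

```python
import string

def unsigned_byte_to_string(x: int, radix: int) -> str:
    """Render a (possibly negative) byte as its unsigned value in the given radix."""
    if not (2 <= radix <= 36):
        raise ValueError(f"radix ({radix}) must be between 2 and 36")
    u = x & 0xFF  # reinterpret the byte as unsigned (0..255)
    digits = string.digits + string.ascii_lowercase
    if u == 0:
        return "0"
    out = []
    while u:
        u, r = divmod(u, radix)
        out.append(digits[r])
    return "".join(reversed(out))
```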
checkStrictModeFunctionDeclaration
|
function checkStrictModeFunctionDeclaration(node: FunctionDeclaration) {
if (languageVersion < ScriptTarget.ES2015) {
// Report error if function is not top level function declaration
if (
blockScopeContainer.kind !== SyntaxKind.SourceFile &&
blockScopeContainer.kind !== SyntaxKind.ModuleDeclaration &&
!isFunctionLikeOrClassStaticBlockDeclaration(blockScopeContainer)
) {
// We check first if the name is inside class declaration or class expression; if so give explicit message
// otherwise report generic error message.
const errorSpan = getErrorSpanForNode(file, node);
file.bindDiagnostics.push(createFileDiagnostic(file, errorSpan.start, errorSpan.length, getStrictModeBlockScopeFunctionDeclarationMessage(node)));
}
}
}
|
Declares a Symbol for the node and adds it to symbols. Reports errors for conflicting identifier names.
@param symbolTable - The symbol table which node will be added to.
@param parent - node's parent declaration.
@param node - The declaration to be added to the symbol table
@param includes - The SymbolFlags that node has in addition to its declaration type (eg: export, ambient, etc.)
@param excludes - The flags which node cannot be declared alongside in a symbol table. Used to report forbidden declarations.
|
typescript
|
src/compiler/binder.ts
| 2,694
|
[
"node"
] | false
| 5
| 6.08
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
resolveExports
|
function resolveExports(nmPath, request, conditions) {
// The implementation's behavior is meant to mirror resolution in ESM.
const { 1: name, 2: expansion = '' } =
RegExpPrototypeExec(EXPORTS_PATTERN, request) || kEmptyObject;
if (!name) { return; }
const pkgPath = path.resolve(nmPath, name);
const pkg = _readPackage(pkgPath);
if (pkg.exists && pkg.exports != null) {
try {
const { packageExportsResolve } = require('internal/modules/esm/resolve');
return finalizeEsmResolution(packageExportsResolve(
pathToFileURL(pkgPath + '/package.json'), '.' + expansion, pkg, null,
conditions), null, pkgPath);
} catch (e) {
if (e.code === 'ERR_MODULE_NOT_FOUND') {
throw createEsmNotFoundErr(request, pkgPath + '/package.json');
}
throw e;
}
}
}
|
Resolves the exports for a given module path and request.
@param {string} nmPath The path to the module.
@param {string} request The request for the module.
@param {Set<string>} conditions The conditions to use for resolution.
@returns {undefined|string}
|
javascript
|
lib/internal/modules/cjs/loader.js
| 668
|
[
"nmPath",
"request",
"conditions"
] | false
| 7
| 6.24
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
toString
|
@Override
public String toString() {
ToStringCreator creator = new ToStringCreator(this);
creator.append("active", getActive().toString());
creator.append("default", getDefault().toString());
creator.append("accepted", getAccepted().toString());
return creator.toString();
}
|
Return if the given profile is active.
@param profile the profile to test
@return if the profile is active
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/config/Profiles.java
| 215
|
[] |
String
| true
| 1
| 7.04
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
node_inline_
|
def node_inline_(call_mod_node: torch.fx.Node) -> Optional[torch.fx.GraphModule]:
"""
Inline the submodule of the given node into the parent module.
Note: we only support the case where submodule takes tensors inputs.
"""
assert call_mod_node.op == "call_module"
gm = call_mod_node.graph.owning_module
assert gm is not None
assert isinstance(call_mod_node.target, str)
sub_gm = getattr(gm, call_mod_node.target)
phs = (node for node in sub_gm.graph.nodes if node.op == "placeholder")
body = (
node for node in sub_gm.graph.nodes if node.op not in ("placeholder", "output")
)
output = [node for node in sub_gm.graph.nodes if node.op == "output"]
for ph, arg in zip(phs, call_mod_node.args):
assert isinstance(arg, torch.fx.Node)
node_replace_(ph, arg)
with gm.graph.inserting_before(call_mod_node):
for node in body:
new_node = gm.graph.node_copy(node)
if node.op == "get_attr":
new_target_name = new_node.target
if hasattr(gm, new_target_name):
# Loop through and find the "submod_{i}" that have no name collision
i = 1
new_target_name = f"submod_{i}"
while hasattr(gm, new_target_name):
i += 1
new_target_name = f"submod_{i}"
new_node.target = new_target_name
setattr(gm, new_node.target, getattr(sub_gm, node.target))
node_replace_(node, new_node)
if len(output) > 0:
assert len(output) == 1 and len(output[0].args) == 1
new_output = output[0].args[0]
if isinstance(new_output, torch.fx.Node):
# Clear the users of the output node and set
# the users to be the users of original call_module node.
new_output.users.clear()
node_replace_(call_mod_node, new_output)
elif isinstance(new_output, (list, tuple)):
# Pop subgraph output node from users.
for node in new_output:
node.users.pop(output[0])
# Inline the get_item calls for the output node.
get_item_users = nodes_filter(
list(call_mod_node.users.keys()),
lambda node: node.op == "call_function"
and node.target is operator.getitem,
)
# get_item_node.args[1] is the idx referring to new_output[idx]
nodes_map(
get_item_users,
lambda get_item_node: node_replace_(
get_item_node,
new_output[get_item_node.args[1]],
),
)
call_mod_node.graph.erase_node(call_mod_node)
else:
raise NotImplementedError(
f"Unsupported output type {type(new_output)}. Expect it to be a Node or a list/tuple of Nodes."
)
else:
call_mod_node.graph.erase_node(call_mod_node)
gm.delete_all_unused_submodules()
gm.recompile()
return gm
|
Inline the submodule of the given node into the parent module.
Note: we only support the case where submodule takes tensors inputs.
|
python
|
torch/_export/utils.py
| 797
|
[
"call_mod_node"
] |
Optional[torch.fx.GraphModule]
| true
| 14
| 7.12
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
createCtor
|
function createCtor(Ctor) {
return function() {
// Use a `switch` statement to work with class constructors. See
// http://ecma-international.org/ecma-262/7.0/#sec-ecmascript-function-objects-call-thisargument-argumentslist
// for more details.
var args = arguments;
switch (args.length) {
case 0: return new Ctor;
case 1: return new Ctor(args[0]);
case 2: return new Ctor(args[0], args[1]);
case 3: return new Ctor(args[0], args[1], args[2]);
case 4: return new Ctor(args[0], args[1], args[2], args[3]);
case 5: return new Ctor(args[0], args[1], args[2], args[3], args[4]);
case 6: return new Ctor(args[0], args[1], args[2], args[3], args[4], args[5]);
case 7: return new Ctor(args[0], args[1], args[2], args[3], args[4], args[5], args[6]);
}
var thisBinding = baseCreate(Ctor.prototype),
result = Ctor.apply(thisBinding, args);
// Mimic the constructor's `return` behavior.
// See https://es5.github.io/#x13.2.2 for more details.
return isObject(result) ? result : thisBinding;
};
}
|
Creates a function that produces an instance of `Ctor` regardless of
whether it was invoked as part of a `new` expression or by `call` or `apply`.
@private
@param {Function} Ctor The constructor to wrap.
@returns {Function} Returns the new wrapped function.
|
javascript
|
lodash.js
| 5,083
|
[
"Ctor"
] | false
| 2
| 6.4
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
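The arity `switch` in `createCtor` exists because JavaScript's `new` cannot be applied to a dynamic argument list directly. In a language with argument splatting the whole wrapper collapses to one line; a hedged Python sketch:

```python
def create_ctor(cls):
    """Return a factory that instantiates cls with whatever arguments it receives.

    Python's *args/**kwargs make the per-arity switch from the JS
    version unnecessary.
    """
    def factory(*args, **kwargs):
        return cls(*args, **kwargs)
    return factory
```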
peekingIterator
|
public static <T extends @Nullable Object> PeekingIterator<T> peekingIterator(
Iterator<? extends T> iterator) {
if (iterator instanceof PeekingImpl) {
// Safe to cast <? extends T> to <T> because PeekingImpl only uses T
// covariantly (and cannot be subclassed to add non-covariant uses).
@SuppressWarnings("unchecked")
PeekingImpl<T> peeking = (PeekingImpl<T>) iterator;
return peeking;
}
return new PeekingImpl<>(iterator);
}
|
Returns a {@code PeekingIterator} backed by the given iterator.
<p>Calls to the {@code peek} method with no intervening calls to {@code next} do not affect the
iteration, and hence return the same object each time. A subsequent call to {@code next} is
guaranteed to return the same object again. For example:
{@snippet :
PeekingIterator<String> peekingIterator =
Iterators.peekingIterator(Iterators.forArray("a", "b"));
String a1 = peekingIterator.peek(); // returns "a"
String a2 = peekingIterator.peek(); // also returns "a"
String a3 = peekingIterator.next(); // also returns "a"
}
<p>Any structural changes to the underlying iteration (aside from those performed by the
iterator's own {@link PeekingIterator#remove()} method) will leave the iterator in an undefined
state.
<p>The returned iterator does not support removal after peeking, as explained by {@link
PeekingIterator#remove()}.
<p>Note: If the given iterator is already a {@code PeekingIterator}, it <i>might</i> be
returned to the caller, although this is neither guaranteed to occur nor required to be
consistent. For example, this method <i>might</i> choose to pass through recognized
implementations of {@code PeekingIterator} when the behavior of the implementation is known to
meet the contract guaranteed by this method.
<p>There is no {@link Iterable} equivalent to this method, so use this method to wrap each
individual iterator as it is generated.
@param iterator the backing iterator. The {@link PeekingIterator} assumes ownership of this
iterator, so users should cease making direct calls to it after calling this method.
@return a peeking iterator backed by that iterator. Apart from the additional {@link
PeekingIterator#peek()} method, this iterator behaves exactly the same as {@code iterator}.
|
java
|
android/guava/src/com/google/common/collect/Iterators.java
| 1,263
|
[
"iterator"
] | true
| 2
| 7.6
|
google/guava
| 51,352
|
javadoc
| false
|
|
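The peek-then-next contract described in the javadoc (repeated `peek` calls return the same element; the following `next` returns it again) is easy to state as a small wrapper. A Python sketch, not Guava's implementation:

```python
class PeekingIterator:
    """Wrap an iterator so the next element can be inspected without consuming it."""
    _SENTINEL = object()

    def __init__(self, it):
        self._it = iter(it)
        self._peeked = self._SENTINEL

    def peek(self):
        if self._peeked is self._SENTINEL:
            self._peeked = next(self._it)  # raises StopIteration when exhausted
        return self._peeked

    def __next__(self):
        if self._peeked is not self._SENTINEL:
            value, self._peeked = self._peeked, self._SENTINEL
            return value
        return next(self._it)

    def __iter__(self):
        return self
```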
safe_sparse_dot
|
def safe_sparse_dot(a, b, *, dense_output=False):
"""Dot product that handle the sparse matrix case correctly.
Parameters
----------
a : {ndarray, sparse matrix}
b : {ndarray, sparse matrix}
dense_output : bool, default=False
When False, ``a`` and ``b`` both being sparse will yield sparse output.
When True, output will always be a dense array.
Returns
-------
dot_product : {ndarray, sparse matrix}
Sparse if ``a`` and ``b`` are sparse and ``dense_output=False``.
Examples
--------
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.extmath import safe_sparse_dot
>>> X = csr_matrix([[1, 2], [3, 4], [5, 6]])
>>> dot_product = safe_sparse_dot(X, X.T)
>>> dot_product.toarray()
array([[ 5, 11, 17],
[11, 25, 39],
[17, 39, 61]])
"""
xp, _ = get_namespace(a, b)
if a.ndim > 2 or b.ndim > 2:
if sparse.issparse(a):
# sparse is always 2D. Implies b is 3D+
# [i, j] @ [k, ..., l, m, n] -> [i, k, ..., l, n]
b_ = np.rollaxis(b, -2)
b_2d = b_.reshape((b.shape[-2], -1))
ret = a @ b_2d
ret = ret.reshape(a.shape[0], *b_.shape[1:])
elif sparse.issparse(b):
# sparse is always 2D. Implies a is 3D+
# [k, ..., l, m] @ [i, j] -> [k, ..., l, j]
a_2d = a.reshape(-1, a.shape[-1])
ret = a_2d @ b
ret = ret.reshape(*a.shape[:-1], b.shape[1])
else:
# Alternative for `np.dot` when dealing with a or b having
# more than 2 dimensions, that works with the array api.
# If b is 1-dim then the last axis for b is taken otherwise
# if b is >= 2-dim then the second to last axis is taken.
b_axis = -1 if b.ndim == 1 else -2
ret = xp.tensordot(a, b, axes=[-1, b_axis])
elif (
dense_output
and a.ndim == 2
and b.ndim == 2
and a.dtype in (np.float32, np.float64)
and b.dtype in (np.float32, np.float64)
and (sparse.issparse(a) and a.format in ("csc", "csr"))
and (sparse.issparse(b) and b.format in ("csc", "csr"))
):
# Use dedicated fast method for dense_C = sparse_A @ sparse_B
return sparse_matmul_to_dense(a, b)
else:
ret = a @ b
if (
sparse.issparse(a)
and sparse.issparse(b)
and dense_output
and hasattr(ret, "toarray")
):
return ret.toarray()
return ret
|
Dot product that handles the sparse matrix case correctly.
Parameters
----------
a : {ndarray, sparse matrix}
b : {ndarray, sparse matrix}
dense_output : bool, default=False
When False, ``a`` and ``b`` both being sparse will yield sparse output.
When True, output will always be a dense array.
Returns
-------
dot_product : {ndarray, sparse matrix}
Sparse if ``a`` and ``b`` are sparse and ``dense_output=False``.
Examples
--------
>>> from scipy.sparse import csr_matrix
>>> from sklearn.utils.extmath import safe_sparse_dot
>>> X = csr_matrix([[1, 2], [3, 4], [5, 6]])
>>> dot_product = safe_sparse_dot(X, X.T)
>>> dot_product.toarray()
array([[ 5, 11, 17],
[11, 25, 39],
[17, 39, 61]])
|
python
|
sklearn/utils/extmath.py
| 166
|
[
"a",
"b",
"dense_output"
] | false
| 21
| 6.56
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
opj_int64_clamp
|
static INLINE OPJ_INT64 opj_int64_clamp(OPJ_INT64 a, OPJ_INT64 min,
OPJ_INT64 max)
{
if (a < min) {
return min;
}
if (a > max) {
return max;
}
return a;
}
|
Clamp an integer inside an interval
@return
<ul>
<li>Returns a if (min < a < max)
<li>Returns max if (a > max)
<li>Returns min if (a < min)
</ul>
|
cpp
|
3rdparty/openjpeg/openjp2/opj_intmath.h
| 137
|
[
"a",
"min",
"max"
] | true
| 3
| 6.56
|
opencv/opencv
| 85,374
|
doxygen
| false
|
|
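The clamp above is simple enough to port directly; a minimal Python sketch equivalent to `opj_int64_clamp` (the name `clamp` is ours, not OpenJPEG's):

```python
def clamp(a, lo, hi):
    """Limit a to the closed interval [lo, hi], as opj_int64_clamp does."""
    if a < lo:
        return lo
    if a > hi:
        return hi
    return a
```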
abortableErrorIfPossible
|
void abortableErrorIfPossible(RuntimeException e) {
if (canHandleAbortableError()) {
if (needToTriggerEpochBumpFromClient())
clientSideEpochBumpRequired = true;
abortableError(e);
} else {
fatalError(e);
}
}
|
Determines if an error should be treated as abortable or fatal, based on transaction state and configuration.
<ol><li> NOTE: Only use this method for transactional producers </li></ol>
- <b>Abortable Error</b>:
An abortable error can be handled effectively, if epoch bumping is supported.
1) If transactionV2 is enabled, automatic epoch bumping happens at the end of every transaction.
2) If the client can trigger an epoch bump, the abortable error can be handled.
- <b>Fatal Error</b>:
If epoch bumping is not supported, the system cannot recover and the error must be treated as fatal.
@param e the error to determine as either abortable or fatal.
|
java
|
clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java
| 1,382
|
[
"e"
] |
void
| true
| 3
| 6.72
|
apache/kafka
| 31,560
|
javadoc
| false
|
cloneArray
|
function cloneArray(array) {
var length = array ? array.length : 0,
result = Array(length);
while (length--) {
result[length] = array[length];
}
return result;
}
|
Creates a clone of `array`.
@private
@param {Array} array The array to clone.
@returns {Array} Returns the cloned array.
|
javascript
|
fp/_baseConvert.js
| 44
|
[
"array"
] | false
| 3
| 6.24
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
unmodifiableRowSortedTable
|
public static <R extends @Nullable Object, C extends @Nullable Object, V extends @Nullable Object>
RowSortedTable<R, C, V> unmodifiableRowSortedTable(
RowSortedTable<R, ? extends C, ? extends V> table) {
/*
* It's not ? extends R, because it's technically not covariant in R. Specifically,
* table.rowMap().comparator() could return a comparator that only works for the ? extends R.
* Collections.unmodifiableSortedMap makes the same distinction.
*/
return new UnmodifiableRowSortedMap<>(table);
}
|
Returns an unmodifiable view of the specified row-sorted table. This method allows modules to
provide users with "read-only" access to internal tables. Query operations on the returned
table "read through" to the specified table, and attempts to modify the returned table, whether
direct or via its collection views, result in an {@code UnsupportedOperationException}.
<p>The returned table will be serializable if the specified table is serializable.
@param table the row-sorted table for which an unmodifiable view is to be returned
@return an unmodifiable view of the specified table
@since 11.0
|
java
|
android/guava/src/com/google/common/collect/Tables.java
| 623
|
[
"table"
] | true
| 1
| 6.72
|
google/guava
| 51,352
|
javadoc
| false
|
|
get_reference_class
|
def get_reference_class(cls, reference_name: str) -> type[BaseDeadlineReference]:
"""
Get a reference class by its name.
:param reference_name: The name of the reference class to find
"""
try:
return next(
ref_class
for name, ref_class in vars(cls).items()
if isinstance(ref_class, type)
and issubclass(ref_class, cls.BaseDeadlineReference)
and ref_class.__name__ == reference_name
)
except StopIteration:
raise ValueError(f"No reference class found with name: {reference_name}")
|
Get a reference class by its name.
:param reference_name: The name of the reference class to find
|
python
|
airflow-core/src/airflow/models/deadline.py
| 229
|
[
"cls",
"reference_name"
] |
type[BaseDeadlineReference]
| true
| 3
| 7.04
|
apache/airflow
| 43,597
|
sphinx
| false
|
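The lookup pattern in `get_reference_class` — scanning a class's own namespace for a nested subclass matching a name, and converting `StopIteration` into a `ValueError` — can be sketched with hypothetical names (`Registry`, `Base`, `Fixed`, `Relative` are illustrative, not Airflow's):

```python
class Registry:
    """Hypothetical stand-in for a class holding nested reference types."""

    class Base:
        pass

    class Fixed(Base):
        pass

    class Relative(Base):
        pass

    @classmethod
    def get(cls, name):
        # Scan the class namespace for a nested subclass of Base by name,
        # mirroring the generator-plus-next() idiom in get_reference_class.
        try:
            return next(
                obj
                for obj in vars(cls).values()
                if isinstance(obj, type)
                and issubclass(obj, cls.Base)
                and obj.__name__ == name
            )
        except StopIteration:
            raise ValueError(f"No reference class found with name: {name}")
```

`next()` on a generator raises `StopIteration` when nothing matches, which the method re-raises as a caller-friendly `ValueError`.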
getLowPC
|
static bool getLowPC(const DIE &Die, const DWARFUnit &DU, uint64_t &LowPC,
uint64_t &SectionIndex) {
DIEValue DvalLowPc = Die.findAttribute(dwarf::DW_AT_low_pc);
if (!DvalLowPc)
return false;
dwarf::Form Form = DvalLowPc.getForm();
bool AddrOffset = Form == dwarf::DW_FORM_LLVM_addrx_offset;
uint64_t LowPcValue = DvalLowPc.getDIEInteger().getValue();
if (Form == dwarf::DW_FORM_GNU_addr_index || Form == dwarf::DW_FORM_addrx ||
AddrOffset) {
uint32_t Index = AddrOffset ? (LowPcValue >> 32) : LowPcValue;
std::optional<object::SectionedAddress> SA =
DU.getAddrOffsetSectionItem(Index);
if (!SA)
return false;
if (AddrOffset)
SA->Address += (LowPcValue & 0xffffffff);
LowPC = SA->Address;
SectionIndex = SA->SectionIndex;
} else {
LowPC = LowPcValue;
SectionIndex = 0;
}
return true;
}
|
If DW_AT_low_pc exists sets LowPC and returns true.
|
cpp
|
bolt/lib/Rewrite/DWARFRewriter.cpp
| 373
|
[] | true
| 9
| 6.56
|
llvm/llvm-project
| 36,021
|
doxygen
| false
|
|
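The address decoding in `getLowPC` can be sketched in Python: indexed forms look the address up in the unit's address table, and `DW_FORM_LLVM_addrx_offset` packs the table index into the high 32 bits and an addend into the low 32 bits. This is a simplified model (no section indices, plain list as the address table), not BOLT's API:

```python
def resolve_low_pc(form, low_pc_value, addr_table):
    """Decode a DW_AT_low_pc value; addr_table models the unit's address pool."""
    ADDRX_OFFSET = "DW_FORM_LLVM_addrx_offset"
    indexed = {"DW_FORM_GNU_addr_index", "DW_FORM_addrx", ADDRX_OFFSET}
    if form in indexed:
        # addrx_offset keeps the index in the high 32 bits
        index = low_pc_value >> 32 if form == ADDRX_OFFSET else low_pc_value
        addr = addr_table[index]
        if form == ADDRX_OFFSET:
            addr += low_pc_value & 0xFFFFFFFF
        return addr
    # Direct forms carry the address inline.
    return low_pc_value
```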
unique_with_mask
|
def unique_with_mask(values, mask: npt.NDArray[np.bool_] | None = None):
"""See algorithms.unique for docs. Takes a mask for masked arrays."""
values = _ensure_arraylike(values, func_name="unique")
if isinstance(values.dtype, ExtensionDtype):
# Dispatch to extension dtype's unique.
return values.unique()
if isinstance(values, ABCIndex):
# Dispatch to Index's unique.
return values.unique()
original = values
hashtable, values = _get_hashtable_algo(values)
table = hashtable(len(values))
if mask is None:
uniques = table.unique(values)
uniques = _reconstruct_data(uniques, original.dtype, original)
return uniques
else:
uniques, mask = table.unique(values, mask=mask)
uniques = _reconstruct_data(uniques, original.dtype, original)
assert mask is not None # for mypy
return uniques, mask.astype("bool")
|
See algorithms.unique for docs. Takes a mask for masked arrays.
|
python
|
pandas/core/algorithms.py
| 463
|
[
"values",
"mask"
] | true
| 5
| 6
|
pandas-dev/pandas
| 47,362
|
unknown
| false
|
|
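The masked path of `unique_with_mask` — first-seen order, with all masked entries collapsed into a single missing value — can be sketched in pure Python (this models the hashtable behavior, not pandas' actual implementation):

```python
def unique_with_mask(values, mask=None):
    """First-seen-order unique; mask marks entries treated as one missing value."""
    seen = set()
    uniques, out_mask = [], []
    saw_missing = False
    for i, v in enumerate(values):
        if mask is not None and mask[i]:
            if not saw_missing:
                # All masked entries collapse into a single placeholder.
                saw_missing = True
                uniques.append(None)
                out_mask.append(True)
            continue
        if v not in seen:
            seen.add(v)
            uniques.append(v)
            out_mask.append(False)
    return (uniques, out_mask) if mask is not None else uniques
```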
iscomplexobj
|
def iscomplexobj(x):
"""
Check for a complex type or an array of complex numbers.
The type of the input is checked, not the value. Even if the input
has an imaginary part equal to zero, `iscomplexobj` evaluates to True.
Parameters
----------
x : any
The input can be of any type and shape.
Returns
-------
iscomplexobj : bool
The return value, True if `x` is of a complex type or has at least
one complex element.
See Also
--------
isrealobj, iscomplex
Examples
--------
>>> import numpy as np
>>> np.iscomplexobj(1)
False
>>> np.iscomplexobj(1+0j)
True
>>> np.iscomplexobj([3, 1+0j, True])
True
"""
try:
dtype = x.dtype
type_ = dtype.type
except AttributeError:
type_ = asarray(x).dtype.type
return issubclass(type_, _nx.complexfloating)
|
Check for a complex type or an array of complex numbers.
The type of the input is checked, not the value. Even if the input
has an imaginary part equal to zero, `iscomplexobj` evaluates to True.
Parameters
----------
x : any
The input can be of any type and shape.
Returns
-------
iscomplexobj : bool
The return value, True if `x` is of a complex type or has at least
one complex element.
See Also
--------
isrealobj, iscomplex
Examples
--------
>>> import numpy as np
>>> np.iscomplexobj(1)
False
>>> np.iscomplexobj(1+0j)
True
>>> np.iscomplexobj([3, 1+0j, True])
True
|
python
|
numpy/lib/_type_check_impl.py
| 271
|
[
"x"
] | false
| 1
| 6.48
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
mask_indices
|
def mask_indices(n, mask_func, k=0):
"""
Return the indices to access (n, n) arrays, given a masking function.
Assume `mask_func` is a function that, for a square array a of size
``(n, n)`` with a possible offset argument `k`, when called as
``mask_func(a, k)`` returns a new array with zeros in certain locations
(functions like `triu` or `tril` do precisely this). Then this function
returns the indices where the non-zero values would be located.
Parameters
----------
n : int
The returned indices will be valid to access arrays of shape (n, n).
mask_func : callable
A function whose call signature is similar to that of `triu`, `tril`.
That is, ``mask_func(x, k)`` returns a boolean array, shaped like `x`.
`k` is an optional argument to the function.
k : scalar
An optional argument which is passed through to `mask_func`. Functions
like `triu`, `tril` take a second argument that is interpreted as an
offset.
Returns
-------
indices : tuple of arrays.
The `n` arrays of indices corresponding to the locations where
``mask_func(np.ones((n, n)), k)`` is True.
See Also
--------
triu, tril, triu_indices, tril_indices
Examples
--------
>>> import numpy as np
These are the indices that would allow you to access the upper triangular
part of any 3x3 array:
>>> iu = np.mask_indices(3, np.triu)
For example, if `a` is a 3x3 array:
>>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can be passed also to the masking function. This gets us the
indices starting on the first diagonal right of the main one:
>>> iu1 = np.mask_indices(3, np.triu, 1)
with which we now extract only three elements:
>>> a[iu1]
array([1, 2, 5])
"""
m = ones((n, n), int)
a = mask_func(m, k)
return nonzero(a != 0)
|
Return the indices to access (n, n) arrays, given a masking function.
Assume `mask_func` is a function that, for a square array a of size
``(n, n)`` with a possible offset argument `k`, when called as
``mask_func(a, k)`` returns a new array with zeros in certain locations
(functions like `triu` or `tril` do precisely this). Then this function
returns the indices where the non-zero values would be located.
Parameters
----------
n : int
The returned indices will be valid to access arrays of shape (n, n).
mask_func : callable
A function whose call signature is similar to that of `triu`, `tril`.
That is, ``mask_func(x, k)`` returns a boolean array, shaped like `x`.
`k` is an optional argument to the function.
k : scalar
An optional argument which is passed through to `mask_func`. Functions
like `triu`, `tril` take a second argument that is interpreted as an
offset.
Returns
-------
indices : tuple of arrays.
The `n` arrays of indices corresponding to the locations where
``mask_func(np.ones((n, n)), k)`` is True.
See Also
--------
triu, tril, triu_indices, tril_indices
Examples
--------
>>> import numpy as np
These are the indices that would allow you to access the upper triangular
part of any 3x3 array:
>>> iu = np.mask_indices(3, np.triu)
For example, if `a` is a 3x3 array:
>>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> a[iu]
array([0, 1, 2, 4, 5, 8])
An offset can be passed also to the masking function. This gets us the
indices starting on the first diagonal right of the main one:
>>> iu1 = np.mask_indices(3, np.triu, 1)
with which we now extract only three elements:
>>> a[iu1]
array([1, 2, 5])
|
python
|
numpy/lib/_twodim_base_impl.py
| 839
|
[
"n",
"mask_func",
"k"
] | false
| 1
| 6.4
|
numpy/numpy
| 31,054
|
numpy
| false
|
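The `mask_indices` recipe — build an all-ones square array, apply the masking function, collect the coordinates of the nonzero entries — can be reproduced without NumPy, using nested lists and a list-based `triu` for illustration:

```python
def triu(m, k=0):
    """Keep entries on and above the k-th diagonal; zero the rest."""
    n = len(m)
    return [[m[i][j] if j - i >= k else 0 for j in range(n)] for i in range(n)]

def mask_indices(n, mask_func, k=0):
    """Row/column indices where mask_func(ones((n, n)), k) is nonzero."""
    grid = [[1] * n for _ in range(n)]
    masked = mask_func(grid, k)
    rows, cols = [], []
    for i in range(n):
        for j in range(n):
            if masked[i][j]:
                rows.append(i)
                cols.append(j)
    return rows, cols
```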
|
maybeScheduleJob
|
private void maybeScheduleJob() {
if (this.isMaster == false) {
return;
}
// don't schedule the job if the node is shutting down
if (isClusterServiceStoppedOrClosed()) {
logger.trace(
"Skipping scheduling a data stream lifecycle job due to the cluster lifecycle state being: [{}] ",
clusterService.lifecycleState()
);
return;
}
if (scheduler.get() == null) {
scheduler.set(new SchedulerEngine(settings, clock));
scheduler.get().register(this);
}
assert scheduler.get() != null : "scheduler should be available";
scheduledJob = new SchedulerEngine.Job(LIFECYCLE_JOB_NAME, new TimeValueSchedule(pollInterval));
scheduler.get().add(scheduledJob);
}
|
Schedules the data stream lifecycle job if this node is the elected master. The job is not scheduled
while the cluster service is stopping or closed (i.e. the node is shutting down). The scheduler engine
is created and registered lazily the first time a job is scheduled.
|
java
|
modules/data-streams/src/main/java/org/elasticsearch/datastreams/lifecycle/DataStreamLifecycleService.java
| 1,568
|
[] |
void
| true
| 4
| 6.72
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
preventSubstitution
|
function preventSubstitution<T extends Node>(node: T): T {
if (noSubstitution === undefined) noSubstitution = [];
noSubstitution[getNodeId(node)] = true;
return node;
}
|
Prevent substitution of a node for this transformer.
@param node The node which should not be substituted.
|
typescript
|
src/compiler/transformers/module/system.ts
| 2,036
|
[
"node"
] | true
| 2
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
nanmean
|
def nanmean(
values: np.ndarray,
*,
axis: AxisInt | None = None,
skipna: bool = True,
mask: npt.NDArray[np.bool_] | None = None,
) -> float:
"""
    Compute the mean of the elements along an axis, ignoring NaNs
Parameters
----------
values : ndarray
axis : int, optional
skipna : bool, default True
mask : ndarray[bool], optional
nan-mask if known
Returns
-------
float
Unless input is a float array, in which case use the same
precision as the input array.
Examples
--------
>>> from pandas.core import nanops
>>> s = pd.Series([1, 2, np.nan])
>>> nanops.nanmean(s.values)
np.float64(1.5)
"""
if values.dtype == object and len(values) > 1_000 and mask is None:
# GH#54754 if we are going to fail, try to fail-fast
nanmean(values[:1000], axis=axis, skipna=skipna)
dtype = values.dtype
values, mask = _get_values(values, skipna, fill_value=0, mask=mask)
dtype_sum = _get_dtype_max(dtype)
dtype_count = np.dtype(np.float64)
# not using needs_i8_conversion because that includes period
if dtype.kind in "mM":
dtype_sum = np.dtype(np.float64)
elif dtype.kind in "iu":
dtype_sum = np.dtype(np.float64)
elif dtype.kind == "f":
dtype_sum = dtype
dtype_count = dtype
count = _get_counts(values.shape, mask, axis, dtype=dtype_count)
the_sum = values.sum(axis, dtype=dtype_sum)
the_sum = _ensure_numeric(the_sum)
if axis is not None and getattr(the_sum, "ndim", False):
count = cast(np.ndarray, count)
with np.errstate(all="ignore"):
# suppress division by zero warnings
the_mean = the_sum / count
ct_mask = count == 0
if ct_mask.any():
the_mean[ct_mask] = np.nan
else:
the_mean = the_sum / count if count > 0 else np.nan
return the_mean
|
Compute the mean of the elements along an axis, ignoring NaNs
Parameters
----------
values : ndarray
axis : int, optional
skipna : bool, default True
mask : ndarray[bool], optional
nan-mask if known
Returns
-------
float
Unless input is a float array, in which case use the same
precision as the input array.
Examples
--------
>>> from pandas.core import nanops
>>> s = pd.Series([1, 2, np.nan])
>>> nanops.nanmean(s.values)
np.float64(1.5)
|
python
|
pandas/core/nanops.py
| 663
|
[
"values",
"axis",
"skipna",
"mask"
] |
float
| true
| 12
| 8.4
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
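The core of `nanmean` — sum with NaNs treated as zero, divide by the count of non-NaN entries, yield NaN for an empty count — can be sketched in pure Python (a simplified model without the dtype promotion and axis handling of the pandas version):

```python
import math

def nanmean(values, skipna=True):
    """Mean of a flat sequence of floats, ignoring NaNs when skipna is True."""
    if not skipna:
        return sum(values) / len(values) if values else float("nan")
    kept = [v for v in values if not math.isnan(v)]
    # Zero non-NaN entries means the mean is undefined: return NaN.
    return sum(kept) / len(kept) if kept else float("nan")
```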
rejoinNeededOrPending
|
protected synchronized boolean rejoinNeededOrPending() {
// if there's a pending joinFuture, we should try to complete handling it.
return rejoinNeeded || joinFuture != null;
}
|
Check whether the group should be rejoined (e.g. if metadata changes) or whether a
rejoin request is already in flight and needs to be completed.
@return true if it should, false otherwise
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
| 353
|
[] | true
| 2
| 8.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
minimum_fill_value
|
def minimum_fill_value(obj):
"""
Return the maximum value that can be represented by the dtype of an object.
This function is useful for calculating a fill value suitable for
taking the minimum of an array with a given dtype.
Parameters
----------
obj : ndarray, dtype or scalar
        An object that can be queried for its numeric type.
Returns
-------
val : scalar
The maximum representable value.
Raises
------
TypeError
If `obj` isn't a suitable numeric type.
See Also
--------
maximum_fill_value : The inverse function.
set_fill_value : Set the filling value of a masked array.
MaskedArray.fill_value : Return current fill value.
Examples
--------
>>> import numpy as np
>>> import numpy.ma as ma
>>> a = np.int8()
>>> ma.minimum_fill_value(a)
127
>>> a = np.int32()
>>> ma.minimum_fill_value(a)
2147483647
An array of numeric data can also be passed.
>>> a = np.array([1, 2, 3], dtype=np.int8)
>>> ma.minimum_fill_value(a)
127
>>> a = np.array([1, 2, 3], dtype=np.float32)
>>> ma.minimum_fill_value(a)
inf
"""
return _extremum_fill_value(obj, min_filler, "minimum")
|
Return the maximum value that can be represented by the dtype of an object.
This function is useful for calculating a fill value suitable for
taking the minimum of an array with a given dtype.
Parameters
----------
obj : ndarray, dtype or scalar
An object that can be queried for its numeric type.
Returns
-------
val : scalar
The maximum representable value.
Raises
------
TypeError
If `obj` isn't a suitable numeric type.
See Also
--------
maximum_fill_value : The inverse function.
set_fill_value : Set the filling value of a masked array.
MaskedArray.fill_value : Return current fill value.
Examples
--------
>>> import numpy as np
>>> import numpy.ma as ma
>>> a = np.int8()
>>> ma.minimum_fill_value(a)
127
>>> a = np.int32()
>>> ma.minimum_fill_value(a)
2147483647
An array of numeric data can also be passed.
>>> a = np.array([1, 2, 3], dtype=np.int8)
>>> ma.minimum_fill_value(a)
127
>>> a = np.array([1, 2, 3], dtype=np.float32)
>>> ma.minimum_fill_value(a)
inf
|
python
|
numpy/ma/core.py
| 319
|
[
"obj"
] | false
| 1
| 6.16
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
when
|
default ValueProcessor<T> when(Predicate<@Nullable T> predicate) {
return (name, value) -> (predicate.test(value)) ? processValue(name, value) : value;
}
|
Return a new processor from this one that only applies to member with values
that match the given predicate.
@param predicate the predicate that must match
@return a new {@link ValueProcessor} that only applies when the predicate
matches
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
| 1,034
|
[
"predicate"
] | true
| 2
| 7.68
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
chooseOverlappingNodes
|
function chooseOverlappingNodes(span: TextSpan, node: Node, result: Node[]): boolean {
if (!nodeOverlapsWithSpan(node, span)) {
return false;
}
if (textSpanContainsTextRange(span, node)) {
addSourceElement(node, result);
return true;
}
if (isBlockLike(node)) {
return chooseOverlappingBlockLike(span, node, result);
}
if (isClassLike(node)) {
return chooseOverlappingClassLike(span, node, result);
}
addSourceElement(node, result);
return true;
}
|
@returns whether the argument node was included in the result
|
typescript
|
src/services/services.ts
| 2,162
|
[
"span",
"node",
"result"
] | true
| 5
| 6.4
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
removeSingletonIfCreatedForTypeCheckOnly
|
protected boolean removeSingletonIfCreatedForTypeCheckOnly(String beanName) {
if (!this.alreadyCreated.contains(beanName)) {
removeSingleton(beanName);
return true;
}
else {
return false;
}
}
|
Remove the singleton instance (if any) for the given bean name,
but only if it hasn't been used for other purposes than type checking.
@param beanName the name of the bean
@return {@code true} if actually removed, {@code false} otherwise
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/AbstractBeanFactory.java
| 1,812
|
[
"beanName"
] | true
| 2
| 7.76
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
takeAcknowledgedRecords
|
public Map<TopicIdPartition, NodeAcknowledgements> takeAcknowledgedRecords() {
Map<TopicIdPartition, NodeAcknowledgements> acknowledgementMap = new LinkedHashMap<>();
batches.forEach((tip, batch) -> {
int nodeId = batch.nodeId();
Acknowledgements acknowledgements = batch.takeAcknowledgedRecords();
if (!acknowledgements.isEmpty())
acknowledgementMap.put(tip, new NodeAcknowledgements(nodeId, acknowledgements));
});
return acknowledgementMap;
}
|
Removes all acknowledged records from the in-flight records and returns the map of acknowledgements
to send. If some records were not acknowledged, the in-flight records will not be empty after this
method.
@return The map of acknowledgements to send, along with node information
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareFetch.java
| 229
|
[] | true
| 2
| 8.08
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
_convert_git_changes_to_table
|
def _convert_git_changes_to_table(
version: str, changes: str, base_url: str, markdown: bool = True
) -> tuple[str, list[Change]]:
"""
Converts list of changes from its string form to markdown/RST table and array of change information
The changes are in the form of multiple lines where each line consists of:
FULL_COMMIT_HASH SHORT_COMMIT_HASH COMMIT_DATE COMMIT_SUBJECT
    The subject can contain spaces but none of the preceding values can, so we can split
    3 times on spaces to break it up.
:param version: Version from which the changes are
:param changes: list of changes in a form of multiple-line string
:param base_url: base url for the commit URL
:param markdown: if True, Markdown format is used else rst
:return: formatted table + list of changes (starting from the latest)
"""
from tabulate import tabulate
lines = changes.splitlines()
headers = ["Commit", "Committed", "Subject"]
table_data = []
changes_list: list[Change] = []
for line in lines:
if line == "":
continue
change = _get_change_from_line(line, version)
table_data.append(
(
f"[{change.short_hash}]({base_url}{change.full_hash})"
if markdown
else f"`{change.short_hash} <{base_url}{change.full_hash}>`__",
change.date,
f"`{change.message_without_backticks}`"
if markdown
else f"``{change.message_without_backticks}``",
)
)
changes_list.append(change)
header = ""
if not table_data:
return header, []
table = tabulate(
table_data,
headers=headers,
tablefmt="pipe" if markdown else "rst",
colalign=("left", "center", "left"),
)
if not markdown:
header += f"\n\n{version}\n" + "." * len(version) + "\n\n"
release_date = table_data[0][1]
header += f"Latest change: {release_date}\n\n"
return header + table, changes_list
|
Converts list of changes from its string form to markdown/RST table and array of change information
The changes are in the form of multiple lines where each line consists of:
FULL_COMMIT_HASH SHORT_COMMIT_HASH COMMIT_DATE COMMIT_SUBJECT
The subject can contain spaces but none of the preceding values can, so we can split
3 times on spaces to break it up.
:param version: Version from which the changes are
:param changes: list of changes in a form of multiple-line string
:param base_url: base url for the commit URL
:param markdown: if True, Markdown format is used else rst
:return: formatted table + list of changes (starting from the latest)
|
python
|
dev/breeze/src/airflow_breeze/prepare_providers/provider_documentation.py
| 301
|
[
"version",
"changes",
"base_url",
"markdown"
] |
tuple[str, list[Change]]
| true
| 8
| 7.92
|
apache/airflow
| 43,597
|
sphinx
| false
|
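The line-splitting rationale in the docstring above (subject may contain spaces, the three leading fields cannot) comes down to a bounded split; a minimal sketch of that step, with a hypothetical helper name:

```python
def parse_change_line(line):
    """Split 'FULL_HASH SHORT_HASH DATE SUBJECT' at most 3 times,
    so spaces inside the subject survive intact."""
    full_hash, short_hash, date, subject = line.split(" ", 3)
    return full_hash, short_hash, date, subject
```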
_safe_parse_datetime_optional
|
def _safe_parse_datetime_optional(date_to_check: str | None) -> datetime | None:
"""
Parse datetime and raise error for invalid dates.
Allow None values.
:param date_to_check: the string value to be parsed
"""
if date_to_check is None:
return None
try:
return timezone.parse(date_to_check, strict=True)
except (TypeError, ParserError):
raise HTTPException(
400, f"Invalid datetime: {date_to_check!r}. Please check the date parameter have this value."
)
|
Parse datetime and raise error for invalid dates.
Allow None values.
:param date_to_check: the string value to be parsed
|
python
|
airflow-core/src/airflow/api_fastapi/common/parameters.py
| 540
|
[
"date_to_check"
] |
datetime | None
| true
| 2
| 6.88
|
apache/airflow
| 43,597
|
sphinx
| false
|
try_adopt_task_instances
|
def try_adopt_task_instances(self, tis: Sequence[TaskInstance]) -> Sequence[TaskInstance]:
"""
Adopt task instances which have an external_executor_id (the serialized task key).
Anything that is not adopted will be cleared by the scheduler and becomes eligible for re-scheduling.
:param tis: The task instances to adopt.
"""
with Stats.timer("lambda_executor.adopt_task_instances.duration"):
adopted_tis: list[TaskInstance] = []
if serialized_task_keys := [
(ti, ti.external_executor_id) for ti in tis if ti.external_executor_id
]:
for ti, ser_task_key in serialized_task_keys:
try:
task_key = TaskInstanceKey.from_dict(json.loads(ser_task_key))
except Exception:
# If that task fails to deserialize, we should just skip it.
self.log.exception(
"Task failed to be adopted because the key could not be deserialized"
)
continue
self.running_tasks[ser_task_key] = task_key
adopted_tis.append(ti)
if adopted_tis:
tasks = [f"{task} in state {task.state}" for task in adopted_tis]
task_instance_str = "\n\t".join(tasks)
self.log.info(
"Adopted the following %d tasks from a dead executor:\n\t%s",
len(adopted_tis),
task_instance_str,
)
not_adopted_tis = [ti for ti in tis if ti not in adopted_tis]
return not_adopted_tis
|
Adopt task instances which have an external_executor_id (the serialized task key).
Anything that is not adopted will be cleared by the scheduler and becomes eligible for re-scheduling.
:param tis: The task instances to adopt.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/executors/aws_lambda/lambda_executor.py
| 454
|
[
"self",
"tis"
] |
Sequence[TaskInstance]
| true
| 4
| 7.2
|
apache/airflow
| 43,597
|
sphinx
| false
|
toString
|
@Override
public String toString() {
return new ToStringCreator(this).append("name", this.name)
.append("value", this.value)
.append("origin", this.origin)
.toString();
}
|
Return a string representation of this configuration property, including its
name, value and origin.
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationProperty.java
| 115
|
[] |
String
| true
| 1
| 6.4
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
min_x_blocks_sub_kernel
|
def min_x_blocks_sub_kernel(self, sub_kernel: TritonKernel, num: int) -> None:
"""
        Kernels with no_x_dim set to True have no tunable XBLOCK; they have a fixed number of X blocks.
        Grid calculation needs to make sure that they are assigned enough blocks.
"""
min_x_blocks: Union[int, str] = 0
x_numels: Union[int, str] = 0
for tree in sub_kernel.range_trees:
simplified_tree_numel = V.graph.sizevars.simplify(tree.numel)
if tree.prefix == "x":
if isinstance(simplified_tree_numel, (Integer, int)):
x_numels = int(simplified_tree_numel)
else:
x_numels = f"{tree.prefix}numel_{num}"
if sub_kernel.no_x_dim:
min_x_blocks = x_numels
x_numels = (
# pyrefly: ignore [unsupported-operation]
-min_x_blocks
if isinstance(x_numels, int)
# pyrefly: ignore [redundant-cast]
else "-" + cast(str, x_numels)
)
else:
if isinstance(simplified_tree_numel, (Integer, int)):
x_numels = int(simplified_tree_numel)
else:
x_numels = f"{tree.prefix}numel_{num}"
self.min_x_blocks_list.append(min_x_blocks)
self.x_numels_list.append(x_numels)
|
Kernels with no_x_dim set to True have no tunable XBLOCK; they have a fixed number of X blocks.
Grid calculation needs to make sure that they are assigned enough blocks.
|
python
|
torch/_inductor/codegen/triton_combo_kernel.py
| 465
|
[
"self",
"sub_kernel",
"num"
] |
None
| true
| 10
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
indexOf
|
public int indexOf(final String str, final int startIndex) {
return Strings.CS.indexOf(this, str, startIndex);
}
|
Searches the string builder to find the first reference to the specified
string starting searching from the given index.
<p>
Note that a null input string will return -1, whereas the JDK throws an exception.
</p>
@param str the string to find, null returns -1
@param startIndex the index to start at, invalid index rounded to edge
@return the first index of the string, or -1 if not found
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 2,038
|
[
"str",
"startIndex"
] | true
| 1
| 6.64
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
to_sql
|
def to_sql(
self,
frame,
name: str,
if_exists: str = "fail",
index: bool = True,
index_label=None,
schema=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
method: Literal["multi"] | Callable | None = None,
engine: str = "auto",
**engine_kwargs,
) -> int | None:
"""
Write records stored in a DataFrame to a SQL database.
Parameters
----------
frame: DataFrame
name: string
Name of SQL table.
if_exists: {'fail', 'replace', 'append', 'delete_rows'}, default 'fail'
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if it does not exist.
delete_rows: If a table exists, delete all records and insert data.
index : bool, default True
Write DataFrame index as a column
index_label : string or sequence, default None
Column label for index column(s). If None is given (default) and
`index` is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
schema : string, default None
Ignored parameter included for compatibility with SQLAlchemy
version of ``to_sql``.
chunksize : int, default None
If not None, then rows will be written in batches of this
size at a time. If None, all rows will be written at once.
dtype : single type or dict of column name to SQL type, default None
Optional specifying the datatype for columns. The SQL type should
be a string. If all columns are of the same type, one single value
can be used.
method : {None, 'multi', callable}, default None
Controls the SQL insertion clause used:
* None : Uses standard SQL ``INSERT`` clause (one per row).
* 'multi': Pass multiple values in a single ``INSERT`` clause.
* callable with signature ``(pd_table, conn, keys, data_iter)``.
Details and a sample callable implementation can be found in the
section :ref:`insert method <io.sql.method>`.
"""
if dtype:
if not is_dict_like(dtype):
# error: Value expression in dictionary comprehension has incompatible
# type "Union[ExtensionDtype, str, dtype[Any], Type[object],
# Dict[Hashable, Union[ExtensionDtype, Union[str, dtype[Any]],
# Type[str], Type[float], Type[int], Type[complex], Type[bool],
# Type[object]]]]"; expected type "Union[ExtensionDtype, str,
# dtype[Any], Type[object]]"
dtype = dict.fromkeys(frame, dtype) # type: ignore[arg-type]
else:
dtype = cast(dict, dtype)
for col, my_type in dtype.items():
if not isinstance(my_type, str):
raise ValueError(f"{col} ({my_type}) not a string")
table = SQLiteTable(
name,
self,
frame=frame,
index=index,
if_exists=if_exists,
index_label=index_label,
dtype=dtype,
)
table.create()
return table.insert(chunksize, method)
|
Write records stored in a DataFrame to a SQL database.
Parameters
----------
frame: DataFrame
name: string
Name of SQL table.
if_exists: {'fail', 'replace', 'append', 'delete_rows'}, default 'fail'
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if it does not exist.
delete_rows: If a table exists, delete all records and insert data.
index : bool, default True
Write DataFrame index as a column
index_label : string or sequence, default None
Column label for index column(s). If None is given (default) and
`index` is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
schema : string, default None
Ignored parameter included for compatibility with SQLAlchemy
version of ``to_sql``.
chunksize : int, default None
If not None, then rows will be written in batches of this
size at a time. If None, all rows will be written at once.
dtype : single type or dict of column name to SQL type, default None
Optional specifying the datatype for columns. The SQL type should
be a string. If all columns are of the same type, one single value
can be used.
method : {None, 'multi', callable}, default None
Controls the SQL insertion clause used:
* None : Uses standard SQL ``INSERT`` clause (one per row).
* 'multi': Pass multiple values in a single ``INSERT`` clause.
* callable with signature ``(pd_table, conn, keys, data_iter)``.
Details and a sample callable implementation can be found in the
section :ref:`insert method <io.sql.method>`.
|
python
|
pandas/io/sql.py
| 2,801
|
[
"self",
"frame",
"name",
"if_exists",
"index",
"index_label",
"schema",
"chunksize",
"dtype",
"method",
"engine"
] |
int | None
| true
| 6
| 6.96
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
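The `dtype` handling in the code cell above expands a single SQL type string to every column via `dict.fromkeys`, then validates that each value is a string. A minimal standalone sketch of that expansion (the column names and `TEXT` type here are illustrative, not from the source):

```python
def expand_dtype(columns, dtype):
    """Mimic the scalar-dtype expansion done before table creation."""
    if not isinstance(dtype, dict):
        # A single SQL type string applies to every column.
        dtype = dict.fromkeys(columns, dtype)
    for col, sql_type in dtype.items():
        if not isinstance(sql_type, str):
            raise ValueError(f"{col} ({sql_type}) not a string")
    return dtype

print(expand_dtype(["a", "b"], "TEXT"))  # → {'a': 'TEXT', 'b': 'TEXT'}
```

The same validation loop then runs whether the caller passed a scalar or a per-column mapping.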
allocateTag
|
function allocateTag() {
let tag = nextReactTag;
if (tag % 10 === 1) {
tag += 2;
}
nextReactTag = tag + 2;
return tag;
}
|
Copyright (c) Meta Platforms, Inc. and affiliates.
This source code is licensed under the MIT license found in the
LICENSE file in the root directory of this source tree.
@flow
|
javascript
|
packages/react-native-renderer/src/ReactFiberConfigNative.js
| 95
|
[] | false
| 2
| 6.24
|
facebook/react
| 241,750
|
jsdoc
| false
|
|
beginCatchBlock
|
function beginCatchBlock(variable: VariableDeclaration): void {
Debug.assert(peekBlockKind() === CodeBlockKind.Exception);
// generated identifiers should already be unique within a file
let name: Identifier;
if (isGeneratedIdentifier(variable.name)) {
name = variable.name;
hoistVariableDeclaration(variable.name);
}
else {
const text = idText(variable.name as Identifier);
name = declareLocal(text);
if (!renamedCatchVariables) {
renamedCatchVariables = new Map<string, boolean>();
renamedCatchVariableDeclarations = [];
context.enableSubstitution(SyntaxKind.Identifier);
}
renamedCatchVariables.set(text, true);
renamedCatchVariableDeclarations[getOriginalNodeId(variable)] = name;
}
const exception = peekBlock() as ExceptionBlock;
Debug.assert(exception.state < ExceptionBlockState.Catch);
const endLabel = exception.endLabel;
emitBreak(endLabel);
const catchLabel = defineLabel();
markLabel(catchLabel);
exception.state = ExceptionBlockState.Catch;
exception.catchVariable = name;
exception.catchLabel = catchLabel;
emitAssignment(name, factory.createCallExpression(factory.createPropertyAccessExpression(state, "sent"), /*typeArguments*/ undefined, []));
emitNop();
}
|
Enters the `catch` clause of a generated `try` statement.
@param variable The catch variable.
|
typescript
|
src/compiler/transformers/generators.ts
| 2,227
|
[
"variable"
] | true
| 4
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
format
|
@Override
public String format(final Date date) {
return printer.format(date);
}
|
Formats a {@link Date} object using a {@link GregorianCalendar}.
@param date the date to format.
@return the formatted string.
|
java
|
src/main/java/org/apache/commons/lang3/time/FastDateFormat.java
| 444
|
[
"date"
] |
String
| true
| 1
| 6.64
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
substituteCallExpression
|
function substituteCallExpression(node: CallExpression): Expression {
const expression = node.expression;
if (isSuperProperty(expression)) {
const argumentExpression = isPropertyAccessExpression(expression)
? substitutePropertyAccessExpression(expression)
: substituteElementAccessExpression(expression);
return factory.createCallExpression(
factory.createPropertyAccessExpression(argumentExpression, "call"),
/*typeArguments*/ undefined,
[
factory.createThis(),
...node.arguments,
],
);
}
return node;
}
|
Hooks node substitutions.
@param hint A hint as to the intended usage of the node.
@param node The node to substitute.
|
typescript
|
src/compiler/transformers/es2017.ts
| 1,000
|
[
"node"
] | true
| 3
| 6.88
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
items
|
public ConditionMessage items(Style style, Object @Nullable ... items) {
return items(style, (items != null) ? Arrays.asList(items) : null);
}
|
Indicate the items. For example
{@code didNotFind("bean", "beans").items("x", "y")} results in the message "did
not find beans x, y".
@param style the render style
@param items the items (may be {@code null})
@return a built {@link ConditionMessage}
|
java
|
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionMessage.java
| 360
|
[
"style",
"items"
] |
ConditionMessage
| true
| 2
| 8
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
__init__
|
def __init__(
self,
input_nodes: list[Any],
scalars: Optional[dict[str, Union[float, int]]] = None,
out_dtype: Optional[torch.dtype] = None,
mat1_idx: int = -2,
mat2_idx: int = -1,
):
"""
Initialize with a tuple of input nodes.
By default, we assume the last 2 input nodes are mat1 and mat2, but
the caller can adjust when necessary
"""
super().__init__(input_nodes, scalars, out_dtype)
# for mm, we need at least 2 nodes, and we need to know which nodes
# are the main matrixes e.g. addmm is (bias, mat1, mat2) whereas others
# might be (mat1, mat2, scale), etc.
assert len(self._input_nodes) >= 2, "Expected at least 2 input nodes"
# Adjust assertions to handle negative indices
m1_idx, m2_idx = mat1_idx, mat2_idx
if mat1_idx < 0:
m1_idx += len(input_nodes)
if mat2_idx < 0:
m2_idx += len(input_nodes)
assert 0 <= m1_idx < len(input_nodes), f"Invalid mat1_idx: {mat1_idx}"
    assert 0 <= m2_idx < len(input_nodes), f"Invalid mat2_idx: {mat2_idx}"
self._mat1_idx = mat1_idx
self._mat2_idx = mat2_idx
|
Initialize with a tuple of input nodes.
By default, we assume the last 2 input nodes are mat1 and mat2, but
the caller can adjust when necessary
|
python
|
torch/_inductor/kernel_inputs.py
| 216
|
[
"self",
"input_nodes",
"scalars",
"out_dtype",
"mat1_idx",
"mat2_idx"
] | true
| 3
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
|
write_view_information_to_args
|
def write_view_information_to_args(
mutable_arg_names: list[str],
mutable_arg_types: list[torch.Type],
kwargs: dict[str, Any],
arg_to_base_index: dict[str, Any],
):
"""
This function writes the view information into kwargs. It reads mutable_args from kwargs.
and uses arg_to_base_index and tensor information to write ViewInfo into kwargs.
mutable_arg_names: mutable custom operator arg names.
mutable_arg_types: mutable custom operator arg types.
kwargs: the original custom operator args.
arg_to_base_index: maps mutable_arg_name to int | [int] that refers to the base tensor that
corresponds to the input tensor
"""
def write_single_view(prefix: str, tensor: Tensor, base_index: int):
assert f"{prefix}_base_index" not in kwargs
assert f"{prefix}_size" not in kwargs
assert f"{prefix}_stride" not in kwargs
assert f"{prefix}_storage_offset" not in kwargs
assert f"{prefix}_slice_dim" not in kwargs
assert f"{prefix}_slice_start" not in kwargs
assert f"{prefix}_slice_end" not in kwargs
def use_as_strided(tensor):
kwargs[f"{prefix}_size"] = tensor.size()
kwargs[f"{prefix}_stride"] = tensor.stride()
kwargs[f"{prefix}_storage_offset"] = tensor.storage_offset()
def use_slice(dim, start, end):
kwargs[f"{prefix}_slice_dim"] = dim
kwargs[f"{prefix}_slice_start"] = start
kwargs[f"{prefix}_slice_end"] = end
def use_alias():
kwargs[f"{prefix}_alias"] = True
        # The start of the function
if tensor is None:
kwargs[f"{prefix}_base_index"] = None
else:
base = get_base(tensor)
kwargs[f"{prefix}_base_index"] = base_index
if base is None:
# no need to add anything else other than _base_index
return
elif is_alias(base, tensor):
use_alias()
elif (slice_info := try_use_slice(base, tensor)) is not None:
use_slice(*slice_info)
else:
use_as_strided(tensor)
for arg_name, arg_type in zip(mutable_arg_names, mutable_arg_types):
arg = kwargs[arg_name]
if library_utils.is_tensorlist_like_type(arg_type):
if arg is None:
kwargs[f"_{arg_name}_length"] = None
else:
kwargs[f"_{arg_name}_length"] = len(arg)
for i, elem in enumerate(arg):
write_single_view(
f"_{arg_name}_{i}", elem, arg_to_base_index[arg_name][i]
)
elif library_utils.is_tensor_like_type(arg_type):
write_single_view(
f"_{arg_name}",
kwargs[arg_name],
arg_to_base_index.get(arg_name), # type: ignore[arg-type]
)
else:
raise RuntimeError(f"Unsupported type {arg_type}")
|
This function writes the view information into kwargs. It reads mutable_args from kwargs.
and uses arg_to_base_index and tensor information to write ViewInfo into kwargs.
mutable_arg_names: mutable custom operator arg names.
mutable_arg_types: mutable custom operator arg types.
kwargs: the original custom operator args.
arg_to_base_index: maps mutable_arg_name to int | [int] that refers to the base tensor that
corresponds to the input tensor
|
python
|
torch/_higher_order_ops/auto_functionalize.py
| 171
|
[
"mutable_arg_names",
"mutable_arg_types",
"kwargs",
"arg_to_base_index"
] | true
| 14
| 6.48
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
|
maxTimestamp
|
long maxTimestamp();
|
Get the max timestamp or log append time of this record batch.
If the timestamp type is create time, this is the max timestamp among all records contained in this batch and
the value is updated during compaction.
@return The max timestamp
|
java
|
clients/src/main/java/org/apache/kafka/common/record/RecordBatch.java
| 93
|
[] | true
| 1
| 6.8
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
sub_node_can_fuse
|
def sub_node_can_fuse(
self,
node1: BaseSchedulerNode,
node2: BaseSchedulerNode,
other_nodes: tuple[BaseSchedulerNode, ...],
):
"""
node1 is from the current mix order reduction; node2 is another node we want to fuse in.
other_nodes are passed in to check if fusion will introduce producer/consumer relationship
between the inner and outer reduction. If yes, we don't fuse.
"""
assert not isinstance(node1, FusedMixOrderReductions)
assert not isinstance(node2, FusedMixOrderReductions)
# When we fuse extra nodes into a FusedMixOrderReductions node,
# we should not allow recursive mix-order reduction being
# created.
if not self.scheduler.can_fuse(node1, node2, allow_mix_order_reduction=False):
return False
def _get_ancestors(nodes: tuple[BaseSchedulerNode, ...]) -> OrderedSet[str]:
out = OrderedSet()
return out.union(*(n.ancestors for n in nodes))
def _get_operation_names(
nodes: tuple[BaseSchedulerNode, ...],
) -> OrderedSet[str]:
out = OrderedSet()
return out.union(*(n.get_operation_names() for n in nodes))
if other_nodes:
if (_get_ancestors((node1, node2)) & _get_operation_names(other_nodes)) or (
_get_ancestors(other_nodes) & _get_operation_names((node1, node2))
):
return False
return (
not node2.is_reduction()
or typing.cast(
int, self.scheduler.score_fusion_memory(node1, node2, count_bytes=False)
)
>= self.numel
)
|
node1 is from the current mix order reduction; node2 is another node we want to fuse in.
other_nodes are passed in to check if fusion will introduce producer/consumer relationship
between the inner and outer reduction. If yes, we don't fuse.
|
python
|
torch/_inductor/scheduler.py
| 2,093
|
[
"self",
"node1",
"node2",
"other_nodes"
] | true
| 6
| 6
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
|
castCurry
|
function castCurry(name, func, n) {
return (forceCurry || (config.curry && n > 1))
? curry(func, n)
: func;
}
|
Casts `func` to a curried function if needed.
@private
@param {string} name The name of the function to inspect.
@param {Function} func The function to inspect.
@param {number} n The arity of `func`.
@returns {Function} Returns the cast function.
|
javascript
|
fp/_baseConvert.js
| 300
|
[
"name",
"func",
"n"
] | false
| 4
| 6.24
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
moveaxis
|
def moveaxis(a, source, destination):
"""
Move axes of an array to new positions.
Other axes remain in their original order.
Parameters
----------
a : np.ndarray
The array whose axes should be reordered.
source : int or sequence of int
Original positions of the axes to move. These must be unique.
destination : int or sequence of int
Destination positions for each of the original axes. These must also be
unique.
Returns
-------
result : np.ndarray
Array with moved axes. This array is a view of the input array.
See Also
--------
transpose : Permute the dimensions of an array.
swapaxes : Interchange two axes of an array.
Examples
--------
>>> import numpy as np
>>> x = np.zeros((3, 4, 5))
>>> np.moveaxis(x, 0, -1).shape
(4, 5, 3)
>>> np.moveaxis(x, -1, 0).shape
(5, 3, 4)
These all achieve the same result:
>>> np.transpose(x).shape
(5, 4, 3)
>>> np.swapaxes(x, 0, -1).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1], [-1, -2]).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape
(5, 4, 3)
"""
try:
# allow duck-array types if they define transpose
transpose = a.transpose
except AttributeError:
a = asarray(a)
transpose = a.transpose
source = normalize_axis_tuple(source, a.ndim, 'source')
destination = normalize_axis_tuple(destination, a.ndim, 'destination')
if len(source) != len(destination):
raise ValueError('`source` and `destination` arguments must have '
'the same number of elements')
order = [n for n in range(a.ndim) if n not in source]
for dest, src in sorted(zip(destination, source)):
order.insert(dest, src)
result = transpose(order)
return result
|
Move axes of an array to new positions.
Other axes remain in their original order.
Parameters
----------
a : np.ndarray
The array whose axes should be reordered.
source : int or sequence of int
Original positions of the axes to move. These must be unique.
destination : int or sequence of int
Destination positions for each of the original axes. These must also be
unique.
Returns
-------
result : np.ndarray
Array with moved axes. This array is a view of the input array.
See Also
--------
transpose : Permute the dimensions of an array.
swapaxes : Interchange two axes of an array.
Examples
--------
>>> import numpy as np
>>> x = np.zeros((3, 4, 5))
>>> np.moveaxis(x, 0, -1).shape
(4, 5, 3)
>>> np.moveaxis(x, -1, 0).shape
(5, 3, 4)
These all achieve the same result:
>>> np.transpose(x).shape
(5, 4, 3)
>>> np.swapaxes(x, 0, -1).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1], [-1, -2]).shape
(5, 4, 3)
>>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape
(5, 4, 3)
|
python
|
numpy/_core/numeric.py
| 1,481
|
[
"a",
"source",
"destination"
] | false
| 3
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
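The core of `moveaxis` is building the permutation handed to `transpose`: drop the source axes from the identity order, then insert them at their destinations. That list manipulation can be checked without NumPy:

```python
def build_order(ndim, source, destination):
    """Compute the transpose permutation moveaxis would use (axes pre-normalized)."""
    order = [n for n in range(ndim) if n not in source]
    for dest, src in sorted(zip(destination, source)):
        order.insert(dest, src)
    return order

# Moving axis 0 of a 3-D array to the end: shape (3, 4, 5) -> (4, 5, 3)
print(build_order(3, (0,), (2,)))  # → [1, 2, 0]
```

This matches the docstring examples: `moveaxis(x, [0, 1], [-1, -2])` on a 3-D array corresponds to the permutation `[2, 1, 0]`.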
|
cloneIfPossible
|
public static <T> T cloneIfPossible(final T obj) {
final T clone = clone(obj);
return clone == null ? obj : clone;
}
|
Clones an object if possible.
<p>
This method is similar to {@link #clone(Object)}, but will return the provided instance as the return value instead of {@code null} if the instance is
not cloneable. This is more convenient if the caller uses different implementations (e.g. of a service) and some of the implementations do not allow
concurrent processing or have state. In such cases the implementation can simply provide a proper clone implementation and the caller's code does not
have to change.
</p>
@param <T> the type of the object.
@param obj the object to clone, null returns null.
@return the clone if the object implements {@link Cloneable} otherwise the object itself.
@throws CloneFailedException if the object is cloneable and the clone operation fails.
@since 3.0
|
java
|
src/main/java/org/apache/commons/lang3/ObjectUtils.java
| 275
|
[
"obj"
] |
T
| true
| 2
| 8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
getAsText
|
@Override
public String getAsText() {
Object value = getValue();
return (value != null ? value.toString() : "");
}
|
Create a new CharacterEditor instance.
<p>The "allowEmpty" parameter controls whether an empty String is to be
allowed in parsing, i.e. be interpreted as the {@code null} value when
{@link #setAsText(String) text is being converted}. If {@code false},
an {@link IllegalArgumentException} will be thrown at that time.
@param allowEmpty if empty strings are to be allowed
|
java
|
spring-beans/src/main/java/org/springframework/beans/propertyeditors/CharacterEditor.java
| 95
|
[] |
String
| true
| 2
| 6.72
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
value
|
public JSONStringer value(long value) throws JSONException {
if (this.stack.isEmpty()) {
throw new JSONException("Nesting problem");
}
beforeValue();
this.out.append(value);
return this;
}
|
Encodes {@code value} to this stringer.
@param value the value to encode
@return this stringer.
@throws JSONException if processing of json failed
|
java
|
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONStringer.java
| 304
|
[
"value"
] |
JSONStringer
| true
| 2
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
verify_integrity
|
def verify_integrity(self, *, session: Session = NEW_SESSION, dag_version_id: UUIDType) -> None:
"""
Verify the DagRun by checking for removed tasks or tasks that are not in the database yet.
It will set state to removed or add the task if required.
:param dag_version_id: The DAG version ID
:param session: Sqlalchemy ORM Session
"""
from airflow.settings import task_instance_mutation_hook
# Set for the empty default in airflow.settings -- if it's not set this means it has been changed
# Note: Literal[True, False] instead of bool because otherwise it doesn't correctly find the overload.
hook_is_noop: Literal[True, False] = getattr(task_instance_mutation_hook, "is_noop", False)
dag = self.get_dag()
task_ids = self._check_for_removed_or_restored_tasks(
dag, task_instance_mutation_hook, session=session
)
def task_filter(task: Operator) -> bool:
return task.task_id not in task_ids and (
self.run_type == DagRunType.BACKFILL_JOB
or (
task.start_date is None
or self.logical_date is None
or task.start_date <= self.logical_date
)
and (task.end_date is None or self.logical_date is None or self.logical_date <= task.end_date)
)
created_counts: dict[str, int] = defaultdict(int)
task_creator = self._get_task_creator(
created_counts, task_instance_mutation_hook, hook_is_noop, dag_version_id
)
# Create the missing tasks, including mapped tasks
tis_to_create = self._create_tasks(
(task for task in dag.task_dict.values() if task_filter(task)),
task_creator,
session=session,
)
self._create_task_instances(self.dag_id, tis_to_create, created_counts, hook_is_noop, session=session)
|
Verify the DagRun by checking for removed tasks or tasks that are not in the database yet.
It will set state to removed or add the task if required.
:param dag_version_id: The DAG version ID
:param session: Sqlalchemy ORM Session
|
python
|
airflow-core/src/airflow/models/dagrun.py
| 1,693
|
[
"self",
"session",
"dag_version_id"
] |
None
| true
| 8
| 6.88
|
apache/airflow
| 43,597
|
sphinx
| false
|
_get_unique_name
|
def _get_unique_name(
self,
proposed_name: str,
fail_if_exists: bool,
describe_func: Callable[[str], Any],
check_exists_func: Callable[[str, Callable[[str], Any]], bool],
resource_type: str,
) -> str:
"""
Return the proposed name if it doesn't already exist, otherwise returns it with a timestamp suffix.
:param proposed_name: Base name.
:param fail_if_exists: Will throw an error if a resource with that name already exists
instead of finding a new name.
:param check_exists_func: The function to check if the resource exists.
It should take the resource name and a describe function as arguments.
:param resource_type: Type of the resource (e.g., "model", "job").
"""
self._check_resource_type(resource_type)
name = proposed_name
while check_exists_func(name, describe_func):
# this while should loop only once in most cases, just setting it this way to regenerate a name
        # in case there is a collision.
if fail_if_exists:
raise AirflowException(f"A SageMaker {resource_type} with name {name} already exists.")
max_name_len = 63
timestamp = str(time.time_ns() // 1000000000) # only keep the relevant datetime (first 10 digits)
name = f"{proposed_name[: max_name_len - len(timestamp) - 1]}-{timestamp}" # we subtract one to make provision for the dash between the truncated name and timestamp
self.log.info("Changed %s name to '%s' to avoid collision.", resource_type, name)
return name
|
Return the proposed name if it doesn't already exist, otherwise returns it with a timestamp suffix.
:param proposed_name: Base name.
:param fail_if_exists: Will throw an error if a resource with that name already exists
instead of finding a new name.
:param check_exists_func: The function to check if the resource exists.
It should take the resource name and a describe function as arguments.
:param resource_type: Type of the resource (e.g., "model", "job").
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/operators/sagemaker.py
| 153
|
[
"self",
"proposed_name",
"fail_if_exists",
"describe_func",
"check_exists_func",
"resource_type"
] |
str
| true
| 3
| 7.04
|
apache/airflow
| 43,597
|
sphinx
| false
|
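The collision-avoidance scheme above truncates the base name so that name, dash, and a 10-digit epoch-seconds timestamp fit within 63 characters. A standalone sketch of just the truncation step (the 63-character limit is from the code; the example names are made up):

```python
import time

MAX_NAME_LEN = 63

def with_timestamp_suffix(proposed_name: str) -> str:
    """Append a seconds-resolution timestamp, truncating the base to fit."""
    timestamp = str(time.time_ns() // 1_000_000_000)  # keep only epoch seconds
    # Subtract one extra character for the dash between name and timestamp.
    return f"{proposed_name[: MAX_NAME_LEN - len(timestamp) - 1]}-{timestamp}"

print(with_timestamp_suffix("a" * 80))
```

Even an 80-character base name comes back at exactly 63 characters, which is why the loop in the source "should loop only once in most cases".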
closeForRecordAppends
|
public void closeForRecordAppends() {
if (appendStream != CLOSED_STREAM) {
try {
appendStream.close();
} catch (IOException e) {
throw new KafkaException(e);
} finally {
appendStream = CLOSED_STREAM;
}
}
}
|
Release resources required for record appends (e.g. compression buffers). Once this method is called, it's only
possible to update the RecordBatch header.
|
java
|
clients/src/main/java/org/apache/kafka/common/record/MemoryRecordsBuilder.java
| 332
|
[] |
void
| true
| 3
| 6.4
|
apache/kafka
| 31,560
|
javadoc
| false
|
readFileAfterStat
|
function readFileAfterStat(err, stats) {
const context = this.context;
if (err)
return context.close(err);
// TODO(BridgeAR): Check if allocating a smaller chunk is better performance
// wise, similar to the promise based version (less peak memory and chunked
// stringify operations vs multiple C++/JS boundary crossings).
const size = context.size = isFileType(stats, S_IFREG) ? stats[8] : 0;
if (size > kIoMaxLength) {
err = new ERR_FS_FILE_TOO_LARGE(size);
return context.close(err);
}
try {
if (size === 0) {
// TODO(BridgeAR): If an encoding is set, use the StringDecoder to concat
// the result and reuse the buffer instead of allocating a new one.
context.buffers = [];
} else {
context.buffer = Buffer.allocUnsafeSlow(size);
}
} catch (err) {
return context.close(err);
}
context.read();
}
|
Synchronously tests whether or not the given path exists.
@param {string | Buffer | URL} path
@returns {boolean}
|
javascript
|
lib/fs.js
| 305
|
[
"err",
"stats"
] | false
| 7
| 6.4
|
nodejs/node
| 114,839
|
jsdoc
| false
|
|
isStartupShutdownThreadStuck
|
private boolean isStartupShutdownThreadStuck() {
Thread activeThread = this.startupShutdownThread;
if (activeThread != null && activeThread.getState() == Thread.State.WAITING) {
// Indefinitely waiting: might be Thread.join or the like, or System.exit
activeThread.interrupt();
try {
// Leave just a little bit of time for the interruption to show effect
Thread.sleep(1);
}
catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
if (activeThread.getState() == Thread.State.WAITING) {
// Interrupted but still waiting: very likely a System.exit call
return true;
}
}
return false;
}
|
Determine whether an active startup/shutdown thread is currently stuck,
for example, through a {@code System.exit} call in a user component.
|
java
|
spring-context/src/main/java/org/springframework/context/support/AbstractApplicationContext.java
| 1,094
|
[] | true
| 5
| 6.72
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
setValue
|
@Override
public void setValue(@Nullable Object value) {
if (value == null && this.nullAsEmptyCollection) {
super.setValue(createCollection(this.collectionType, 0));
}
else if (value == null || (this.collectionType.isInstance(value) && !alwaysCreateNewCollection())) {
// Use the source value as-is, as it matches the target type.
super.setValue(value);
}
else if (value instanceof Collection<?> source) {
// Convert Collection elements.
Collection<Object> target = createCollection(this.collectionType, source.size());
for (Object elem : source) {
target.add(convertElement(elem));
}
super.setValue(target);
}
else if (value.getClass().isArray()) {
// Convert array elements to Collection elements.
int length = Array.getLength(value);
Collection<Object> target = createCollection(this.collectionType, length);
for (int i = 0; i < length; i++) {
target.add(convertElement(Array.get(value, i)));
}
super.setValue(target);
}
else {
// A plain value: convert it to a Collection with a single element.
Collection<Object> target = createCollection(this.collectionType, 1);
target.add(convertElement(value));
super.setValue(target);
}
}
|
Convert the given value to a Collection of the target type.
|
java
|
spring-beans/src/main/java/org/springframework/beans/propertyeditors/CustomCollectionEditor.java
| 112
|
[
"value"
] |
void
| true
| 9
| 6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
deburr
|
function deburr(string) {
string = toString(string);
return string && string.replace(reLatin, deburrLetter).replace(reComboMark, '');
}
|
Deburrs `string` by converting
[Latin-1 Supplement](https://en.wikipedia.org/wiki/Latin-1_Supplement_(Unicode_block)#Character_table)
and [Latin Extended-A](https://en.wikipedia.org/wiki/Latin_Extended-A)
letters to basic Latin letters and removing
[combining diacritical marks](https://en.wikipedia.org/wiki/Combining_Diacritical_Marks).
@static
@memberOf _
@since 3.0.0
@category String
@param {string} [string=''] The string to deburr.
@returns {string} Returns the deburred string.
@example
_.deburr('déjà vu');
// => 'deja vu'
|
javascript
|
lodash.js
| 14,288
|
[
"string"
] | false
| 2
| 6.8
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
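Lodash's `deburr` uses a lookup table for Latin-1/Extended-A letters plus a combining-mark regex. A rough Python analogue via Unicode NFD decomposition (this handles combining diacritics but not every special case the lookup table covers, e.g. 'ß'):

```python
import unicodedata

def deburr(s: str) -> str:
    """Decompose accented letters, then drop the combining marks."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(deburr("déjà vu"))  # → 'deja vu'
```

NFD splits 'é' into 'e' plus U+0301, so filtering `unicodedata.combining` characters leaves the base letters.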
|
newReferenceArray
|
public static <E extends @Nullable Object> AtomicReferenceArray<E> newReferenceArray(E[] array) {
return new AtomicReferenceArray<>(array);
}
|
Creates an {@code AtomicReferenceArray} instance with the same length as, and all elements
copied from, the given array.
@param array the array to copy elements from
@return a new {@code AtomicReferenceArray} copied from the given array
|
java
|
android/guava/src/com/google/common/util/concurrent/Atomics.java
| 69
|
[
"array"
] | true
| 1
| 6.48
|
google/guava
| 51,352
|
javadoc
| false
|
|
adapt
|
public <T> Supplier<@Nullable T> adapt(Function<CachingConfigurer, @Nullable T> provider) {
return () -> {
CachingConfigurer cachingConfigurer = this.supplier.get();
return (cachingConfigurer != null ? provider.apply(cachingConfigurer) : null);
};
}
|
Adapt the {@link CachingConfigurer} supplier to another supplier
provided by the specified mapping function. If the underlying
{@link CachingConfigurer} is {@code null}, {@code null} is returned
and the mapping function is not invoked.
@param provider the provider to use to adapt the supplier
@param <T> the type of the supplier
@return another supplier mapped by the specified function
|
java
|
spring-context/src/main/java/org/springframework/cache/annotation/AbstractCachingConfiguration.java
| 123
|
[
"provider"
] | true
| 2
| 7.6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
getIpinfoDatabase
|
@Nullable
static Database getIpinfoDatabase(final String databaseType) {
// for ipinfo the database selection is more along the lines of user-agent sniffing than
// string-based dispatch. the specific database_type strings could change in the future,
// hence the somewhat loose nature of this checking.
final String cleanedType = ipinfoTypeCleanup(databaseType);
// early detection on any of the 'extended' types
if (databaseType.contains("extended")) {
// which are not currently supported
logger.trace("returning null for unsupported database_type [{}]", databaseType);
return null;
}
// early detection on 'country_asn' so the 'country' and 'asn' checks don't get faked out
if (cleanedType.contains("country_asn")) {
// but it's not currently supported
logger.trace("returning null for unsupported database_type [{}]", databaseType);
return null;
}
if (cleanedType.contains("asn")) {
return Database.AsnV2;
} else if (cleanedType.contains("country")) {
return Database.CountryV2;
} else if (cleanedType.contains("location")) { // note: catches 'location' and 'geolocation' ;)
return Database.CityV2;
} else if (cleanedType.contains("privacy")) {
return Database.PrivacyDetection;
} else {
// no match was found
logger.trace("returning null for unsupported database_type [{}]", databaseType);
return null;
}
}
|
Cleans up the database_type String from an ipinfo database by splitting on punctuation, removing stop words, and then joining
with an underscore.
<p>
e.g. "ipinfo free_foo_sample.mmdb" -> "foo"
@param type the database_type from an ipinfo database
@return a cleaned up database_type string
|
java
|
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IpinfoIpDataLookups.java
| 77
|
[
"databaseType"
] |
Database
| true
| 7
| 8.4
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.