| function_name (string, 1–57 chars) | function_code (string, 20–4.99k chars) | documentation (string, 50–2k chars) | language (5 classes) | file_path (string, 8–166 chars) | line_number (int32, 4–16.7k) | parameters (list, 0–20 items) | return_type (string, 0–131 chars) | has_type_hints (bool) | complexity (int32, 1–51) | quality_score (float32, 6–9.68) | repo_name (34 classes) | repo_stars (int32, 2.9k–242k) | docstring_style (7 classes) | is_async (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
synchronizedDeque
|
@J2ktIncompatible // Synchronized
public static <E extends @Nullable Object> Deque<E> synchronizedDeque(Deque<E> deque) {
return Synchronized.deque(deque, null);
}
|
Returns a synchronized (thread-safe) deque backed by the specified deque. In order to guarantee
serial access, it is critical that <b>all</b> access to the backing deque is accomplished
through the returned deque.
<p>It is imperative that the user manually synchronize on the returned deque when accessing any
of the deque's iterators:
{@snippet :
Deque<E> deque = Queues.synchronizedDeque(Queues.newArrayDeque());
...
deque.add(element); // Needn't be in synchronized block
...
synchronized (deque) { // Must synchronize on deque!
Iterator<E> i = deque.iterator(); // Must be in synchronized block
while (i.hasNext()) {
foo(i.next());
}
}
}
<p>Failure to follow this advice may result in non-deterministic behavior.
<p>The returned deque will be serializable if the specified deque is serializable.
@param deque the deque to be wrapped in a synchronized view
@return a synchronized view of the specified deque
@since 15.0
|
java
|
android/guava/src/com/google/common/collect/Queues.java
| 491
|
[
"deque"
] | true
| 1
| 6.64
|
google/guava
| 51,352
|
javadoc
| false
|
|
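The iterate-under-a-lock discipline the Javadoc above requires can be sketched in Python (the names here are illustrative, not part of Guava; Python's `deque` makes single appends atomic, but iteration is still a compound operation):

```python
import threading
from collections import deque

shared = deque()
lock = threading.Lock()  # analogous to synchronizing on the returned deque

def add(element):
    # Single operations are wrapped for the caller ("needn't be in synchronized block")
    with lock:
        shared.append(element)

def snapshot():
    # Iteration is a compound operation: hold the lock for its whole duration,
    # mirroring the "synchronized (deque) { ... iterator ... }" block above.
    with lock:
        return [item for item in shared]

add(1)
add(2)
```

Without the lock around the whole loop, a concurrent `add` could run mid-iteration, which is exactly the non-deterministic behavior the Javadoc warns about.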
clear
|
public StrBuilder clear() {
size = 0;
return this;
}
|
Clears the string builder (convenience Collections API style method).
<p>
This method does not reduce the size of the internal character buffer.
To do that, call {@code clear()} followed by {@link #minimizeCapacity()}.
</p>
<p>
This method is the same as {@link #setLength(int)} called with zero
and is provided to match the API of Collections.
</p>
@return {@code this} instance.
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 1,608
|
[] |
StrBuilder
| true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
assertUrlIsNotMalformed
|
public static void assertUrlIsNotMalformed(String url) {
if (url == null || !url.startsWith(PREFIX)) {
throw new IllegalArgumentException("'url' must not be null and must use 'nested' protocol");
}
NestedLocation.parse(url.substring(PREFIX.length()));
}
|
Assert that the specified URL is a valid "nested" URL.
@param url the URL to check
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/net/protocol/nested/Handler.java
| 55
|
[
"url"
] |
void
| true
| 3
| 6.88
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
makeHeartbeatRequestAndLogResponse
|
private NetworkClientDelegate.UnsentRequest makeHeartbeatRequestAndLogResponse(final long currentTimeMs) {
return makeHeartbeatRequest(currentTimeMs).whenComplete((response, exception) -> {
if (response != null) {
metricsManager.recordRequestLatency(response.requestLatencyMs());
Errors error = Errors.forCode(((StreamsGroupHeartbeatResponse) response.responseBody()).data().errorCode());
if (error == Errors.NONE)
logger.debug("StreamsGroupHeartbeatRequest responded successfully: {}", response);
else
logger.error("StreamsGroupHeartbeatRequest failed because of {}: {}", error, response);
} else {
logger.error("StreamsGroupHeartbeatRequest failed because of unexpected exception.", exception);
}
});
}
|
Makes a heartbeat request and attaches a completion handler that records the request latency and logs the outcome: success is logged at debug level, while an error code in the response or an unexpected exception is logged at error level.
@param currentTimeMs the current time in milliseconds
@return the unsent heartbeat request with the logging handler attached
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsGroupHeartbeatRequestManager.java
| 479
|
[
"currentTimeMs"
] | true
| 3
| 6.56
|
apache/kafka
| 31,560
|
javadoc
| false
|
|
as
|
default <R> ValueExtractor<R> as(Extractor<T, R> extractor) {
return (instance) -> apply(extract(instance), extractor);
}
|
Adapt the extracted value.
@param <R> the result type
@param extractor the extractor to use
@return a new {@link ValueExtractor}
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
| 727
|
[
"extractor"
] | true
| 1
| 6.48
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
callback
|
public static <C, A> Callback<C, A> callback(Class<C> callbackType, C callbackInstance, A argument,
@Nullable Object @Nullable ... additionalArguments) {
Assert.notNull(callbackType, "'callbackType' must not be null");
Assert.notNull(callbackInstance, "'callbackInstance' must not be null");
return new Callback<>(callbackType, callbackInstance, argument, additionalArguments);
}
|
Start a call to a single callback instance, dealing with common generic type
concerns and exceptions.
@param callbackType the callback type (a {@link FunctionalInterface functional
interface})
@param callbackInstance the callback instance (may be a lambda)
@param argument the primary argument passed to the callback
@param additionalArguments any additional arguments passed to the callback
@param <C> the callback type
@param <A> the primary argument type
@return a {@link Callback} instance that can be invoked.
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/util/LambdaSafe.java
| 72
|
[
"callbackType",
"callbackInstance",
"argument"
] | true
| 1
| 6.08
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
applyAsInt
|
int applyAsInt(int operand) throws E;
|
Applies this operator to the given operand.
@param operand the operand
@return the operator result
@throws E Thrown when the operator fails.
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableIntUnaryOperator.java
| 76
|
[
"operand"
] | true
| 1
| 6.64
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
message
|
public String message() {
if (exception != null)
return exception.getMessage();
return toString();
}
|
Get a friendly description of the error (if one is available).
@return the error message
|
java
|
clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
| 498
|
[] |
String
| true
| 2
| 8.08
|
apache/kafka
| 31,560
|
javadoc
| false
|
reduce
|
public O reduce(final O identity, final BinaryOperator<O> accumulator) {
makeTerminated();
return stream().reduce(identity, accumulator);
}
|
Performs a reduction on the elements of this stream, using the provided
identity value and an associative accumulation function, and returns
the reduced value. This is equivalent to:
<pre>{@code
T result = identity;
for (T element : this stream)
result = accumulator.apply(result, element)
return result;
}</pre>
but is not constrained to execute sequentially.
<p>
The {@code identity} value must be an identity for the accumulator
function. This means that for all {@code t},
{@code accumulator.apply(identity, t)} is equal to {@code t}.
The {@code accumulator} function must be an associative function.
</p>
<p>
This is an intermediate operation.
</p>
<p>Note: sum, min, max, average, and string concatenation are all special
cases of reduction. Summing a stream of numbers can be expressed as:</p>
<pre>{@code
Integer sum = integers.reduce(0, (a, b) -> a+b);
}</pre>
or:
<pre>{@code
Integer sum = integers.reduce(0, Integer::sum);
}</pre>
<p>
While this may seem a more roundabout way to perform an aggregation
compared to simply mutating a running total in a loop, reduction
operations parallelize more gracefully, without needing additional
synchronization and with greatly reduced risk of data races.
</p>
@param identity the identity value for the accumulating function.
@param accumulator an associative, non-interfering, stateless
function for combining two values.
@return the result of the reduction.
|
java
|
src/main/java/org/apache/commons/lang3/Streams.java
| 441
|
[
"identity",
"accumulator"
] |
O
| true
| 1
| 6.48
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
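The identity/accumulator contract described above maps directly onto Python's `functools.reduce`; a minimal sketch of the same reduction:

```python
import operator
from functools import reduce

integers = [1, 2, 3, 4]

# 0 is an identity for addition: accumulator(identity, t) == t for all t
total = reduce(lambda a, b: a + b, integers, 0)

# Same reduction with a named function, mirroring the Integer::sum form
total_named = reduce(operator.add, integers, 0)
```

As in the Javadoc, the associativity of the accumulator is what allows the reduction to be split and parallelized without extra synchronization.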
total_lines
|
def total_lines(self) -> int:
"""
Return the total number of lines captured from the stream.
Returns:
The sum of lines stored in the buffer and lines written to disk.
"""
return self._disk_lines + len(self._buffer)
|
Return the total number of lines captured from the stream.
Returns:
The sum of lines stored in the buffer and lines written to disk.
|
python
|
airflow-core/src/airflow/utils/log/log_stream_accumulator.py
| 102
|
[
"self"
] |
int
| true
| 1
| 6.56
|
apache/airflow
| 43,597
|
unknown
| false
|
format
|
@Override // Therefore has to use StringBuffer
public StringBuffer format(final Object obj, final StringBuffer toAppendTo,
final FieldPosition pos) {
return formatter.format(obj, toAppendTo, pos);
}
|
Uses the formatter Format instance.
@param obj the object to format
@param toAppendTo the {@link StringBuffer} to append to
@param pos the FieldPosition to use (or ignore).
@return {@code toAppendTo}
@see Format#format(Object, StringBuffer, FieldPosition)
|
java
|
src/main/java/org/apache/commons/lang3/text/CompositeFormat.java
| 68
|
[
"obj",
"toAppendTo",
"pos"
] |
StringBuffer
| true
| 1
| 6.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
findDimensionFields
|
private boolean findDimensionFields(
String indexName,
Settings allSettings,
List<CompressedXContent> combinedTemplateMappings,
List<String> dimensions
) {
var tmpIndexMetadata = IndexMetadata.builder(indexName);
int dummyPartitionSize = IndexMetadata.INDEX_ROUTING_PARTITION_SIZE_SETTING.get(allSettings);
int dummyShards = allSettings.getAsInt(
IndexMetadata.SETTING_NUMBER_OF_SHARDS,
dummyPartitionSize == 1 ? 1 : dummyPartitionSize + 1
);
int shardReplicas = allSettings.getAsInt(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0);
var finalResolvedSettings = Settings.builder()
.put(IndexMetadata.SETTING_VERSION_CREATED, IndexVersion.current())
.put(allSettings)
.put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, dummyShards)
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, shardReplicas)
.put(IndexMetadata.SETTING_INDEX_UUID, UUIDs.randomBase64UUID())
.put(IndexSettings.MODE.getKey(), IndexMode.TIME_SERIES)
// Avoid failing because index.routing_path is missing
.putList(INDEX_ROUTING_PATH.getKey(), List.of("path"))
.build();
tmpIndexMetadata.settings(finalResolvedSettings);
// Create MapperService just to extract keyword dimension fields:
try (var mapperService = mapperServiceFactory.apply(tmpIndexMetadata.build())) {
mapperService.merge(MapperService.SINGLE_MAPPING_NAME, combinedTemplateMappings, MapperService.MergeReason.INDEX_TEMPLATE);
DocumentMapper documentMapper = mapperService.documentMapper();
return findDimensionFields(dimensions, documentMapper);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
|
Find fields in mapping that are time_series_dimension enabled.
Using MapperService here has an overhead, but allows the mappings from template to
be merged correctly and fetching the fields without manually parsing the mappings.
<p>
Alternatively this method can instead parse mappings into map of maps and merge that and
iterate over all values to find the field that can serve as routing value. But this requires
mapping specific logic to exist here.
@param indexName the name of the index for which the dimension fields are being found
@param allSettings the settings of the index
@param combinedTemplateMappings the combined mappings from index templates
(if any) that are applied to the index
@param dimensions a list to which the found dimension fields will be added
@return true if all potential dimension fields can be matched via the dimensions in the list, false otherwise
|
java
|
modules/data-streams/src/main/java/org/elasticsearch/datastreams/DataStreamIndexSettingsProvider.java
| 204
|
[
"indexName",
"allSettings",
"combinedTemplateMappings",
"dimensions"
] | true
| 3
| 8.08
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
colorize
|
def colorize(text="", opts=(), **kwargs):
"""
Return your text, enclosed in ANSI graphics codes.
Depends on the keyword arguments 'fg' and 'bg', and the contents of
the opts tuple/list.
Return the RESET code if no parameters are given.
Valid colors:
'black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'white'
Valid options:
'bold'
'underscore'
'blink'
'reverse'
'conceal'
'noreset' - string will not be auto-terminated with the RESET code
Examples:
colorize('hello', fg='red', bg='blue', opts=('blink',))
colorize()
colorize('goodbye', opts=('underscore',))
print(colorize('first line', fg='red', opts=('noreset',)))
print('this should be red too')
print(colorize('and so should this'))
print('this should not be red')
"""
code_list = []
if text == "" and len(opts) == 1 and opts[0] == "reset":
return "\x1b[%sm" % RESET
for k, v in kwargs.items():
if k == "fg":
code_list.append(foreground[v])
elif k == "bg":
code_list.append(background[v])
for o in opts:
if o in opt_dict:
code_list.append(opt_dict[o])
if "noreset" not in opts:
text = "%s\x1b[%sm" % (text or "", RESET)
return "%s%s" % (("\x1b[%sm" % ";".join(code_list)), text or "")
|
Return your text, enclosed in ANSI graphics codes.
Depends on the keyword arguments 'fg' and 'bg', and the contents of
the opts tuple/list.
Return the RESET code if no parameters are given.
Valid colors:
'black', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'white'
Valid options:
'bold'
'underscore'
'blink'
'reverse'
'conceal'
'noreset' - string will not be auto-terminated with the RESET code
Examples:
colorize('hello', fg='red', bg='blue', opts=('blink',))
colorize()
colorize('goodbye', opts=('underscore',))
print(colorize('first line', fg='red', opts=('noreset',)))
print('this should be red too')
print(colorize('and so should this'))
print('this should not be red')
|
python
|
django/utils/termcolors.py
| 19
|
[
"text",
"opts"
] | false
| 12
| 7.2
|
django/django
| 86,204
|
unknown
| false
|
|
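A stripped-down sketch of the same ANSI-escape composition (the color and option tables here are abbreviated; the real Django helper supports the full sets listed above):

```python
RESET = "0"
FOREGROUND = {"red": "31", "green": "32", "blue": "34"}  # abbreviated table
BACKGROUND = {"red": "41", "green": "42", "blue": "44"}
OPT_DICT = {"bold": "1", "underscore": "4", "blink": "5"}

def mini_colorize(text="", opts=(), fg=None, bg=None):
    codes = []
    if fg:
        codes.append(FOREGROUND[fg])
    if bg:
        codes.append(BACKGROUND[bg])
    codes.extend(OPT_DICT[o] for o in opts if o in OPT_DICT)
    if "noreset" not in opts:
        # Auto-terminate with the RESET code, as the docstring describes
        text = "%s\x1b[%sm" % (text, RESET)
    return "\x1b[%sm%s" % (";".join(codes), text)
```

For example, `mini_colorize("hello", fg="red")` produces `"\x1b[31mhello\x1b[0m"`: the SGR prefix, the text, then the reset sequence.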
partitionLead
|
synchronized Long partitionLead(TopicPartition tp) {
TopicPartitionState topicPartitionState = assignedState(tp);
return topicPartitionState.logStartOffset == null ? null : topicPartitionState.position.offset - topicPartitionState.logStartOffset;
}
|
Returns the lead of the given partition: the difference between the current consumed position and the log start offset.
@param tp the partition to compute the lead for
@return the partition lead, or {@code null} if the log start offset is not known
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
| 670
|
[
"tp"
] |
Long
| true
| 2
| 6.32
|
apache/kafka
| 31,560
|
javadoc
| false
|
getJavaDoc
|
private String getJavaDoc(RecordComponentElement recordComponent) {
String recordJavadoc = this.env.getElementUtils().getDocComment(recordComponent.getEnclosingElement());
if (recordJavadoc != null) {
Pattern paramJavadocPattern = paramJavadocPattern(recordComponent.getSimpleName().toString());
Matcher paramJavadocMatcher = paramJavadocPattern.matcher(recordJavadoc);
if (paramJavadocMatcher.find()) {
String paramJavadoc = cleanUpJavaDoc(paramJavadocMatcher.group());
return paramJavadoc.isEmpty() ? null : paramJavadoc;
}
}
return null;
}
|
Return the Javadoc for the specified record component, extracted from the
{@code @param} tag in the enclosing record's document comment, or {@code null} if none is available.
@param recordComponent the record component
@return the record component's Javadoc, or {@code null} if not available
|
java
|
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/TypeUtils.java
| 252
|
[
"recordComponent"
] |
String
| true
| 4
| 7.92
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
close
|
private void close(Duration timeout, boolean swallowException) {
long timeoutMs = timeout.toMillis();
if (timeoutMs < 0)
throw new IllegalArgumentException("The timeout cannot be negative.");
log.info("Closing the Kafka producer with timeoutMillis = {} ms.", timeoutMs);
// this will keep track of the first encountered exception
AtomicReference<Throwable> firstException = new AtomicReference<>();
boolean invokedFromCallback = Thread.currentThread() == this.ioThread;
if (timeoutMs > 0) {
if (invokedFromCallback) {
log.warn("Overriding close timeout {} ms to 0 ms in order to prevent useless blocking due to self-join. " +
"This means you have incorrectly invoked close with a non-zero timeout from the producer call-back.",
timeoutMs);
} else {
// Try to close gracefully.
final Timer closeTimer = time.timer(timeout);
clientTelemetryReporter.ifPresent(ClientTelemetryReporter::initiateClose);
closeTimer.update();
if (this.sender != null) {
this.sender.initiateClose();
closeTimer.update();
}
if (this.ioThread != null) {
try {
this.ioThread.join(closeTimer.remainingMs());
} catch (InterruptedException t) {
firstException.compareAndSet(null, new InterruptException(t));
log.error("Interrupted while joining ioThread", t);
} finally {
closeTimer.update();
}
}
}
}
if (this.sender != null && this.ioThread != null && this.ioThread.isAlive()) {
log.info("Proceeding to force close the producer since pending requests could not be completed " +
"within timeout {} ms.", timeoutMs);
this.sender.forceClose();
// Only join the sender thread when not calling from callback.
if (!invokedFromCallback) {
try {
this.ioThread.join();
} catch (InterruptedException e) {
firstException.compareAndSet(null, new InterruptException(e));
}
}
}
Utils.closeQuietly(interceptors, "producer interceptors", firstException);
Utils.closeQuietly(producerMetrics, "producer metrics wrapper", firstException);
Utils.closeQuietly(metrics, "producer metrics", firstException);
Utils.closeQuietly(keySerializerPlugin, "producer keySerializer", firstException);
Utils.closeQuietly(valueSerializerPlugin, "producer valueSerializer", firstException);
Utils.closeQuietly(partitionerPlugin, "producer partitioner", firstException);
clientTelemetryReporter.ifPresent(reporter -> Utils.closeQuietly(reporter, "producer telemetry reporter", firstException));
AppInfoParser.unregisterAppInfo(JMX_PREFIX, clientId, metrics);
Throwable exception = firstException.get();
if (exception != null && !swallowException) {
if (exception instanceof InterruptException) {
throw (InterruptException) exception;
}
throw new KafkaException("Failed to close kafka producer", exception);
}
log.debug("Kafka producer has been closed");
}
|
This method waits up to <code>timeout</code> for the producer to complete the sending of all incomplete requests.
<p>
If the producer is unable to complete all requests before the timeout expires, this method will fail
any unsent and unacknowledged records immediately. It will also abort the ongoing transaction if it's not
already completing.
<p>
If invoked from within a {@link Callback} this method will not block and will be equivalent to
<code>close(Duration.ofMillis(0))</code>. This is done since no further sending will happen while
blocking the I/O thread of the producer.
@param timeout The maximum time to wait for producer to complete any pending requests. The value should be
non-negative. Specifying a timeout of zero means do not wait for pending send requests to complete.
@throws InterruptException If the thread is interrupted while blocked.
@throws KafkaException If an unexpected error occurs while trying to close the client, this error should be treated
as fatal and indicate the client is no longer usable.
@throws IllegalArgumentException If the <code>timeout</code> is negative.
|
java
|
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
| 1,509
|
[
"timeout",
"swallowException"
] |
void
| true
| 15
| 6.8
|
apache/kafka
| 31,560
|
javadoc
| false
|
getUnsafe
|
private static Unsafe getUnsafe() {
try {
return Unsafe.getUnsafe();
} catch (SecurityException tryReflectionInstead) {
}
try {
return AccessController.doPrivileged(
new PrivilegedExceptionAction<Unsafe>() {
@Override
public Unsafe run() throws Exception {
Class<Unsafe> k = Unsafe.class;
for (Field f : k.getDeclaredFields()) {
f.setAccessible(true);
Object x = f.get(null);
if (k.isInstance(x)) return k.cast(x);
}
throw new NoSuchFieldError("the Unsafe");
}
});
} catch (PrivilegedActionException e) {
throw new RuntimeException("Could not initialize intrinsics", e.getCause());
}
}
|
Returns a sun.misc.Unsafe. Suitable for use in a 3rd party package. Replace with a simple call
to Unsafe.getUnsafe when integrating into a jdk.
@return a sun.misc.Unsafe
|
java
|
android/guava/src/com/google/common/cache/Striped64.java
| 294
|
[] |
Unsafe
| true
| 4
| 7.92
|
google/guava
| 51,352
|
javadoc
| false
|
keep
|
public static String keep(final String str, final String... set) {
if (str == null) {
return null;
}
if (str.isEmpty() || deepEmpty(set)) {
return StringUtils.EMPTY;
}
return modify(str, set, true);
}
|
Takes an argument in set-syntax, see evaluateSet,
and keeps any of characters present in the specified string.
<pre>
CharSetUtils.keep(null, *) = null
CharSetUtils.keep("", *) = ""
CharSetUtils.keep(*, null) = ""
CharSetUtils.keep(*, "") = ""
CharSetUtils.keep("hello", "hl") = "hll"
CharSetUtils.keep("hello", "le") = "ell"
</pre>
@see CharSet#getInstance(String...) for set-syntax.
@param str String to keep characters from, may be null
@param set String[] set of characters to keep, may be null
@return the modified String, {@code null} if null string input
@since 2.0
|
java
|
src/main/java/org/apache/commons/lang3/CharSetUtils.java
| 157
|
[
"str"
] |
String
| true
| 4
| 7.6
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
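Ignoring the set-syntax handling (ranges such as `"a-e"` that `evaluateSet` supports), the core keep operation reduces to a character filter; a hedged Python sketch:

```python
def keep(s, chars):
    """Keep only the characters of s that appear in chars (no set-syntax)."""
    if s is None:
        return None
    if not s or not chars:
        return ""
    allowed = set(chars)
    # Order and duplicates of s are preserved; only membership is tested
    return "".join(c for c in s if c in allowed)
```

This reproduces the table in the Javadoc above for plain character sets, e.g. `keep("hello", "hl")` yields `"hll"`.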
shutdownAndAwaitTermination
|
@CanIgnoreReturnValue
@J2ktIncompatible
@GwtIncompatible // concurrency
@SuppressWarnings("GoodTime") // should accept a java.time.Duration
public static boolean shutdownAndAwaitTermination(
ExecutorService service, long timeout, TimeUnit unit) {
long halfTimeoutNanos = unit.toNanos(timeout) / 2;
// Disable new tasks from being submitted
service.shutdown();
try {
// Wait for half the duration of the timeout for existing tasks to terminate
if (!service.awaitTermination(halfTimeoutNanos, TimeUnit.NANOSECONDS)) {
// Cancel currently executing tasks
service.shutdownNow();
// Wait the other half of the timeout for tasks to respond to being cancelled
service.awaitTermination(halfTimeoutNanos, TimeUnit.NANOSECONDS);
}
} catch (InterruptedException ie) {
// Preserve interrupt status
Thread.currentThread().interrupt();
// (Re-)Cancel if current thread also interrupted
service.shutdownNow();
}
return service.isTerminated();
}
|
Shuts down the given executor service gradually, first disabling new submissions and later, if
necessary, cancelling remaining tasks.
<p>The method takes the following steps:
<ol>
<li>calls {@link ExecutorService#shutdown()}, disabling acceptance of new submitted tasks.
<li>awaits executor service termination for half of the specified timeout.
<li>if the timeout expires, it calls {@link ExecutorService#shutdownNow()}, cancelling
pending tasks and interrupting running tasks.
<li>awaits executor service termination for the other half of the specified timeout.
</ol>
<p>If, at any step of the process, the calling thread is interrupted, the method calls {@link
ExecutorService#shutdownNow()} and returns.
<p>For a version of this method that waits <i>indefinitely</i>, use {@link
ExecutorService#close}.
@param service the {@code ExecutorService} to shut down
@param timeout the maximum time to wait for the {@code ExecutorService} to terminate
@param unit the time unit of the timeout argument
@return {@code true} if the {@code ExecutorService} was terminated successfully, {@code false}
if the call timed out or was interrupted
@since 17.0
|
java
|
android/guava/src/com/google/common/util/concurrent/MoreExecutors.java
| 1,027
|
[
"service",
"timeout",
"unit"
] | true
| 3
| 7.6
|
google/guava
| 51,352
|
javadoc
| false
|
|
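The half-timeout pattern above (graceful wait, then forced cancellation, then a second wait) can be sketched with Python threads; here a `stop_event` stands in for the task interruption that `shutdownNow()` performs:

```python
import threading

def shutdown_and_await(worker, stop_event, timeout):
    half = timeout / 2
    worker.join(half)               # phase 1: wait for graceful completion
    if worker.is_alive():
        stop_event.set()            # phase 2: "shutdownNow" — ask the task to stop
        worker.join(half)           # wait the other half for it to comply
    return not worker.is_alive()    # True only if the worker terminated

stop = threading.Event()
worker = threading.Thread(target=lambda: stop.wait(60))  # blocks until cancelled
worker.start()
terminated = shutdown_and_await(worker, stop, timeout=2.0)
```

As in the Guava version, a task that ignores the cancellation signal can still cause the second wait to time out, in which case the function returns `False`.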
insert
|
def insert(self: Self, key: Key, value: Value) -> bool:
"""
Insert a value into the cache.
Args:
key (Key): The key to insert.
value (Value): The value to associate with the key.
Returns:
bool: True if the value was inserted, False if the key already exists.
"""
|
Insert a value into the cache.
Args:
key (Key): The key to insert.
value (Value): The value to associate with the key.
Returns:
bool: True if the value was inserted, False if the key already exists.
|
python
|
torch/_inductor/cache.py
| 53
|
[
"self",
"key",
"value"
] |
bool
| true
| 1
| 6.88
|
pytorch/pytorch
| 96,034
|
google
| false
|
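A minimal in-memory implementation of the insert contract above (the real method is abstract; this dict-backed version is only a sketch):

```python
class DictCache:
    """Sketch of the cache interface: insert never overwrites an existing key."""

    def __init__(self):
        self._store = {}

    def insert(self, key, value):
        # Return False (and leave the old value) if the key already exists
        if key in self._store:
            return False
        self._store[key] = value
        return True

cache = DictCache()
```

The insert-without-overwrite semantics mean a second `insert` for the same key is a no-op that reports `False`.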
createFloat
|
public static Float createFloat(final String str) {
if (str == null) {
return null;
}
return Float.valueOf(str);
}
|
Creates a {@link Float} from a {@link String}.
<p>
Returns {@code null} if the string is {@code null}.
</p>
@param str a {@link String} to convert, may be null.
@return converted {@link Float} (or null if the input is null).
@throws NumberFormatException if the value cannot be converted.
|
java
|
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
| 241
|
[
"str"
] |
Float
| true
| 2
| 7.92
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
registerCustomEditor
|
@Override
public void registerCustomEditor(@Nullable Class<?> requiredType, @Nullable String propertyPath, PropertyEditor propertyEditor) {
if (requiredType == null && propertyPath == null) {
throw new IllegalArgumentException("Either requiredType or propertyPath is required");
}
if (propertyPath != null) {
if (this.customEditorsForPath == null) {
this.customEditorsForPath = new LinkedHashMap<>(16);
}
this.customEditorsForPath.put(propertyPath, new CustomEditorHolder(propertyEditor, requiredType));
}
else {
if (this.customEditors == null) {
this.customEditors = new LinkedHashMap<>(16);
}
this.customEditors.put(requiredType, propertyEditor);
this.customEditorCache = null;
}
}
|
Register the given custom property editor for the given type and/or property path.
@param requiredType the type of the property (can be {@code null} if a property path is given)
@param propertyPath the path of the property (can be {@code null} if registering an editor for all properties of the given type)
@param propertyEditor the editor to register
|
java
|
spring-beans/src/main/java/org/springframework/beans/PropertyEditorRegistrySupport.java
| 304
|
[
"requiredType",
"propertyPath",
"propertyEditor"
] |
void
| true
| 6
| 6.88
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
get_deterministic_debug_mode
|
def get_deterministic_debug_mode() -> builtins.int:
r"""Returns the current value of the debug mode for deterministic
operations. Refer to :func:`torch.set_deterministic_debug_mode`
documentation for more details.
"""
if _C._get_deterministic_algorithms():
if _C._get_deterministic_algorithms_warn_only():
return 1
else:
return 2
else:
return 0
|
r"""Returns the current value of the debug mode for deterministic
operations. Refer to :func:`torch.set_deterministic_debug_mode`
documentation for more details.
|
python
|
torch/__init__.py
| 1,578
|
[] |
builtins.int
| true
| 5
| 6.4
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
indices_to_mask
|
def indices_to_mask(indices, mask_length):
"""Convert list of indices to boolean mask.
Parameters
----------
indices : list-like
List of integers treated as indices.
mask_length : int
Length of boolean mask to be generated.
This parameter must be greater than max(indices).
Returns
-------
mask : 1d boolean nd-array
Boolean array that is True where indices are present, else False.
Examples
--------
>>> from sklearn.utils._mask import indices_to_mask
>>> indices = [1, 2 , 3, 4]
>>> indices_to_mask(indices, 5)
array([False, True, True, True, True])
"""
if mask_length <= np.max(indices):
raise ValueError("mask_length must be greater than max(indices)")
mask = np.zeros(mask_length, dtype=bool)
mask[indices] = True
return mask
|
Convert list of indices to boolean mask.
Parameters
----------
indices : list-like
List of integers treated as indices.
mask_length : int
Length of boolean mask to be generated.
This parameter must be greater than max(indices).
Returns
-------
mask : 1d boolean nd-array
Boolean array that is True where indices are present, else False.
Examples
--------
>>> from sklearn.utils._mask import indices_to_mask
>>> indices = [1, 2 , 3, 4]
>>> indices_to_mask(indices, 5)
array([False, True, True, True, True])
|
python
|
sklearn/utils/_mask.py
| 152
|
[
"indices",
"mask_length"
] | false
| 2
| 7.52
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
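The same conversion without NumPy (a list-based sketch; the real helper returns a boolean ndarray):

```python
def indices_to_mask_py(indices, mask_length):
    """Convert a list of indices to a boolean mask of the given length."""
    if mask_length <= max(indices):
        raise ValueError("mask_length must be greater than max(indices)")
    mask = [False] * mask_length
    for i in indices:
        mask[i] = True
    return mask
```

For the docstring's example, `indices_to_mask_py([1, 2, 3, 4], 5)` gives `[False, True, True, True, True]`.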
toString
|
function toString(value) {
return value == null ? '' : baseToString(value);
}
|
Converts `value` to a string. An empty string is returned for `null`
and `undefined` values. The sign of `-0` is preserved.
@static
@memberOf _
@since 4.0.0
@category Lang
@param {*} value The value to convert.
@returns {string} Returns the converted string.
@example
_.toString(null);
// => ''
_.toString(-0);
// => '-0'
_.toString([1, 2, 3]);
// => '1,2,3'
|
javascript
|
lodash.js
| 12,664
|
[
"value"
] | false
| 2
| 7.6
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
getPropertyValue
|
private @Nullable Object getPropertyValue(Object obj) {
// If a nested property cannot be read, simply return null
// (similar to JSTL EL). If the property doesn't exist in the
// first place, let the exception through.
try {
BeanWrapperImpl beanWrapper = new BeanWrapperImpl(false);
beanWrapper.setWrappedInstance(obj);
return beanWrapper.getPropertyValue(this.sortDefinition.getProperty());
}
catch (BeansException ex) {
logger.debug("PropertyComparator could not access property - treating as null for sorting", ex);
return null;
}
}
|
Get the SortDefinition's property value for the given object.
@param obj the object to get the property value for
@return the property value
|
java
|
spring-beans/src/main/java/org/springframework/beans/support/PropertyComparator.java
| 111
|
[
"obj"
] |
Object
| true
| 2
| 7.92
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
forMap
|
public static <K extends @Nullable Object, V extends @Nullable Object> Function<K, V> forMap(
Map<K, ? extends V> map, @ParametricNullness V defaultValue) {
return new ForMapWithDefault<>(map, defaultValue);
}
|
Returns a function which performs a map lookup with a default value. The function created by
this method returns {@code defaultValue} for all inputs that do not belong to the map's key
set. See also {@link #forMap(Map)}, which throws an exception in this case.
<p>Prefer to write the lambda expression {@code k -> map.getOrDefault(k, defaultValue)}
instead. Note that it is not serializable unless you explicitly make it {@link Serializable},
typically by writing {@code (Function<K, V> & Serializable) k -> map.getOrDefault(k,
defaultValue)}.
@param map source map that determines the function behavior
@param defaultValue the value to return for inputs that aren't map keys
@return function that returns {@code map.get(a)} when {@code a} is a key, or {@code
defaultValue} otherwise
|
java
|
android/guava/src/com/google/common/base/Functions.java
| 147
|
[
"map",
"defaultValue"
] | true
| 1
| 6.64
|
google/guava
| 51,352
|
javadoc
| false
|
|
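Python's `dict.get` plays the role of `ForMapWithDefault`; the lambda form the Javadoc recommends translates directly (names here are illustrative):

```python
mapping = {"a": 1, "b": 2}
default_value = 0

# Equivalent of Functions.forMap(map, defaultValue): returns the mapped
# value for keys in the map, and default_value for everything else.
lookup = lambda k: mapping.get(k, default_value)
```

As with the Java version, the default is returned only for missing keys; a key explicitly mapped to a falsy value is still returned as-is.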
get_serverless_dashboard_url
|
def get_serverless_dashboard_url(
*,
aws_conn_id: str | None = None,
emr_serverless_client: boto3.client = None,
application_id: str,
job_run_id: str,
) -> ParseResult | None:
"""
Retrieve the URL to EMR Serverless dashboard.
The URL is a one-use, ephemeral link that expires in 1 hour and is accessible without authentication.
Either an AWS connection ID or existing EMR Serverless client must be passed.
If the connection ID is passed, a client is generated using that connection.
"""
if not exactly_one(aws_conn_id, emr_serverless_client):
raise AirflowException("Requires either an AWS connection ID or an EMR Serverless Client.")
if aws_conn_id:
# If get_dashboard_for_job_run fails for whatever reason, fail after 1 attempt
# so that the rest of the links load in a reasonable time frame.
hook = EmrServerlessHook(aws_conn_id=aws_conn_id, config={"retries": {"total_max_attempts": 1}})
emr_serverless_client = hook.conn
response = emr_serverless_client.get_dashboard_for_job_run(
applicationId=application_id, jobRunId=job_run_id
)
if "url" not in response:
return None
log_uri = urlparse(response["url"])
return log_uri
|
Retrieve the URL to EMR Serverless dashboard.
The URL is a one-use, ephemeral link that expires in 1 hour and is accessible without authentication.
Either an AWS connection ID or existing EMR Serverless client must be passed.
If the connection ID is passed, a client is generated using that connection.
|
python
|
providers/amazon/src/airflow/providers/amazon/aws/links/emr.py
| 63
|
[
"aws_conn_id",
"emr_serverless_client",
"application_id",
"job_run_id"
] |
ParseResult | None
| true
| 4
| 6
|
apache/airflow
| 43,597
|
unknown
| false
|
indexOf
|
int indexOf(Advisor advisor);
|
Return the index (from 0) of the given advisor,
or -1 if no such advisor applies to this proxy.
<p>The return value of this method can be used to index into the advisors array.
@param advisor the advisor to search for
@return index from 0 of this advisor, or -1 if there's no such advisor
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/Advised.java
| 166
|
[
"advisor"
] | true
| 1
| 6.8
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
predict
|
def predict(self, X):
"""
Predict regression target for X.
The predicted regression target of an input sample is computed as the
mean predicted regression targets of the trees in the forest.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to
``dtype=np.float32``. If a sparse matrix is provided, it will be
converted into a sparse ``csr_matrix``.
Returns
-------
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted values.
"""
check_is_fitted(self)
# Check data
X = self._validate_X_predict(X)
# Assign chunk of trees to jobs
n_jobs, _, _ = _partition_estimators(self.n_estimators, self.n_jobs)
# avoid storing the output of every estimator by summing them here
if self.n_outputs_ > 1:
y_hat = np.zeros((X.shape[0], self.n_outputs_), dtype=np.float64)
else:
y_hat = np.zeros((X.shape[0]), dtype=np.float64)
# Parallel loop
lock = threading.Lock()
Parallel(n_jobs=n_jobs, verbose=self.verbose, require="sharedmem")(
delayed(_accumulate_prediction)(e.predict, X, [y_hat], lock)
for e in self.estimators_
)
y_hat /= len(self.estimators_)
return y_hat
|
Predict regression target for X.
The predicted regression target of an input sample is computed as the
mean predicted regression targets of the trees in the forest.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to
``dtype=np.float32``. If a sparse matrix is provided, it will be
converted into a sparse ``csr_matrix``.
Returns
-------
y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted values.
|
python
|
sklearn/ensemble/_forest.py
| 1,044
|
[
"self",
"X"
] | false
| 3
| 6.08
|
scikit-learn/scikit-learn
| 64,340
|
numpy
| false
|
|
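The `predict` row above averages per-tree regression outputs by summing into one shared buffer and dividing once at the end. Here is a minimal sketch of that accumulate-then-average pattern, with plain functions standing in for fitted trees (the names `accumulate_prediction` and `forest_predict` are illustrative, not scikit-learn's API):

```python
import numpy as np

def accumulate_prediction(predict, X, out):
    # Sum this estimator's prediction into the shared output buffer,
    # mirroring the sharedmem accumulation in the row above.
    out += predict(X)

def forest_predict(estimators, X):
    X = np.asarray(X, dtype=np.float64)
    y_hat = np.zeros(X.shape[0], dtype=np.float64)
    for predict in estimators:
        accumulate_prediction(predict, X, y_hat)
    y_hat /= len(estimators)  # mean over estimators, done once at the end
    return y_hat

# Two toy "trees": one predicts 2x, the other 4x; the forest predicts 3x.
estimators = [lambda X: X[:, 0] * 2.0, lambda X: X[:, 0] * 4.0]
print(forest_predict(estimators, [[1.0], [2.0]]))
```

Summing into a single preallocated buffer avoids materializing every estimator's output at once, which is the point of `_accumulate_prediction` in the source.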
visitImportCallExpression
|
function visitImportCallExpression(node: ImportCall): Expression {
// import("./blah")
// emit as
// System.register([], function (_export, _context) {
// return {
// setters: [],
// execute: () => {
// _context.import('./blah');
// }
// };
// });
const externalModuleName = getExternalModuleNameLiteral(factory, node, currentSourceFile, host, resolver, compilerOptions);
const firstArgument = visitNode(firstOrUndefined(node.arguments), visitor, isExpression);
// Only use the external module name if it differs from the first argument. This allows us to preserve the quote style of the argument on output.
const argument = externalModuleName && (!firstArgument || !isStringLiteral(firstArgument) || firstArgument.text !== externalModuleName.text) ? externalModuleName : firstArgument;
return factory.createCallExpression(
factory.createPropertyAccessExpression(
contextObject,
factory.createIdentifier("import"),
),
/*typeArguments*/ undefined,
argument ? [argument] : [],
);
}
|
Visit nodes to flatten destructuring assignments to exported symbols.
@param node The node to visit.
|
typescript
|
src/compiler/transformers/module/system.ts
| 1,613
|
[
"node"
] | true
| 6
| 6.72
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
getConfigurationProperty
|
@Override
public @Nullable ConfigurationProperty getConfigurationProperty(@Nullable ConfigurationPropertyName name) {
if (name == null) {
return null;
}
for (PropertyMapper mapper : this.mappers) {
try {
for (String candidate : mapper.map(name)) {
Object value = getPropertySourceProperty(candidate);
if (value != null) {
Origin origin = PropertySourceOrigin.get(this.propertySource, candidate);
return ConfigurationProperty.of(this, name, value, origin);
}
}
}
catch (Exception ex) {
// Ignore
}
}
return null;
}
|
Create a new {@link SpringConfigurationPropertySource} implementation.
@param propertySource the source property source
@param systemEnvironmentSource if the source is from the system environment
@param mappers the property mappers
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/SpringConfigurationPropertySource.java
| 84
|
[
"name"
] |
ConfigurationProperty
| true
| 4
| 6.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
copyOf
|
@IgnoreJRERequirement // Users will use this only if they're already using streams.
public static ImmutableLongArray copyOf(LongStream stream) {
// Note this uses very different growth behavior from copyOf(Iterable) and the builder.
long[] array = stream.toArray();
return (array.length == 0) ? EMPTY : new ImmutableLongArray(array);
}
|
Returns an immutable array containing all the values from {@code stream}, in order.
@since 33.4.0 (but since 22.0 in the JRE flavor)
|
java
|
android/guava/src/com/google/common/primitives/ImmutableLongArray.java
| 175
|
[
"stream"
] |
ImmutableLongArray
| true
| 2
| 6
|
google/guava
| 51,352
|
javadoc
| false
|
count_nonzero
|
def count_nonzero(a, axis=None, *, keepdims=False):
"""
Counts the number of non-zero values in the array ``a``.
The word "non-zero" is in reference to the Python 2.x
built-in method ``__nonzero__()`` (renamed ``__bool__()``
in Python 3.x) of Python objects that tests an object's
"truthfulness". For example, any number is considered
truthful if it is nonzero, whereas any string is considered
truthful if it is not the empty string. Thus, this function
(recursively) counts how many elements in ``a`` (and in
sub-arrays thereof) have their ``__nonzero__()`` or ``__bool__()``
method evaluated to ``True``.
Parameters
----------
a : array_like
The array for which to count non-zeros.
axis : int or tuple, optional
Axis or tuple of axes along which to count non-zeros.
Default is None, meaning that non-zeros will be counted
along a flattened version of ``a``.
keepdims : bool, optional
If this is set to True, the axes that are counted are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
Returns
-------
count : int or array of int
Number of non-zero values in the array along a given axis.
Otherwise, the total number of non-zero values in the array
is returned.
See Also
--------
nonzero : Return the coordinates of all the non-zero values.
Examples
--------
>>> import numpy as np
>>> np.count_nonzero(np.eye(4))
np.int64(4)
>>> a = np.array([[0, 1, 7, 0],
... [3, 0, 2, 19]])
>>> np.count_nonzero(a)
np.int64(5)
>>> np.count_nonzero(a, axis=0)
array([1, 1, 2, 1])
>>> np.count_nonzero(a, axis=1)
array([2, 3])
>>> np.count_nonzero(a, axis=1, keepdims=True)
array([[2],
[3]])
"""
if axis is None and not keepdims:
return multiarray.count_nonzero(a)
a = asanyarray(a)
# TODO: this works around .astype(bool) not working properly (gh-9847)
if np.issubdtype(a.dtype, np.character):
a_bool = a != a.dtype.type()
else:
a_bool = a.astype(np.bool, copy=False)
return a_bool.sum(axis=axis, dtype=np.intp, keepdims=keepdims)
|
Counts the number of non-zero values in the array ``a``.
The word "non-zero" is in reference to the Python 2.x
built-in method ``__nonzero__()`` (renamed ``__bool__()``
in Python 3.x) of Python objects that tests an object's
"truthfulness". For example, any number is considered
truthful if it is nonzero, whereas any string is considered
truthful if it is not the empty string. Thus, this function
(recursively) counts how many elements in ``a`` (and in
sub-arrays thereof) have their ``__nonzero__()`` or ``__bool__()``
method evaluated to ``True``.
Parameters
----------
a : array_like
The array for which to count non-zeros.
axis : int or tuple, optional
Axis or tuple of axes along which to count non-zeros.
Default is None, meaning that non-zeros will be counted
along a flattened version of ``a``.
keepdims : bool, optional
If this is set to True, the axes that are counted are left
in the result as dimensions with size one. With this option,
the result will broadcast correctly against the input array.
Returns
-------
count : int or array of int
Number of non-zero values in the array along a given axis.
Otherwise, the total number of non-zero values in the array
is returned.
See Also
--------
nonzero : Return the coordinates of all the non-zero values.
Examples
--------
>>> import numpy as np
>>> np.count_nonzero(np.eye(4))
np.int64(4)
>>> a = np.array([[0, 1, 7, 0],
... [3, 0, 2, 19]])
>>> np.count_nonzero(a)
np.int64(5)
>>> np.count_nonzero(a, axis=0)
array([1, 1, 2, 1])
>>> np.count_nonzero(a, axis=1)
array([2, 3])
>>> np.count_nonzero(a, axis=1, keepdims=True)
array([[2],
[3]])
|
python
|
numpy/_core/numeric.py
| 484
|
[
"a",
"axis",
"keepdims"
] | false
| 5
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
match
|
@Override
public boolean match(E endpoint) {
if (!this.endpointType.isInstance(endpoint)) {
// Leave non-matching types for other filters
return true;
}
return match(endpoint.getEndpointId());
}
|
Create a new {@link IncludeExcludeEndpointFilter} with specific include/exclude
rules.
@param endpointType the endpoint type that should be considered (other types always
match)
@param include the include patterns
@param exclude the exclude patterns
@param defaultIncludes the default {@code includes} to use when none are specified.
|
java
|
module/spring-boot-actuator-autoconfigure/src/main/java/org/springframework/boot/actuate/autoconfigure/endpoint/expose/IncludeExcludeEndpointFilter.java
| 110
|
[
"endpoint"
] | true
| 2
| 6.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
fuzz_tensor
|
def fuzz_tensor(
size: tuple[int, ...] | None = None,
stride: tuple[int, ...] | None = None,
dtype: torch.dtype | None = None,
seed: int | None = None,
) -> tuple[torch.Tensor, int]:
"""
Create a tensor with fuzzed size, stride, and dtype.
Args:
size: Tensor shape. If None, will be randomly generated.
stride: Tensor stride. If None, will be randomly generated based on size.
dtype: Tensor data type. If None, will be randomly generated.
seed: Random seed for reproducibility. If None, will be randomly generated.
Returns:
Tuple[torch.Tensor, int]: A tuple of (tensor, seed_used) where tensor has
the specified or randomly generated properties, and seed_used is the seed
that was used for generation (for reproducibility).
"""
# Generate or use provided seed
if seed is None:
seed = random.randint(0, 2**32 - 1)
# Create a local Random instance to avoid interfering with global state
local_random = random.Random(seed)
# Set the torch random seed for reproducibility
# Save and restore global torch state to avoid side effects
torch_state = torch.get_rng_state()
torch.manual_seed(seed)
# Generate random values if not provided using local random instance
old_random_state = random.getstate()
try:
# Temporarily use local random instance for deterministic generation
random.setstate(local_random.getstate())
if size is None:
size = fuzz_tensor_size()
if dtype is None:
dtype = fuzz_torch_tensor_type("default")
if stride is None:
stride = fuzz_valid_stride(size)
# Handle empty tensor case
if len(size) == 0:
return torch.ones((), dtype=dtype), seed
# Calculate required storage size for the custom stride
required_storage = _compute_storage_size_needed(size, stride)
# Create base tensor with sufficient storage
if FuzzerConfig.use_real_values:
# Use random values based on dtype
if dtype.is_floating_point:
base_tensor = torch.randn(required_storage, dtype=dtype)
elif dtype in [torch.complex64, torch.complex128]:
# Create complex tensor with random real and imaginary parts
real_part = torch.randn(
required_storage,
dtype=torch.float32 if dtype == torch.complex64 else torch.float64,
)
imag_part = torch.randn(
required_storage,
dtype=torch.float32 if dtype == torch.complex64 else torch.float64,
)
base_tensor = torch.complex(real_part, imag_part).to(dtype)
elif dtype == torch.bool:
base_tensor = torch.randint(0, 2, (required_storage,), dtype=torch.bool)
else: # integer types
base_tensor = torch.randint(-100, 100, (required_storage,), dtype=dtype)
else:
# Use ones (default behavior)
base_tensor = torch.ones(required_storage, dtype=dtype)
# Create strided tensor view
strided_tensor = torch.as_strided(base_tensor, size, stride)
return strided_tensor, seed
finally:
# Restore original random state
random.setstate(old_random_state)
# Restore original torch state
torch.set_rng_state(torch_state)
|
Create a tensor with fuzzed size, stride, and dtype.
Args:
size: Tensor shape. If None, will be randomly generated.
stride: Tensor stride. If None, will be randomly generated based on size.
dtype: Tensor data type. If None, will be randomly generated.
seed: Random seed for reproducibility. If None, will be randomly generated.
Returns:
Tuple[torch.Tensor, int]: A tuple of (tensor, seed_used) where tensor has
the specified or randomly generated properties, and seed_used is the seed
that was used for generation (for reproducibility).
|
python
|
tools/experimental/torchfuzz/tensor_fuzzer.py
| 336
|
[
"size",
"stride",
"dtype",
"seed"
] |
tuple[torch.Tensor, int]
| true
| 14
| 8
|
pytorch/pytorch
| 96,034
|
google
| false
|
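`fuzz_tensor` above relies on a helper, `_compute_storage_size_needed`, that is referenced but not shown. A strided view touches offsets up to `sum((size[i] - 1) * stride[i])`, so the backing buffer needs one element more than that maximum offset. A hedged sketch of what that helper presumably computes:

```python
def storage_size_needed(size, stride):
    # Empty shape means a scalar view, which still needs one element.
    if len(size) == 0:
        return 1
    # Largest reachable offset plus one element at that offset.
    return 1 + sum((s - 1) * st for s, st in zip(size, stride))

# A contiguous 2x3 tensor (strides (3, 1)) needs exactly 6 elements:
print(storage_size_needed((2, 3), (3, 1)))  # 6
# An overlapping view can need fewer elements than its element count:
print(storage_size_needed((4, 2), (1, 1)))  # 5
```

The second call shows why the helper matters: with overlapping strides, `as_strided` needs less storage than `prod(size)` would suggest.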
toArray
|
public char[] @Nullable [] toArray() {
char[][] result = new char[max + 1][];
for (Entry<Character, String> entry : map.entrySet()) {
result[entry.getKey()] = entry.getValue().toCharArray();
}
return result;
}
|
Convert this builder into an array of char[]s where the maximum index is the value of the
highest character that has been seen. The array will be sparse in the sense that any unseen
index will default to null.
@return a "sparse" array that holds the replacement mappings.
|
java
|
android/guava/src/com/google/common/escape/CharEscaperBuilder.java
| 110
|
[] | true
| 1
| 7.04
|
google/guava
| 51,352
|
javadoc
| false
|
|
randomUuid
|
public static Uuid randomUuid() {
Uuid uuid = unsafeRandomUuid();
while (RESERVED.contains(uuid) || uuid.toString().startsWith("-")) {
uuid = unsafeRandomUuid();
}
return uuid;
}
|
Static factory to retrieve a type 4 (pseudo randomly generated) UUID.
This will not generate a UUID equal to 0, 1, or one whose string representation starts with a dash ("-")
|
java
|
clients/src/main/java/org/apache/kafka/common/Uuid.java
| 76
|
[] |
Uuid
| true
| 3
| 6.88
|
apache/kafka
| 31,560
|
javadoc
| false
|
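`randomUuid` above uses rejection sampling: re-draw until the candidate is neither in the reserved set nor "dash-prefixed". The same loop shape in Python, with a stand-in reserved set and a deliberately tiny range so rejections actually happen (`RESERVED` and the value range here are illustrative, not Kafka's):

```python
import random

def random_id(rng, reserved=frozenset({0, 1})):
    candidate = rng.randrange(-4, 8)
    # Re-draw until the candidate is neither reserved nor "dash-prefixed"
    # (negative numbers stringify with a leading '-', like the UUID case).
    while candidate in reserved or str(candidate).startswith("-"):
        candidate = rng.randrange(-4, 8)
    return candidate

print(random_id(random.Random(0)))  # some value in 2..7
```

With half the range rejected, the loop still terminates quickly in expectation; the Java version's acceptance rate is astronomically higher, so the retry loop is almost never taken.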
getExportEqualsLocalSymbol
|
function getExportEqualsLocalSymbol(importedSymbol: Symbol, checker: TypeChecker): Symbol | undefined {
if (importedSymbol.flags & SymbolFlags.Alias) {
return checker.getImmediateAliasedSymbol(importedSymbol);
}
const decl = Debug.checkDefined(importedSymbol.valueDeclaration);
if (isExportAssignment(decl)) { // `export = class {}`
return tryCast(decl.expression, canHaveSymbol)?.symbol;
}
else if (isBinaryExpression(decl)) { // `module.exports = class {}`
return tryCast(decl.right, canHaveSymbol)?.symbol;
}
else if (isSourceFile(decl)) { // json module
return decl.symbol;
}
return undefined;
}
|
Given a local reference, we might notice that it's an import/export and recursively search for references of that.
If at an import, look locally for the symbol it imports.
If at an export, look for all imports of it.
This doesn't handle export specifiers; that is done in `getReferencesAtExportSpecifier`.
@param comingFromExport If we are doing a search for all exports, don't bother looking backwards for the imported symbol, since that's the reason we're here.
@internal
|
typescript
|
src/services/importTracker.ts
| 702
|
[
"importedSymbol",
"checker"
] | true
| 7
| 6.4
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
is_unique
|
def is_unique(self) -> bool:
"""
Return True if values in the object are unique.
Returns
-------
bool
See Also
--------
Series.unique : Return unique values of Series object.
Series.drop_duplicates : Return Series with duplicate values removed.
Series.duplicated : Indicate duplicate Series values.
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.is_unique
True
>>> s = pd.Series([1, 2, 3, 1])
>>> s.is_unique
False
"""
return self.nunique(dropna=False) == len(self)
|
Return True if values in the object are unique.
Returns
-------
bool
See Also
--------
Series.unique : Return unique values of Series object.
Series.drop_duplicates : Return Series with duplicate values removed.
Series.duplicated : Indicate duplicate Series values.
Examples
--------
>>> s = pd.Series([1, 2, 3])
>>> s.is_unique
True
>>> s = pd.Series([1, 2, 3, 1])
>>> s.is_unique
False
|
python
|
pandas/core/base.py
| 1,155
|
[
"self"
] |
bool
| true
| 1
| 6.24
|
pandas-dev/pandas
| 47,362
|
unknown
| false
|
doPrivileged
|
<T> T doPrivileged(PrivilegedAction<T> action);
|
Performs the specified {@code PrivilegedAction} with privileges
enabled. The action is performed with <i>all</i> of the permissions
possessed by the caller's protection domain.
<p> If the action's {@code run} method throws an (unchecked)
exception, it will propagate through this method.
<p> Note that any DomainCombiner associated with the current
AccessControlContext will be ignored while the action is performed.
@param <T> the type of the value returned by the PrivilegedAction's
{@code run} method.
@param action the action to be performed.
@return the value returned by the action's {@code run} method.
@exception NullPointerException if the action is {@code null}
@see java.security.AccessController#doPrivileged(PrivilegedAction)
|
java
|
clients/src/main/java/org/apache/kafka/common/internals/SecurityManagerCompatibility.java
| 62
|
[
"action"
] |
T
| true
| 1
| 6.16
|
apache/kafka
| 31,560
|
javadoc
| false
|
checkedCast
|
public static int checkedCast(long value) {
int result = (int) value;
checkArgument(result == value, "Out of range: %s", value);
return result;
}
|
Returns the {@code int} value that is equal to {@code value}, if possible.
<p><b>Note:</b> this method is now unnecessary and should be treated as deprecated. Use {@link
Math#toIntExact(long)} instead, but be aware that that method throws {@link
ArithmeticException} rather than {@link IllegalArgumentException}.
@param value any value in the range of the {@code int} type
@return the {@code int} value that equals {@code value}
@throws IllegalArgumentException if {@code value} is greater than {@link Integer#MAX_VALUE} or
less than {@link Integer#MIN_VALUE}
|
java
|
android/guava/src/com/google/common/primitives/Ints.java
| 93
|
[
"value"
] | true
| 1
| 6.4
|
google/guava
| 51,352
|
javadoc
| false
|
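`checkedCast` above accepts a `long` only if the round-trip through `int` is lossless. A sketch of the same check in Python, where the 32-bit reinterpretation has to be done by hand (`checked_cast_i32` is an illustrative name, not a real library function):

```python
def checked_cast_i32(value):
    # Truncate to the low 32 bits, then reinterpret as signed 32-bit.
    result = value & 0xFFFFFFFF
    if result >= 2**31:
        result -= 2**32
    # Lossless round-trip means the value was in range; otherwise fail loudly.
    if result != value:
        raise ValueError(f"Out of range: {value}")
    return result

print(checked_cast_i32(-5))          # -5
print(checked_cast_i32(2**31 - 1))   # 2147483647
```

This mirrors the Java idiom: perform the narrowing cast, then compare against the original rather than range-checking up front.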
|
isListElement
|
function isListElement(parent: Node, node: Node): boolean {
switch (parent.kind) {
case SyntaxKind.ClassDeclaration:
case SyntaxKind.InterfaceDeclaration:
return rangeContainsRange((parent as InterfaceDeclaration).members, node);
case SyntaxKind.ModuleDeclaration:
const body = (parent as ModuleDeclaration).body;
return !!body && body.kind === SyntaxKind.ModuleBlock && rangeContainsRange(body.statements, node);
case SyntaxKind.SourceFile:
case SyntaxKind.Block:
case SyntaxKind.ModuleBlock:
return rangeContainsRange((parent as Block).statements, node);
case SyntaxKind.CatchClause:
return rangeContainsRange((parent as CatchClause).block.statements, node);
}
return false;
}
|
Finds the highest node enclosing `node` at the same list level as `node`
and whose end does not exceed `node.end`.
Consider typing the following
```
let x = 1;
while (true) {
}
```
Upon typing the closing curly, we want to format the entire `while`-statement, but not the preceding
variable declaration.
|
typescript
|
src/services/formatting/formatting.ts
| 290
|
[
"parent",
"node"
] | true
| 3
| 7.12
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
append
|
public ConfigurationPropertyName append(@Nullable String suffix) {
if (!StringUtils.hasLength(suffix)) {
return this;
}
Elements additionalElements = probablySingleElementOf(suffix);
return new ConfigurationPropertyName(this.elements.append(additionalElements));
}
|
Create a new {@link ConfigurationPropertyName} by appending the given suffix.
@param suffix the elements to append
@return a new {@link ConfigurationPropertyName}
@throws InvalidConfigurationPropertyNameException if the result is not valid
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java
| 213
|
[
"suffix"
] |
ConfigurationPropertyName
| true
| 2
| 7.12
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
registerBeanDefinition
|
public static void registerBeanDefinition(
BeanDefinitionHolder definitionHolder, BeanDefinitionRegistry registry)
throws BeanDefinitionStoreException {
// Register bean definition under primary name.
String beanName = definitionHolder.getBeanName();
registry.registerBeanDefinition(beanName, definitionHolder.getBeanDefinition());
// Register aliases for bean name, if any.
String[] aliases = definitionHolder.getAliases();
if (aliases != null) {
for (String alias : aliases) {
registry.registerAlias(beanName, alias);
}
}
}
|
Register the given bean definition with the given bean factory.
@param definitionHolder the bean definition including name and aliases
@param registry the bean factory to register with
@throws BeanDefinitionStoreException if registration failed
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/BeanDefinitionReaderUtils.java
| 158
|
[
"definitionHolder",
"registry"
] |
void
| true
| 2
| 6.24
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
buildProxy
|
private Object buildProxy(Class<?> beanClass, @Nullable String beanName,
Object @Nullable [] specificInterceptors, TargetSource targetSource, boolean classOnly) {
if (this.beanFactory instanceof ConfigurableListableBeanFactory clbf) {
AutoProxyUtils.exposeTargetClass(clbf, beanName, beanClass);
}
ProxyFactory proxyFactory = new ProxyFactory();
proxyFactory.copyFrom(this);
proxyFactory.setFrozen(false);
if (shouldProxyTargetClass(beanClass, beanName)) {
proxyFactory.setProxyTargetClass(true);
}
else {
Class<?>[] ifcs = (this.beanFactory instanceof ConfigurableListableBeanFactory clbf ?
AutoProxyUtils.determineExposedInterfaces(clbf, beanName) : null);
if (ifcs != null) {
proxyFactory.setProxyTargetClass(false);
for (Class<?> ifc : ifcs) {
proxyFactory.addInterface(ifc);
}
}
if (ifcs != null ? ifcs.length == 0 : !proxyFactory.isProxyTargetClass()) {
evaluateProxyInterfaces(beanClass, proxyFactory);
}
}
if (proxyFactory.isProxyTargetClass()) {
// Explicit handling of JDK proxy targets and lambdas (for introduction advice scenarios)
if (Proxy.isProxyClass(beanClass) || ClassUtils.isLambdaClass(beanClass)) {
// Must allow for introductions; can't just set interfaces to the proxy's interfaces only.
for (Class<?> ifc : beanClass.getInterfaces()) {
proxyFactory.addInterface(ifc);
}
}
}
Advisor[] advisors = buildAdvisors(beanName, specificInterceptors);
proxyFactory.addAdvisors(advisors);
proxyFactory.setTargetSource(targetSource);
customizeProxyFactory(proxyFactory);
proxyFactory.setFrozen(isFrozen());
if (advisorsPreFiltered()) {
proxyFactory.setPreFiltered(true);
}
// Use original ClassLoader if bean class not locally loaded in overriding class loader
ClassLoader classLoader = getProxyClassLoader();
if (classLoader instanceof SmartClassLoader smartClassLoader && classLoader != beanClass.getClassLoader()) {
classLoader = smartClassLoader.getOriginalClassLoader();
}
return (classOnly ? proxyFactory.getProxyClass(classLoader) : proxyFactory.getProxy(classLoader));
}
|
Create an AOP proxy for the given bean.
@param beanClass the class of the bean
@param beanName the name of the bean
@param specificInterceptors the set of interceptors that is
specific to this bean (may be empty, but not null)
@param targetSource the TargetSource for the proxy,
already pre-configured to access the bean
@return the AOP proxy for the bean
@see #buildAdvisors
|
java
|
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/AbstractAutoProxyCreator.java
| 440
|
[
"beanClass",
"beanName",
"specificInterceptors",
"targetSource",
"classOnly"
] |
Object
| true
| 14
| 7.92
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
ofInnerBean
|
public static RegisteredBean ofInnerBean(RegisteredBean parent,
@Nullable String innerBeanName, BeanDefinition innerBeanDefinition) {
Assert.notNull(parent, "'parent' must not be null");
Assert.notNull(innerBeanDefinition, "'innerBeanDefinition' must not be null");
InnerBeanResolver resolver = new InnerBeanResolver(parent, innerBeanName, innerBeanDefinition);
Supplier<String> beanName = (StringUtils.hasLength(innerBeanName) ?
() -> innerBeanName : resolver::resolveBeanName);
return new RegisteredBean(parent.getBeanFactory(), beanName,
innerBeanName == null, resolver::resolveMergedBeanDefinition, parent);
}
|
Create a new {@link RegisteredBean} instance for an inner-bean.
@param parent the parent of the inner-bean
@param innerBeanName the name of the inner bean or {@code null} to
generate a name
@param innerBeanDefinition the inner-bean definition
@return a new {@link RegisteredBean} instance
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/RegisteredBean.java
| 131
|
[
"parent",
"innerBeanName",
"innerBeanDefinition"
] |
RegisteredBean
| true
| 2
| 7.6
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
factorize_array
|
def factorize_array(
values: np.ndarray,
use_na_sentinel: bool = True,
size_hint: int | None = None,
na_value: object = None,
mask: npt.NDArray[np.bool_] | None = None,
) -> tuple[npt.NDArray[np.intp], np.ndarray]:
"""
Factorize a numpy array to codes and uniques.
This doesn't do any coercion of types or unboxing before factorization.
Parameters
----------
values : ndarray
use_na_sentinel : bool, default True
If True, the sentinel -1 will be used for NaN values. If False,
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.
size_hint : int, optional
Passed through to the hashtable's 'get_labels' method
na_value : object, optional
A value in `values` to consider missing. Note: only use this
parameter when you know that you don't have any values pandas would
consider missing in the array (NaN for float data, iNaT for
datetimes, etc.).
mask : ndarray[bool], optional
If not None, the mask is used as indicator for missing values
(True = missing, False = valid) instead of `na_value` or
condition "val != val".
Returns
-------
codes : ndarray[np.intp]
uniques : ndarray
"""
original = values
if values.dtype.kind in "mM":
# _get_hashtable_algo will cast dt64/td64 to i8 via _ensure_data, so we
# need to do the same to na_value. We are assuming here that the passed
# na_value is an appropriately-typed NaT.
# e.g. test_where_datetimelike_categorical
na_value = iNaT
hash_klass, values = _get_hashtable_algo(values)
table = hash_klass(size_hint or len(values))
uniques, codes = table.factorize(
values,
na_sentinel=-1,
na_value=na_value,
mask=mask,
ignore_na=use_na_sentinel,
)
# re-cast e.g. i8->dt64/td64, uint8->bool
uniques = _reconstruct_data(uniques, original.dtype, original)
codes = ensure_platform_int(codes)
return codes, uniques
|
Factorize a numpy array to codes and uniques.
This doesn't do any coercion of types or unboxing before factorization.
Parameters
----------
values : ndarray
use_na_sentinel : bool, default True
If True, the sentinel -1 will be used for NaN values. If False,
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.
size_hint : int, optional
Passed through to the hashtable's 'get_labels' method
na_value : object, optional
A value in `values` to consider missing. Note: only use this
parameter when you know that you don't have any values pandas would
consider missing in the array (NaN for float data, iNaT for
datetimes, etc.).
mask : ndarray[bool], optional
If not None, the mask is used as indicator for missing values
(True = missing, False = valid) instead of `na_value` or
condition "val != val".
Returns
-------
codes : ndarray[np.intp]
uniques : ndarray
|
python
|
pandas/core/algorithms.py
| 594
|
[
"values",
"use_na_sentinel",
"size_hint",
"na_value",
"mask"
] |
tuple[npt.NDArray[np.intp], np.ndarray]
| true
| 3
| 6.8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
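The contract of `factorize_array` above — integer codes into an array of first-seen uniques, with `-1` as the NA sentinel — can be sketched in pure Python. This mirrors the return contract only, not pandas' hashtable implementation:

```python
import math

def factorize(values, use_na_sentinel=True):
    codes, uniques, seen = [], [], {}
    for v in values:
        is_na = isinstance(v, float) and math.isnan(v)
        if is_na and use_na_sentinel:
            codes.append(-1)  # NA sentinel; value dropped from uniques
            continue
        # NaN != NaN, so key NaNs under a sentinel when they are kept.
        key = "__nan__" if is_na else v
        if key not in seen:
            seen[key] = len(uniques)  # first occurrence defines the code
            uniques.append(v)
        codes.append(seen[key])
    return codes, uniques

codes, uniques = factorize(["b", "a", "b", float("nan")])
print(codes, uniques)  # [0, 1, 0, -1] ['b', 'a']
```

Note the `use_na_sentinel=False` branch matches the docstring: NaN then gets an ordinary non-negative code and stays in the uniques.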
globMatch
|
public static boolean globMatch(String pattern, String str) {
if (pattern == null || str == null) {
return false;
}
int patternIndex = pattern.indexOf('*');
if (patternIndex == -1) {
// Nothing to glob
return pattern.equals(str);
}
if (patternIndex == 0) {
// If the pattern is a literal '*' then it matches any input
if (pattern.length() == 1) {
return true;
}
} else {
if (str.regionMatches(0, pattern, 0, patternIndex) == false) {
// If the pattern starts with a literal (i.e. not '*') then the input string must also start with that
return false;
}
if (patternIndex == pattern.length() - 1) {
// The pattern is "something*", so if the starting region matches, then the whole pattern matches
return true;
}
}
int strIndex = patternIndex;
while (strIndex < str.length()) {
assert pattern.charAt(patternIndex) == '*' : "Expected * at index " + patternIndex + " of [" + pattern + "]";
// skip over the '*'
patternIndex++;
if (patternIndex == pattern.length()) {
// The pattern ends in '*' (that is, "something*" or "*some*thing*", etc)
// Since we already matched everything up to the '*' we know the string matches (whatever is left over must match '*')
// so we're automatically done
return true;
}
// Look for the next '*'
int nextStar = pattern.indexOf('*', patternIndex);
while (nextStar == patternIndex) {
// Two (or more) stars in sequence, just skip the subsequent ones
patternIndex++;
nextStar = pattern.indexOf('*', patternIndex);
}
if (nextStar == -1) {
// We've come to the last '*' in a pattern (.e.g the 2nd one in "*some*thing")
// In this case we match if the input string ends in "thing" (but constrained by the current position)
final int len = pattern.length() - patternIndex;
final int strSuffixStart = str.length() - len;
if (strSuffixStart < strIndex) {
// The suffix would start before the current position. That means it's not a match
// e.g. "abc" is not a match for "ab*bc" even though "abc" does end with "bc"
return false;
}
return str.regionMatches(strSuffixStart, pattern, patternIndex, len);
} else {
// There is another star, with a literal in between the current position and that '*'
// That is, we have "*literal*"
// We want the first '*' to consume everything up until the first occurrence of "literal" in the input string
int match = str.indexOf(pattern.substring(patternIndex, nextStar), strIndex);
if (match == -1) {
// If "literal" isn't there, then the match fails.
return false;
}
// Move both index (pointer) values to the end of the literal
strIndex = match + (nextStar - patternIndex);
patternIndex = nextStar;
}
}
// We might have trailing '*'s in the pattern after completing a literal match at the end of the input string
// e.g. a glob of "el*ic*" matching "elastic" - we need to consume that last '*' without it matching anything
while (patternIndex < pattern.length() && pattern.charAt(patternIndex) == '*') {
patternIndex++;
}
// The match is successful only if we have consumed the entire pattern.
return patternIndex == pattern.length();
}
|
Match a String against the given pattern, supporting the following simple
pattern styles: "xxx*", "*xxx", "*xxx*" and "xxx*yyy" matches (with an
arbitrary number of pattern parts), as well as direct equality.
@param pattern the pattern to match against
@param str the String to match
@return whether the String matches the given pattern
|
java
|
libs/core/src/main/java/org/elasticsearch/core/Glob.java
| 28
|
[
"pattern",
"str"
] | true
| 16
| 6.64
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
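The matching loop above starts mid-function; the complete algorithm can be sketched as a short recursive Python port — a hypothetical translation for illustration, not the Elasticsearch source:

```python
def glob_match(pattern: str, s: str) -> bool:
    """Match s against pattern, where '*' matches any (possibly empty) run."""
    head, star, tail = pattern.partition('*')
    if not star:
        return pattern == s            # no '*' left: exact match required
    if not s.startswith(head):
        return False                   # literal prefix before the '*' must match
    rest = s[len(head):]
    # Let the '*' consume 0..len(rest) characters and recurse on the tail.
    return any(glob_match(tail, rest[i:]) for i in range(len(rest) + 1))
```

This trades the original's index-juggling for clarity; the iterative Java version avoids the worst-case backtracking cost.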
remainderIsNotAlphanumeric
|
private boolean remainderIsNotAlphanumeric(Elements elements, int element, int index) {
if (elements.getType(element).isIndexed()) {
return false;
}
int length = elements.getLength(element);
do {
char c = Character.toLowerCase(elements.charAt(element, index++));
if (ElementsParser.isAlphaNumeric(c)) {
return false;
}
}
while (index < length);
return true;
}
|
Returns {@code true} if this element is an ancestor (immediate or nested parent) of
the specified name.
@param name the name to check
@return {@code true} if this name is an ancestor
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java
| 497
|
[
"elements",
"element",
"index"
] | true
| 3
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
splitPreserveAllTokens
|
public static String[] splitPreserveAllTokens(final String str, final String separatorChars, final int max) {
return splitWorker(str, separatorChars, max, true);
}
|
Splits the provided text into an array with a maximum length, separators specified, preserving all tokens, including empty tokens created by adjacent
separators.
<p>
The separator is not included in the returned String array. Adjacent separators are treated as separators for empty tokens.
</p>
<p>
A {@code null} input String returns {@code null}. A {@code null} separatorChars splits on whitespace.
</p>
<p>
If more than {@code max} delimited substrings are found, the last returned string includes all characters after the first {@code max - 1} returned
strings (including separator characters).
</p>
<pre>
StringUtils.splitPreserveAllTokens(null, *, *) = null
StringUtils.splitPreserveAllTokens("", *, *) = []
StringUtils.splitPreserveAllTokens("ab de fg", null, 0)     = ["ab", "de", "fg"]
StringUtils.splitPreserveAllTokens("ab   de fg", null, 0)   = ["ab", "", "", "de", "fg"]
StringUtils.splitPreserveAllTokens("ab:cd:ef", ":", 0)      = ["ab", "cd", "ef"]
StringUtils.splitPreserveAllTokens("ab:cd:ef", ":", 2)      = ["ab", "cd:ef"]
StringUtils.splitPreserveAllTokens("ab   de fg", null, 2)   = ["ab", "  de fg"]
StringUtils.splitPreserveAllTokens("ab   de fg", null, 3)   = ["ab", "", " de fg"]
StringUtils.splitPreserveAllTokens("ab   de fg", null, 4)   = ["ab", "", "", "de fg"]
</pre>
@param str the String to parse, may be {@code null}.
@param separatorChars the characters used as the delimiters, {@code null} splits on whitespace.
@param max the maximum number of elements to include in the array. A zero or negative value implies no limit.
@return an array of parsed Strings, {@code null} if null String input.
@since 2.1
|
java
|
src/main/java/org/apache/commons/lang3/StringUtils.java
| 7,549
|
[
"str",
"separatorChars",
"max"
] | true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
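The preserve-all-tokens semantics can be approximated in Python with `re.split`, since plain `str.split()` collapses adjacent whitespace. A rough sketch, not the commons-lang implementation (here `max_len` means the maximum number of returned elements, matching the javadoc's `max`):

```python
import re

def split_preserve_all_tokens(s, separator_chars=None, max_len=0):
    """Split s on any of separator_chars, keeping empty tokens."""
    if s is None:
        return None
    chars = separator_chars if separator_chars is not None else " \t\n\r\f\v"
    pattern = "[" + re.escape(chars) + "]"
    # re.split's maxsplit counts splits, not elements; 0 means unlimited.
    maxsplit = max_len - 1 if max_len > 0 else 0
    return re.split(pattern, s, maxsplit=maxsplit)
```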
andThen
|
default FailableDoubleUnaryOperator<E> andThen(final FailableDoubleUnaryOperator<E> after) {
Objects.requireNonNull(after);
return (final double t) -> after.applyAsDouble(applyAsDouble(t));
}
|
Returns a composed {@link FailableDoubleUnaryOperator} like
{@link DoubleUnaryOperator#andThen(DoubleUnaryOperator)}.
@param after the operator to apply after this one.
@return a composed {@link FailableDoubleUnaryOperator} like
{@link DoubleUnaryOperator#andThen(DoubleUnaryOperator)}.
@throws NullPointerException if after is null.
@see #compose(FailableDoubleUnaryOperator)
|
java
|
src/main/java/org/apache/commons/lang3/function/FailableDoubleUnaryOperator.java
| 66
|
[
"after"
] | true
| 1
| 6
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
|
toflex
|
def toflex(self):
"""
Transforms a masked array into a flexible-type array.
The flexible type array that is returned will have two fields:
* the ``_data`` field stores the ``_data`` part of the array.
* the ``_mask`` field stores the ``_mask`` part of the array.
Parameters
----------
None
Returns
-------
record : ndarray
A new flexible-type `ndarray` with two fields: the first element
containing a value, the second element containing the corresponding
mask boolean. The returned record shape matches self.shape.
Notes
-----
A side-effect of transforming a masked array into a flexible `ndarray` is
that meta information (``fill_value``, ...) will be lost.
Examples
--------
>>> import numpy as np
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
data=[[1, --, 3],
[--, 5, --],
[7, --, 9]],
mask=[[False, True, False],
[ True, False, True],
[False, True, False]],
fill_value=999999)
>>> x.toflex()
array([[(1, False), (2, True), (3, False)],
[(4, True), (5, False), (6, True)],
[(7, False), (8, True), (9, False)]],
dtype=[('_data', '<i8'), ('_mask', '?')])
"""
# Get the basic dtype.
ddtype = self.dtype
# Make sure we have a mask
_mask = self._mask
if _mask is None:
_mask = make_mask_none(self.shape, ddtype)
# And get its dtype
        mdtype = _mask.dtype  # use the local mask, which is valid even when self._mask was None
record = np.ndarray(shape=self.shape,
dtype=[('_data', ddtype), ('_mask', mdtype)])
record['_data'] = self._data
        record['_mask'] = _mask
return record
|
Transforms a masked array into a flexible-type array.
The flexible type array that is returned will have two fields:
* the ``_data`` field stores the ``_data`` part of the array.
* the ``_mask`` field stores the ``_mask`` part of the array.
Parameters
----------
None
Returns
-------
record : ndarray
A new flexible-type `ndarray` with two fields: the first element
containing a value, the second element containing the corresponding
mask boolean. The returned record shape matches self.shape.
Notes
-----
A side-effect of transforming a masked array into a flexible `ndarray` is
that meta information (``fill_value``, ...) will be lost.
Examples
--------
>>> import numpy as np
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
data=[[1, --, 3],
[--, 5, --],
[7, --, 9]],
mask=[[False, True, False],
[ True, False, True],
[False, True, False]],
fill_value=999999)
>>> x.toflex()
array([[(1, False), (2, True), (3, False)],
[(4, True), (5, False), (6, True)],
[(7, False), (8, True), (9, False)]],
dtype=[('_data', '<i8'), ('_mask', '?')])
|
python
|
numpy/ma/core.py
| 6,405
|
[
"self"
] | false
| 2
| 7.76
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
trim
|
public StrBuilder trim() {
if (size == 0) {
return this;
}
int len = size;
final char[] buf = buffer;
int pos = 0;
while (pos < len && buf[pos] <= ' ') {
pos++;
}
while (pos < len && buf[len - 1] <= ' ') {
len--;
}
if (len < size) {
delete(len, size);
}
if (pos > 0) {
delete(0, pos);
}
return this;
}
|
Trims the builder by removing characters less than or equal to a space
from the beginning and end.
@return {@code this} instance.
|
java
|
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
| 3,004
|
[] |
StrBuilder
| true
| 8
| 8.24
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
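The same trim rule (drop characters at or below the space character from both ends) is easy to express over a mutable buffer; a minimal Python sketch of the idea, not the StrBuilder code:

```python
def trim_buffer(buf: list) -> list:
    """Remove chars <= ' ' (space and control characters) from both ends, in place."""
    while buf and buf[-1] <= ' ':
        buf.pop()          # trim the end first, like delete(len, size)
    while buf and buf[0] <= ' ':
        del buf[0]         # then trim the front, like delete(0, pos)
    return buf
```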
assign
|
@Override
public void assign(Collection<TopicPartition> partitions) {
acquireAndEnsureOpen();
try {
if (partitions == null) {
throw new IllegalArgumentException("Topic partitions collection to assign to cannot be null");
}
if (partitions.isEmpty()) {
unsubscribe();
return;
}
for (TopicPartition tp : partitions) {
String topic = (tp != null) ? tp.topic() : null;
if (isBlank(topic))
throw new IllegalArgumentException("Topic partitions to assign to cannot have null or empty topic");
}
// Clear the buffered data which are not a part of newly assigned topics
final Set<TopicPartition> currentTopicPartitions = new HashSet<>();
for (TopicPartition tp : subscriptions.assignedPartitions()) {
if (partitions.contains(tp))
currentTopicPartitions.add(tp);
}
fetchBuffer.retainAll(currentTopicPartitions);
// assignment change event will trigger autocommit if it is configured and the group id is specified. This is
// to make sure offsets of topic partitions the consumer is unsubscribing from are committed since there will
// be no following rebalance.
//
// See the ApplicationEventProcessor.process() method that handles this event for more detail.
applicationEventHandler.addAndGet(new AssignmentChangeEvent(
time.milliseconds(),
defaultApiTimeoutDeadlineMs(),
partitions
));
} finally {
release();
}
}
|
Get the current subscription. or an empty set if no such call has
been made.
@return The set of topics currently subscribed to
|
java
|
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java
| 1,785
|
[
"partitions"
] |
void
| true
| 6
| 7.04
|
apache/kafka
| 31,560
|
javadoc
| false
|
padStart
|
function padStart(string, length, chars) {
string = toString(string);
length = toInteger(length);
var strLength = length ? stringSize(string) : 0;
return (length && strLength < length)
? (createPadding(length - strLength, chars) + string)
: string;
}
|
Pads `string` on the left side if it's shorter than `length`. Padding
characters are truncated if they exceed `length`.
@static
@memberOf _
@since 4.0.0
@category String
@param {string} [string=''] The string to pad.
@param {number} [length=0] The padding length.
@param {string} [chars=' '] The string used as padding.
@returns {string} Returns the padded string.
@example
_.padStart('abc', 6);
// => ' abc'
_.padStart('abc', 6, '_-');
// => '_-_abc'
_.padStart('abc', 3);
// => 'abc'
|
javascript
|
lodash.js
| 14,550
|
[
"string",
"length",
"chars"
] | false
| 4
| 7.52
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
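A Python analogue of the padding logic above (truncating the pad string to the exact deficit), offered as an illustrative sketch rather than lodash's code:

```python
def pad_start(s: str, length: int, chars: str = ' ') -> str:
    """Left-pad s to length, truncating the padding if it overshoots."""
    deficit = length - len(s)
    if deficit <= 0 or not chars:
        return s
    repeats = -(-deficit // len(chars))     # ceiling division
    return (chars * repeats)[:deficit] + s  # cut to exactly the deficit
```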
toArray
|
public static double[] toArray(Collection<? extends Number> collection) {
if (collection instanceof DoubleArrayAsList) {
return ((DoubleArrayAsList) collection).toDoubleArray();
}
Object[] boxedArray = collection.toArray();
int len = boxedArray.length;
double[] array = new double[len];
for (int i = 0; i < len; i++) {
// checkNotNull for GWT (do not optimize)
array[i] = ((Number) checkNotNull(boxedArray[i])).doubleValue();
}
return array;
}
|
Returns an array containing each value of {@code collection}, converted to a {@code double}
value in the manner of {@link Number#doubleValue}.
<p>Elements are copied from the argument collection as if by {@code collection.toArray()}.
Calling this method is as thread-safe as calling that method.
@param collection a collection of {@code Number} instances
@return an array containing the same values as {@code collection}, in the same order, converted
to primitives
@throws NullPointerException if {@code collection} or any of its elements is null
@since 1.0 (parameter was {@code Collection<Double>} before 12.0)
|
java
|
android/guava/src/com/google/common/primitives/Doubles.java
| 540
|
[
"collection"
] | true
| 3
| 7.92
|
google/guava
| 51,352
|
javadoc
| false
|
|
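The unboxing loop above maps directly onto a comprehension-style loop; a hypothetical Python counterpart where `float()` plays the role of `Number.doubleValue()` and a `None` element fails fast like `checkNotNull`:

```python
def to_double_array(collection):
    """Convert each element to float, rejecting None elements."""
    result = []
    for item in collection:
        if item is None:
            raise TypeError("null element")   # mirrors checkNotNull
        result.append(float(item))
    return result
```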
symmetricDifference
|
private static <T> Set<T> symmetricDifference(final Set<T> a, final Set<T> b) {
final HashSet<T> result = new HashSet<>();
result.addAll(Sets.difference(a, b));
result.addAll(Sets.difference(b, a));
return result;
}
|
Get the set of fields required by the aggregation which are missing in at least one document.
@param other the other {@link RunningStats} to check
@return a set of field names
|
java
|
modules/aggregations/src/main/java/org/elasticsearch/aggregations/metric/RunningStats.java
| 215
|
[
"a",
"b"
] | true
| 1
| 6.72
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
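The two-directional difference above is exactly the set symmetric difference; in Python it is a built-in operator, shown here for comparison:

```python
def symmetric_difference(a: set, b: set) -> set:
    """Elements in exactly one of a and b."""
    return (a - b) | (b - a)   # equivalently: a ^ b
```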
any
|
def any(self, *args, **kwargs):
"""
Return whether any element is Truthy.
Parameters
----------
*args
Required for compatibility with numpy.
**kwargs
Required for compatibility with numpy.
Returns
-------
bool or array-like (if axis is specified)
A single element array-like may be converted to bool.
See Also
--------
Index.all : Return whether all elements are True.
Series.all : Return whether all elements are True.
Notes
-----
Not a Number (NaN), positive infinity and negative infinity
evaluate to True because these are not equal to zero.
Examples
--------
>>> index = pd.Index([0, 1, 2])
>>> index.any()
True
>>> index = pd.Index([0, 0, 0])
>>> index.any()
False
"""
nv.validate_any(args, kwargs)
self._maybe_disable_logical_methods("any")
vals = self._values
if not isinstance(vals, np.ndarray):
# i.e. EA, call _reduce instead of "any" to get TypeError instead
# of AttributeError
return vals._reduce("any")
return np.any(vals)
|
Return whether any element is Truthy.
Parameters
----------
*args
Required for compatibility with numpy.
**kwargs
Required for compatibility with numpy.
Returns
-------
bool or array-like (if axis is specified)
A single element array-like may be converted to bool.
See Also
--------
Index.all : Return whether all elements are True.
Series.all : Return whether all elements are True.
Notes
-----
Not a Number (NaN), positive infinity and negative infinity
evaluate to True because these are not equal to zero.
Examples
--------
>>> index = pd.Index([0, 1, 2])
>>> index.any()
True
>>> index = pd.Index([0, 0, 0])
>>> index.any()
False
|
python
|
pandas/core/indexes/base.py
| 7,371
|
[
"self"
] | false
| 2
| 6.48
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
|
_handle_anti_join
|
def _handle_anti_join(
self,
join_index: Index,
left_indexer: npt.NDArray[np.intp] | None,
right_indexer: npt.NDArray[np.intp] | None,
) -> tuple[Index, npt.NDArray[np.intp] | None, npt.NDArray[np.intp] | None]:
"""
Handle anti join by returning the correct join index and indexers
Parameters
----------
join_index : Index
join index
left_indexer : np.ndarray[np.intp] or None
left indexer
right_indexer : np.ndarray[np.intp] or None
right indexer
Returns
-------
Index, np.ndarray[np.intp] or None, np.ndarray[np.intp] or None
"""
# Make sure indexers are not None
if left_indexer is None:
left_indexer = np.arange(len(self.left))
if right_indexer is None:
right_indexer = np.arange(len(self.right))
assert self.how in {"left", "right"}
if self.how == "left":
# Filter to rows where left keys are not in right keys
filt = right_indexer == -1
else:
# Filter to rows where right keys are not in left keys
filt = left_indexer == -1
join_index = join_index[filt]
left_indexer = left_indexer[filt]
right_indexer = right_indexer[filt]
return join_index, left_indexer, right_indexer
|
Handle anti join by returning the correct join index and indexers
Parameters
----------
join_index : Index
join index
left_indexer : np.ndarray[np.intp] or None
left indexer
right_indexer : np.ndarray[np.intp] or None
right indexer
Returns
-------
Index, np.ndarray[np.intp] or None, np.ndarray[np.intp] or None
|
python
|
pandas/core/reshape/merge.py
| 1,514
|
[
"self",
"join_index",
"left_indexer",
"right_indexer"
] |
tuple[Index, npt.NDArray[np.intp] | None, npt.NDArray[np.intp] | None]
| true
| 5
| 6.08
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
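The filtering step of the anti-join can be illustrated without pandas: keep only positions where the opposite indexer is -1 (no match found). A simplified list-based sketch under that assumption:

```python
def anti_join_filter(left_indexer, right_indexer, how):
    """Return the (left, right) indexer pairs for rows with no partner."""
    assert how in {"left", "right"}
    # For a left anti-join, a -1 in the right indexer marks an unmatched left row.
    probe = right_indexer if how == "left" else left_indexer
    keep = [i for i, v in enumerate(probe) if v == -1]
    return ([left_indexer[i] for i in keep],
            [right_indexer[i] for i in keep])
```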
cast
|
def cast(cls, series, domain=None, window=None):
"""Convert series to series of this class.
The `series` is expected to be an instance of some polynomial
    series of one of the types supported by the numpy.polynomial
module, but could be some other class that supports the convert
method.
Parameters
----------
series : series
The series instance to be converted.
domain : {None, array_like}, optional
If given, the array must be of the form ``[beg, end]``, where
``beg`` and ``end`` are the endpoints of the domain. If None is
given then the class domain is used. The default is None.
window : {None, array_like}, optional
            If given, the resulting array must be of the form
``[beg, end]``, where ``beg`` and ``end`` are the endpoints of
the window. If None is given then the class window is used. The
default is None.
Returns
-------
new_series : series
A series of the same kind as the calling class and equal to
`series` when evaluated.
See Also
--------
convert : similar instance method
"""
if domain is None:
domain = cls.domain
if window is None:
window = cls.window
return series.convert(domain, cls, window)
|
Convert series to series of this class.
The `series` is expected to be an instance of some polynomial
series of one of the types supported by the numpy.polynomial
module, but could be some other class that supports the convert
method.
Parameters
----------
series : series
The series instance to be converted.
domain : {None, array_like}, optional
If given, the array must be of the form ``[beg, end]``, where
``beg`` and ``end`` are the endpoints of the domain. If None is
given then the class domain is used. The default is None.
window : {None, array_like}, optional
If given, the resulting array must be of the form
``[beg, end]``, where ``beg`` and ``end`` are the endpoints of
the window. If None is given then the class window is used. The
default is None.
Returns
-------
new_series : series
A series of the same kind as the calling class and equal to
`series` when evaluated.
See Also
--------
convert : similar instance method
|
python
|
numpy/polynomial/_polybase.py
| 1,154
|
[
"cls",
"series",
"domain",
"window"
] | false
| 3
| 6.08
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
_gotitem
|
def _gotitem(self, key, ndim, subset=None):
"""
Sub-classes to define. Return a sliced object.
Parameters
----------
key : str / list of selections
ndim : {1, 2}
requested ndim of result
subset : object, default None
subset to act on
"""
# create a new object to prevent aliasing
if subset is None:
subset = self.obj
# we need to make a shallow copy of ourselves
# with the same groupby
kwargs = {attr: getattr(self, attr) for attr in self._attributes}
selection = self._infer_selection(key, subset)
new_win = type(self)(subset, selection=selection, **kwargs)
return new_win
|
Sub-classes to define. Return a sliced object.
Parameters
----------
key : str / list of selections
ndim : {1, 2}
requested ndim of result
subset : object, default None
subset to act on
|
python
|
pandas/core/window/rolling.py
| 275
|
[
"self",
"key",
"ndim",
"subset"
] | false
| 2
| 6.08
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
|
_inherit_from_data
|
def _inherit_from_data(
name: str, delegate: type, cache: bool = False, wrap: bool = False
):
"""
Make an alias for a method of the underlying ExtensionArray.
Parameters
----------
name : str
Name of an attribute the class should inherit from its EA parent.
delegate : class
cache : bool, default False
Whether to convert wrapped properties into cache_readonly
wrap : bool, default False
Whether to wrap the inherited result in an Index.
Returns
-------
attribute, method, property, or cache_readonly
"""
attr = getattr(delegate, name)
if isinstance(attr, property) or type(attr).__name__ == "getset_descriptor":
# getset_descriptor i.e. property defined in cython class
if cache:
def cached(self):
return getattr(self._data, name)
cached.__name__ = name
cached.__doc__ = attr.__doc__
method = cache_readonly(cached)
else:
def fget(self):
result = getattr(self._data, name)
if wrap:
if isinstance(result, type(self._data)):
return type(self)._simple_new(result, name=self.name)
elif isinstance(result, ABCDataFrame):
return result.set_index(self)
return Index(result, name=self.name, dtype=result.dtype)
return result
def fset(self, value) -> None:
setattr(self._data, name, value)
fget.__name__ = name
fget.__doc__ = attr.__doc__
method = property(fget, fset)
elif not callable(attr):
# just a normal attribute, no wrapping
method = attr
else:
# error: Incompatible redefinition (redefinition with type "Callable[[Any,
# VarArg(Any), KwArg(Any)], Any]", original type "property")
def method(self, *args, **kwargs): # type: ignore[misc]
if "inplace" in kwargs:
raise ValueError(f"cannot use inplace with {type(self).__name__}")
result = attr(self._data, *args, **kwargs)
if wrap:
if isinstance(result, type(self._data)):
return type(self)._simple_new(result, name=self.name)
elif isinstance(result, ABCDataFrame):
return result.set_index(self)
return Index(result, name=self.name, dtype=result.dtype)
return result
# error: "property" has no attribute "__name__"
method.__name__ = name # type: ignore[attr-defined]
method.__doc__ = attr.__doc__
method.__signature__ = signature(attr) # type: ignore[attr-defined]
return method
|
Make an alias for a method of the underlying ExtensionArray.
Parameters
----------
name : str
Name of an attribute the class should inherit from its EA parent.
delegate : class
cache : bool, default False
Whether to convert wrapped properties into cache_readonly
wrap : bool, default False
Whether to wrap the inherited result in an Index.
Returns
-------
attribute, method, property, or cache_readonly
|
python
|
pandas/core/indexes/extension.py
| 35
|
[
"name",
"delegate",
"cache",
"wrap"
] | true
| 14
| 6.8
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
|
optInt
|
public int optInt(int index, int fallback) {
Object object = opt(index);
Integer result = JSON.toInteger(object);
return result != null ? result : fallback;
}
|
Returns the value at {@code index} if it exists and is an int or can be coerced to
an int. Returns {@code fallback} otherwise.
@param index the index to get the value from
@param fallback the fallback value
@return the value at {@code index} of {@code fallback}
|
java
|
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONArray.java
| 432
|
[
"index",
"fallback"
] | true
| 2
| 8.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
|
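A lenient-coercion accessor like `optInt` can be sketched in Python, where any failure to index or coerce yields the fallback — a hypothetical analogue, not the Spring shaded-JSON code:

```python
def opt_int(values, index, fallback):
    """Return values[index] coerced to int, or fallback on any failure."""
    try:
        return int(values[index])
    except (IndexError, TypeError, ValueError):
        return fallback
```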
doEvaluate
|
private @Nullable Object doEvaluate(@Nullable String value) {
return this.beanFactory.evaluateBeanDefinitionString(value, this.beanDefinition);
}
|
Evaluate the given String value as an expression, if necessary.
@param value the original value (may be an expression)
@return the resolved value if necessary, or the original String value
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/BeanDefinitionValueResolver.java
| 311
|
[
"value"
] |
Object
| true
| 1
| 6.64
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
_expand_mapped_task_if_needed
|
def _expand_mapped_task_if_needed(ti: TI) -> Iterable[TI] | None:
"""
Try to expand the ti, if needed.
If the ti needs expansion, newly created task instances are
returned as well as the original ti.
The original ti is also modified in-place and assigned the
``map_index`` of 0.
If the ti does not need expansion, either because the task is not
mapped, or has already been expanded, *None* is returned.
"""
from airflow.models.mappedoperator import is_mapped
if TYPE_CHECKING:
assert ti.task
if ti.map_index >= 0: # Already expanded, we're good.
return None
if is_mapped(ti.task):
# If we get here, it could be that we are moving from non-mapped to mapped
# after task instance clearing or this ti is not yet expanded. Safe to clear
# the db references.
ti.clear_db_references(session=session)
try:
expanded_tis, _ = TaskMap.expand_mapped_task(ti.task, self.run_id, session=session)
except NotMapped: # Not a mapped task, nothing needed.
return None
if expanded_tis:
return expanded_tis
return ()
|
Try to expand the ti, if needed.
If the ti needs expansion, newly created task instances are
returned as well as the original ti.
The original ti is also modified in-place and assigned the
``map_index`` of 0.
If the ti does not need expansion, either because the task is not
mapped, or has already been expanded, *None* is returned.
|
python
|
airflow-core/src/airflow/models/dagrun.py
| 1,519
|
[
"ti"
] |
Iterable[TI] | None
| true
| 5
| 6
|
apache/airflow
| 43,597
|
unknown
| false
|
_parse_supported_ops_with_weights
|
def _parse_supported_ops_with_weights(spec: str) -> tuple[list[str], dict[str, float]]:
"""Parse --supported-ops string.
Format: comma-separated fully-qualified torch ops, each optionally with =weight.
Example: "torch.matmul=5,torch.nn.functional.rms_norm=5,torch.add"
Returns (ops_list, weights_dict)
"""
ops: list[str] = []
weights: dict[str, float] = {}
if not spec:
return ops, weights
for entry in spec.split(","):
entry = entry.strip()
if not entry:
continue
if "=" in entry:
name, w = entry.split("=", 1)
name = name.strip()
try:
weight = float(w.strip())
except ValueError:
continue
ops.append(name)
weights[name] = weight
else:
ops.append(entry)
return ops, weights
|
Parse --supported-ops string.
Format: comma-separated fully-qualified torch ops, each optionally with =weight.
Example: "torch.matmul=5,torch.nn.functional.rms_norm=5,torch.add"
Returns (ops_list, weights_dict)
|
python
|
tools/experimental/torchfuzz/fuzzer.py
| 22
|
[
"spec"
] |
tuple[list[str], dict[str, float]]
| true
| 6
| 6.4
|
pytorch/pytorch
| 96,034
|
unknown
| false
|
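The same parsing contract fits in a few lines using `str.partition`; a condensed sketch with the same skip-on-bad-weight behavior (an entry with a malformed weight is dropped entirely):

```python
def parse_ops(spec: str):
    """Parse 'name[=weight],...' into (ops list, weights dict)."""
    ops, weights = [], {}
    for entry in (e.strip() for e in (spec or "").split(",")):
        if not entry:
            continue
        name, eq, w = entry.partition("=")
        name = name.strip()
        if eq:
            try:
                weights[name] = float(w.strip())
            except ValueError:
                continue           # malformed weight: drop the whole entry
        ops.append(name)
    return ops, weights
```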
repackage
|
public void repackage(File destination, Libraries libraries, @Nullable FileTime lastModifiedTime)
throws IOException {
Assert.isTrue(destination != null && !destination.isDirectory(), "Invalid destination");
getLayout(); // get layout early
destination = destination.getAbsoluteFile();
File source = getSource();
if (isAlreadyPackaged() && source.equals(destination)) {
return;
}
File workingSource = source;
if (source.equals(destination)) {
workingSource = getBackupFile();
workingSource.delete();
renameFile(source, workingSource);
}
destination.delete();
try {
try (JarFile sourceJar = new JarFile(workingSource)) {
repackage(sourceJar, destination, libraries, lastModifiedTime);
}
}
finally {
if (!this.backupSource && !source.equals(workingSource)) {
deleteFile(workingSource);
}
}
}
|
Repackage to the given destination so that it can be launched using '
{@literal java -jar}'.
@param destination the destination file (may be the same as the source)
@param libraries the libraries required to run the archive
@param lastModifiedTime an optional last modified time to apply to the archive and
its contents
@throws IOException if the file cannot be repackaged
@since 4.0.0
|
java
|
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/Repackager.java
| 110
|
[
"destination",
"libraries",
"lastModifiedTime"
] |
void
| true
| 7
| 6.56
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
resolveDeclaredEventType
|
static @Nullable ResolvableType resolveDeclaredEventType(Class<?> listenerType) {
ResolvableType eventType = eventTypeCache.get(listenerType);
if (eventType == null) {
eventType = ResolvableType.forClass(listenerType).as(ApplicationListener.class).getGeneric();
eventTypeCache.put(listenerType, eventType);
}
return (eventType != ResolvableType.NONE ? eventType : null);
}
|
Create a new GenericApplicationListener for the given delegate.
@param delegate the delegate listener to be invoked
|
java
|
spring-context/src/main/java/org/springframework/context/event/GenericApplicationListenerAdapter.java
| 109
|
[
"listenerType"
] |
ResolvableType
| true
| 3
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
getParameterNames
|
@SuppressWarnings("NullAway") // Dataflow analysis limitation
public static @Nullable String[] getParameterNames(Constructor<?> ctor) {
ConstructorProperties cp = ctor.getAnnotation(ConstructorProperties.class);
@Nullable String[] paramNames = (cp != null ? cp.value() :
DefaultParameterNameDiscoverer.getSharedInstance().getParameterNames(ctor));
Assert.state(paramNames != null, () -> "Cannot resolve parameter names for constructor " + ctor);
int parameterCount = (KOTLIN_REFLECT_PRESENT && KotlinDelegate.hasDefaultConstructorMarker(ctor) ?
ctor.getParameterCount() - 1 : ctor.getParameterCount());
Assert.state(paramNames.length == parameterCount,
() -> "Invalid number of parameter names: " + paramNames.length + " for constructor " + ctor);
return paramNames;
}
|
Determine required parameter names for the given constructor,
considering the JavaBeans {@link ConstructorProperties} annotation
as well as Spring's {@link DefaultParameterNameDiscoverer}.
@param ctor the constructor to find parameter names for
@return the parameter names (matching the constructor's parameter count)
@throws IllegalStateException if the parameter names are not resolvable
@since 5.3
@see ConstructorProperties
@see DefaultParameterNameDiscoverer
|
java
|
spring-beans/src/main/java/org/springframework/beans/BeanUtils.java
| 655
|
[
"ctor"
] | true
| 4
| 7.12
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
hasApplicableProcessors
|
public static boolean hasApplicableProcessors(Object bean, List<DestructionAwareBeanPostProcessor> postProcessors) {
if (!CollectionUtils.isEmpty(postProcessors)) {
for (DestructionAwareBeanPostProcessor processor : postProcessors) {
if (processor.requiresDestruction(bean)) {
return true;
}
}
}
return false;
}
|
Check whether the given bean has destruction-aware post-processors applying to it.
@param bean the bean instance
@param postProcessors the post-processor candidates
|
java
|
spring-beans/src/main/java/org/springframework/beans/factory/support/DisposableBeanAdapter.java
| 466
|
[
"bean",
"postProcessors"
] | true
| 3
| 6.08
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
exec
|
function exec(command: string, options: cp.ExecOptions): Promise<{ stdout: string; stderr: string }> {
return new Promise<{ stdout: string; stderr: string }>((resolve, reject) => {
cp.exec(command, options, (error, stdout, stderr) => {
			if (error) {
				reject({ error, stdout, stderr });
				return;
			}
			resolve({ stdout, stderr });
});
});
}
|
Check if the given filename is a file.
If returns false in case the file does not exist or
the file stats cannot be accessed/queried or it
is no file at all.
@param filename
the filename to the checked
@returns
true in case the file exists, in any other case false.
|
typescript
|
extensions/gulp/src/main.ts
| 41
|
[
"command",
"options"
] | true
| 2
| 7.92
|
microsoft/vscode
| 179,840
|
jsdoc
| false
|
|
reportAvailableDependencies
|
private void reportAvailableDependencies(InitializrServiceMetadata metadata, StringBuilder report) {
report.append("Available dependencies:").append(NEW_LINE);
report.append("-----------------------").append(NEW_LINE);
List<Dependency> dependencies = getSortedDependencies(metadata);
for (Dependency dependency : dependencies) {
report.append(dependency.getId()).append(" - ").append(dependency.getName());
if (dependency.getDescription() != null) {
report.append(": ").append(dependency.getDescription());
}
report.append(NEW_LINE);
}
}
|
Generate a report for the specified service. The report contains the available
capabilities as advertised by the root endpoint.
@param url the url of the service
@return the report that describes the service
@throws IOException if the report cannot be generated
|
java
|
cli/spring-boot-cli/src/main/java/org/springframework/boot/cli/command/init/ServiceCapabilitiesReportGenerator.java
| 80
|
[
"metadata",
"report"
] |
void
| true
| 2
| 8.08
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
insertDefaultValueAssignmentForBindingPattern
|
function insertDefaultValueAssignmentForBindingPattern(statements: Statement[], parameter: ParameterDeclaration, name: BindingPattern, initializer: Expression | undefined): boolean {
// In cases where a binding pattern is simply '[]' or '{}',
// we usually don't want to emit a var declaration; however, in the presence
// of an initializer, we must emit that expression to preserve side effects.
if (name.elements.length > 0) {
insertStatementAfterCustomPrologue(
statements,
setEmitFlags(
factory.createVariableStatement(
/*modifiers*/ undefined,
factory.createVariableDeclarationList(
flattenDestructuringBinding(
parameter,
visitor,
context,
FlattenLevel.All,
factory.getGeneratedNameForNode(parameter),
),
),
),
EmitFlags.CustomPrologue,
),
);
return true;
}
else if (initializer) {
insertStatementAfterCustomPrologue(
statements,
setEmitFlags(
factory.createExpressionStatement(
factory.createAssignment(
factory.getGeneratedNameForNode(parameter),
Debug.checkDefined(visitNode(initializer, visitor, isExpression)),
),
),
EmitFlags.CustomPrologue,
),
);
return true;
}
return false;
}
|
Adds statements to the body of a function-like node for parameters with binding patterns
@param statements The statements for the new function body.
@param parameter The parameter for the function.
@param name The name of the parameter.
@param initializer The initializer for the parameter.
|
typescript
|
src/compiler/transformers/es2015.ts
| 1,937
|
[
"statements",
"parameter",
"name",
"initializer"
] | true
| 4
| 6.88
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
escape
|
protected abstract char @Nullable [] escape(int cp);
|
Returns the escaped form of the given Unicode code point, or {@code null} if this code point
does not need to be escaped. When called as part of an escaping operation, the given code point
is guaranteed to be in the range {@code 0 <= cp <= Character#MAX_CODE_POINT}.
<p>If an empty array is returned, this effectively strips the input character from the
resulting text.
<p>If the character does not need to be escaped, this method should return {@code null}, rather
than an array containing the character representation of the code point. This enables the
escaping algorithm to perform more efficiently.
<p>If the implementation of this method cannot correctly handle a particular code point then it
should either throw an appropriate runtime exception or return a suitable replacement
character. It must never silently discard invalid input as this may constitute a security risk.
@param cp the Unicode code point to escape if necessary
@return the replacement characters, or {@code null} if no escaping was needed
|
java
|
android/guava/src/com/google/common/escape/UnicodeEscaper.java
| 80
|
[
"cp"
] | true
| 1
| 6.8
|
google/guava
| 51,352
|
javadoc
| false
|
|
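The escaping contract above — return null when no escaping is needed, an empty replacement to strip the character, otherwise the replacement characters — can be sketched per code point. `escape_cp` and its escape choices are illustrative, not Guava's actual escaper:

```python
def escape_cp(cp):
    """Per-code-point escaper: None = leave as-is, '' = strip, else replacement."""
    if cp < 0x20:                      # strip control characters entirely
        return ""
    if cp in (0x26, 0x3C, 0x3E):       # & < > get a numeric-entity escape
        return f"&#{cp};"
    return None                        # everything else passes through unescaped

def escape_string(s):
    """Apply the per-code-point contract across a whole string."""
    out = []
    for ch in s:
        replacement = escape_cp(ord(ch))
        out.append(ch if replacement is None else replacement)
    return "".join(out)
```

Returning `None` rather than the character itself matters for the same reason the Javadoc gives: the caller can copy unescaped runs wholesale instead of re-emitting one character at a time.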
toString
|
@Override
public String toString() {
return "ItemHint{name='" + this.name + "', values=" + this.values + ", providers=" + this.providers + '}';
}
|
Return a string representation of this {@link ItemHint}, including its name,
values and providers.
|
java
|
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/metadata/ItemHint.java
| 94
|
[] |
String
| true
| 1
| 6.64
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
on_signature
|
def on_signature(self, sig, **headers) -> dict:
"""Method that is called on signature stamping.
Arguments:
sig (Signature): Signature that is stamped.
headers (Dict): Partial headers that could be merged with existing headers.
Returns:
Dict: headers to update.
"""
|
Method that is called on signature stamping.
Arguments:
sig (Signature): Signature that is stamped.
headers (Dict): Partial headers that could be merged with existing headers.
Returns:
Dict: headers to update.
|
python
|
celery/canvas.py
| 165
|
[
"self",
"sig"
] |
dict
| true
| 1
| 6.56
|
celery/celery
| 27,741
|
google
| false
|
_addsub_int_array_or_scalar
|
def _addsub_int_array_or_scalar(
self, other: np.ndarray | int, op: Callable[[Any, Any], Any]
) -> Self:
"""
Add or subtract array of integers.
Parameters
----------
other : np.ndarray[int64] or int
op : {operator.add, operator.sub}
Returns
-------
result : PeriodArray
"""
assert op in [operator.add, operator.sub]
if op is operator.sub:
other = -other
res_values = add_overflowsafe(self.asi8, np.asarray(other, dtype="i8"))
return type(self)(res_values, dtype=self.dtype)
|
Add or subtract array of integers.
Parameters
----------
other : np.ndarray[int64] or int
op : {operator.add, operator.sub}
Returns
-------
result : PeriodArray
|
python
|
pandas/core/arrays/period.py
| 1,012
|
[
"self",
"other",
"op"
] |
Self
| true
| 2
| 6.08
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
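The helper above reduces subtraction to overflow-safe addition by negating `other`. A scalar sketch of that idea (Python ints never wrap, so the int64 bound is checked explicitly; `add_overflowsafe_scalar` is a hypothetical stand-in for pandas' array-level `add_overflowsafe`):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def add_overflowsafe_scalar(a, b):
    """Add two int64 values, raising OverflowError instead of wrapping."""
    result = a + b
    if not INT64_MIN <= result <= INT64_MAX:
        raise OverflowError("int64 addition overflowed")
    return result

def addsub(a, b, subtract=False):
    # mirror the pandas trick: subtraction is addition of the negation
    return add_overflowsafe_scalar(a, -b if subtract else b)
```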
masked_all_like
|
def masked_all_like(arr):
"""
Empty masked array with the properties of an existing array.
Return an empty masked array of the same shape and dtype as
the array `arr`, where all the data are masked.
Parameters
----------
arr : ndarray
An array describing the shape and dtype of the required MaskedArray.
Returns
-------
a : MaskedArray
A masked array with all data masked.
Raises
------
AttributeError
If `arr` doesn't have a shape attribute (i.e. not an ndarray)
See Also
--------
masked_all : Empty masked array with all elements masked.
Notes
-----
Unlike other masked array creation functions (e.g. `numpy.ma.zeros_like`,
`numpy.ma.ones_like`, `numpy.ma.full_like`), `masked_all_like` does not
initialize the values of the array, and may therefore be marginally
faster. However, the values stored in the newly allocated array are
arbitrary. For reproducible behavior, be sure to set each element of the
array before reading.
Examples
--------
>>> import numpy as np
>>> arr = np.zeros((2, 3), dtype=np.float32)
>>> arr
array([[0., 0., 0.],
[0., 0., 0.]], dtype=float32)
>>> np.ma.masked_all_like(arr)
masked_array(
data=[[--, --, --],
[--, --, --]],
mask=[[ True, True, True],
[ True, True, True]],
fill_value=np.float64(1e+20),
dtype=float32)
The dtype of the masked array matches the dtype of `arr`.
>>> arr.dtype
dtype('float32')
>>> np.ma.masked_all_like(arr).dtype
dtype('float32')
"""
a = np.empty_like(arr).view(MaskedArray)
a._mask = np.ones(a.shape, dtype=make_mask_descr(a.dtype))
return a
|
Empty masked array with the properties of an existing array.
Return an empty masked array of the same shape and dtype as
the array `arr`, where all the data are masked.
Parameters
----------
arr : ndarray
An array describing the shape and dtype of the required MaskedArray.
Returns
-------
a : MaskedArray
A masked array with all data masked.
Raises
------
AttributeError
If `arr` doesn't have a shape attribute (i.e. not an ndarray)
See Also
--------
masked_all : Empty masked array with all elements masked.
Notes
-----
Unlike other masked array creation functions (e.g. `numpy.ma.zeros_like`,
`numpy.ma.ones_like`, `numpy.ma.full_like`), `masked_all_like` does not
initialize the values of the array, and may therefore be marginally
faster. However, the values stored in the newly allocated array are
arbitrary. For reproducible behavior, be sure to set each element of the
array before reading.
Examples
--------
>>> import numpy as np
>>> arr = np.zeros((2, 3), dtype=np.float32)
>>> arr
array([[0., 0., 0.],
[0., 0., 0.]], dtype=float32)
>>> np.ma.masked_all_like(arr)
masked_array(
data=[[--, --, --],
[--, --, --]],
mask=[[ True, True, True],
[ True, True, True]],
fill_value=np.float64(1e+20),
dtype=float32)
The dtype of the masked array matches the dtype of `arr`.
>>> arr.dtype
dtype('float32')
>>> np.ma.masked_all_like(arr).dtype
dtype('float32')
|
python
|
numpy/ma/extras.py
| 181
|
[
"arr"
] | false
| 1
| 6.24
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
next
|
public String next(final int count, final int start, final int end, final boolean letters, final boolean numbers) {
return random(count, start, end, letters, numbers, null, random());
}
|
Creates a random string whose length is the number of characters specified.
<p>
Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
</p>
@param count the length of random string to create.
@param start the position in set of chars to start at.
@param end the position in set of chars to end before.
@param letters if {@code true}, generated string may include alphabetic characters.
@param numbers if {@code true}, generated string may include numeric characters.
@return the random string.
@throws IllegalArgumentException if {@code count} < 0.
@since 3.16.0
|
java
|
src/main/java/org/apache/commons/lang3/RandomStringUtils.java
| 749
|
[
"count",
"start",
"end",
"letters",
"numbers"
] |
String
| true
| 1
| 6.8
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
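The `next(count, ...)` family draws `count` characters from a configurable character set. A simplified sketch with a fixed alphanumeric alphabet (not the actual Commons Lang implementation, which works on code-point ranges and letter/number filters):

```python
import random
import string

def next_random_string(count, chars=string.ascii_letters + string.digits, rng=None):
    """Random string of `count` characters drawn from `chars`.

    Raises ValueError for a negative count, matching the documented
    IllegalArgumentException behavior.
    """
    if count < 0:
        raise ValueError("count must be >= 0")
    rng = rng or random.Random()
    return "".join(rng.choice(chars) for _ in range(count))
```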
as_unit
|
def as_unit(self, unit: TimeUnit, round_ok: bool = True) -> Self:
"""
Convert to a dtype with the given unit resolution.
The limits of timestamp representation depend on the chosen resolution.
Different resolutions can be converted to each other through as_unit.
Parameters
----------
unit : {'s', 'ms', 'us', 'ns'}
round_ok : bool, default True
If False and the conversion requires rounding, raise ValueError.
Returns
-------
same type as self
Converted to the specified unit.
See Also
--------
Timestamp.as_unit : Convert to the given unit.
Examples
--------
For :class:`pandas.DatetimeIndex`:
>>> idx = pd.DatetimeIndex(["2020-01-02 01:02:03.004005006"])
>>> idx
DatetimeIndex(['2020-01-02 01:02:03.004005006'],
dtype='datetime64[ns]', freq=None)
>>> idx.as_unit("s")
DatetimeIndex(['2020-01-02 01:02:03'], dtype='datetime64[s]', freq=None)
For :class:`pandas.TimedeltaIndex`:
>>> tdelta_idx = pd.to_timedelta(["1 day 3 min 2 us 42 ns"])
>>> tdelta_idx
TimedeltaIndex(['1 days 00:03:00.000002042'],
dtype='timedelta64[ns]', freq=None)
>>> tdelta_idx.as_unit("s")
TimedeltaIndex(['1 days 00:03:00'], dtype='timedelta64[s]', freq=None)
"""
if unit not in ["s", "ms", "us", "ns"]:
raise ValueError("Supported units are 's', 'ms', 'us', 'ns'")
dtype = np.dtype(f"{self.dtype.kind}8[{unit}]")
new_values = astype_overflowsafe(self._ndarray, dtype, round_ok=round_ok)
if isinstance(self.dtype, np.dtype):
new_dtype = new_values.dtype
else:
tz = cast("DatetimeArray", self).tz
new_dtype = DatetimeTZDtype(tz=tz, unit=unit)
# error: Unexpected keyword argument "freq" for "_simple_new" of
# "NDArrayBacked" [call-arg]
return type(self)._simple_new(
new_values,
dtype=new_dtype,
freq=self.freq, # type: ignore[call-arg]
)
|
Convert to a dtype with the given unit resolution.
The limits of timestamp representation depend on the chosen resolution.
Different resolutions can be converted to each other through as_unit.
Parameters
----------
unit : {'s', 'ms', 'us', 'ns'}
round_ok : bool, default True
If False and the conversion requires rounding, raise ValueError.
Returns
-------
same type as self
Converted to the specified unit.
See Also
--------
Timestamp.as_unit : Convert to the given unit.
Examples
--------
For :class:`pandas.DatetimeIndex`:
>>> idx = pd.DatetimeIndex(["2020-01-02 01:02:03.004005006"])
>>> idx
DatetimeIndex(['2020-01-02 01:02:03.004005006'],
dtype='datetime64[ns]', freq=None)
>>> idx.as_unit("s")
DatetimeIndex(['2020-01-02 01:02:03'], dtype='datetime64[s]', freq=None)
For :class:`pandas.TimedeltaIndex`:
>>> tdelta_idx = pd.to_timedelta(["1 day 3 min 2 us 42 ns"])
>>> tdelta_idx
TimedeltaIndex(['1 days 00:03:00.000002042'],
dtype='timedelta64[ns]', freq=None)
>>> tdelta_idx.as_unit("s")
TimedeltaIndex(['1 days 00:03:00'], dtype='timedelta64[s]', freq=None)
|
python
|
pandas/core/arrays/datetimelike.py
| 2,003
|
[
"self",
"unit",
"round_ok"
] |
Self
| true
| 4
| 8.16
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
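The unit-conversion idea — rescale between resolutions and refuse lossy rounding when `round_ok` is false — can be sketched with plain integers, using nanoseconds as the base resolution (a simplification of the dtype-level conversion above):

```python
_FACTORS = {"s": 10**9, "ms": 10**6, "us": 10**3, "ns": 1}

def as_unit(value_ns, unit, round_ok=True):
    """Convert a nanosecond count to `unit`, refusing lossy conversion
    when round_ok is False (mirroring the pandas semantics above)."""
    if unit not in _FACTORS:
        raise ValueError("Supported units are 's', 'ms', 'us', 'ns'")
    factor = _FACTORS[unit]
    if not round_ok and value_ns % factor != 0:
        raise ValueError(f"Cannot losslessly convert to '{unit}'")
    return value_ns // factor
```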
keysIn
|
function keysIn(object) {
return isArrayLike(object) ? arrayLikeKeys(object, true) : baseKeysIn(object);
}
|
Creates an array of the own and inherited enumerable property names of `object`.
**Note:** Non-object values are coerced to objects.
@static
@memberOf _
@since 3.0.0
@category Object
@param {Object} object The object to query.
@returns {Array} Returns the array of property names.
@example
function Foo() {
this.a = 1;
this.b = 2;
}
Foo.prototype.c = 3;
_.keysIn(new Foo);
// => ['a', 'b', 'c'] (iteration order is not guaranteed)
|
javascript
|
lodash.js
| 13,440
|
[
"object"
] | false
| 2
| 7.44
|
lodash/lodash
| 61,490
|
jsdoc
| false
|
|
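A rough Python analogue of `keysIn` walks the class MRO in addition to the instance dictionary, much as lodash walks the prototype chain (the dunder filter here is an arbitrary illustration choice, not part of lodash's semantics):

```python
def keys_in(obj):
    """Own instance attributes plus inherited class-level names
    (non-dunder), loosely mirroring lodash's _.keysIn."""
    names = list(vars(obj))                    # own properties
    for klass in type(obj).__mro__:            # walk the "prototype chain"
        for name in vars(klass):
            if not name.startswith("__") and name not in names:
                names.append(name)             # inherited property
    return names

class Foo:
    c = 3                                      # class-level, i.e. "prototype" property
    def __init__(self):
        self.a = 1
        self.b = 2
```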
clearCache
|
public void clearCache() {
Handler.clearCache();
org.springframework.boot.loader.net.protocol.nested.Handler.clearCache();
try {
clearJarFiles();
}
catch (IOException ex) {
// Ignore
}
for (URL url : this.urls) {
if (isJarUrl(url)) {
clearCache(url);
}
}
}
|
Clear any caches. This method is called reflectively by
{@code ClearCachesApplicationListener}.
|
java
|
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/net/protocol/jar/JarUrlClassLoader.java
| 203
|
[] |
void
| true
| 3
| 6.24
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
withBoundProperties
|
ConfigDataEnvironmentContributor withBoundProperties(Iterable<ConfigDataEnvironmentContributor> contributors,
@Nullable ConfigDataActivationContext activationContext) {
ConfigurationPropertySource configurationPropertySource = getConfigurationPropertySource();
Assert.state(configurationPropertySource != null, "'configurationPropertySource' must not be null");
Iterable<ConfigurationPropertySource> sources = Collections.singleton(configurationPropertySource);
PlaceholdersResolver placeholdersResolver = new ConfigDataEnvironmentContributorPlaceholdersResolver(
contributors, activationContext, this, true, this.conversionService);
Binder binder = new Binder(sources, placeholdersResolver, null, null, null);
ConfigDataProperties properties = ConfigDataProperties.get(binder);
if (properties != null && this.configDataOptions.contains(ConfigData.Option.IGNORE_IMPORTS)) {
properties = properties.withoutImports();
}
return new ConfigDataEnvironmentContributor(Kind.BOUND_IMPORT, this.location, this.resource,
this.fromProfileSpecificImport, this.propertySource, this.configurationPropertySource, properties,
this.configDataOptions, null, this.conversionService);
}
|
Create a new {@link ConfigDataEnvironmentContributor} with bound
{@link ConfigDataProperties}.
@param contributors the contributors used for binding
@param activationContext the activation context
@return a new contributor instance
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataEnvironmentContributor.java
| 247
|
[
"contributors",
"activationContext"
] |
ConfigDataEnvironmentContributor
| true
| 3
| 7.12
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
random
|
@Deprecated
public static String random(final int count, final boolean letters, final boolean numbers) {
return secure().next(count, letters, numbers);
}
|
Creates a random string whose length is the number of characters specified.
<p>
Characters will be chosen from the set of alpha-numeric characters as indicated by the arguments.
</p>
@param count the length of random string to create.
@param letters if {@code true}, generated string may include alphabetic characters.
@param numbers if {@code true}, generated string may include numeric characters.
@return the random string.
@throws IllegalArgumentException if {@code count} < 0.
@deprecated Use {@link #next(int, boolean, boolean)} from {@link #secure()}, {@link #secureStrong()}, or {@link #insecure()}.
|
java
|
src/main/java/org/apache/commons/lang3/RandomStringUtils.java
| 153
|
[
"count",
"letters",
"numbers"
] |
String
| true
| 1
| 6.32
|
apache/commons-lang
| 2,896
|
javadoc
| false
|
check_if_pidfile_process_is_running
|
def check_if_pidfile_process_is_running(pid_file: str, process_name: str):
"""
Check if a pidfile already exists and process is still running.
If process is dead then pidfile is removed.
:param pid_file: path to the pidfile
:param process_name: name used in exception if process is up and
running
"""
pid_lock_file = PIDLockFile(path=pid_file)
# If file exists
if pid_lock_file.is_locked():
# Read the pid
pid = pid_lock_file.read_pid()
if pid is None:
return
try:
# Check if process is still running
proc = psutil.Process(pid)
if proc.is_running():
raise AirflowException(f"The {process_name} is already running under PID {pid}.")
except psutil.NoSuchProcess:
# If process is dead remove the pidfile
pid_lock_file.break_lock()
|
Check if a pidfile already exists and process is still running.
If process is dead then pidfile is removed.
:param pid_file: path to the pidfile
:param process_name: name used in exception if process is up and
running
|
python
|
airflow-core/src/airflow/utils/process_utils.py
| 349
|
[
"pid_file",
"process_name"
] | true
| 4
| 6.88
|
apache/airflow
| 43,597
|
sphinx
| false
|
|
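The liveness check above can be sketched with `os.kill(pid, 0)`, which probes for process existence without sending a signal; the stale-pidfile cleanup step is reduced to a return value here:

```python
import os

def is_pid_running(pid):
    """True if a process with this pid exists (signal 0 probes only)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True            # exists, but owned by another user
    return True

def check_pidfile(pid, process_name):
    """Raise if the recorded pid is still alive; report 'stale' otherwise.

    The caller would then break the lock / remove the pidfile, as in
    the Airflow helper above.
    """
    if is_pid_running(pid):
        raise RuntimeError(f"The {process_name} is already running under PID {pid}.")
    return "stale"
```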
swaplevel
|
def swaplevel(self, i=-2, j=-1) -> MultiIndex:
"""
Swap level i with level j.
Calling this method does not change the ordering of the values.
Default is to swap the last two levels of the MultiIndex.
Parameters
----------
i : int, str, default -2
First level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed. If i is a negative int, the first
level is indexed relative to the end of the MultiIndex.
j : int, str, default -1
Second level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed. If j is a negative int, the second
level is indexed relative to the end of the MultiIndex.
Returns
-------
MultiIndex
A new MultiIndex.
See Also
--------
Series.swaplevel : Swap levels i and j in a MultiIndex.
DataFrame.swaplevel : Swap levels i and j in a MultiIndex on a
particular axis.
Examples
--------
>>> mi = pd.MultiIndex(
... levels=[["a", "b"], ["bb", "aa"], ["aaa", "bbb"]],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]],
... )
>>> mi
MultiIndex([('a', 'bb', 'bbb'),
('a', 'aa', 'aaa'),
('b', 'bb', 'bbb'),
('b', 'aa', 'aaa')],
)
>>> mi.swaplevel()
MultiIndex([('a', 'bbb', 'bb'),
('a', 'aaa', 'aa'),
('b', 'bbb', 'bb'),
('b', 'aaa', 'aa')],
)
>>> mi.swaplevel(0)
MultiIndex([('bbb', 'bb', 'a'),
('aaa', 'aa', 'a'),
('bbb', 'bb', 'b'),
('aaa', 'aa', 'b')],
)
>>> mi.swaplevel(0, 1)
MultiIndex([('bb', 'a', 'bbb'),
('aa', 'a', 'aaa'),
('bb', 'b', 'bbb'),
('aa', 'b', 'aaa')],
)
"""
new_levels = list(self.levels)
new_codes = list(self.codes)
new_names = list(self.names)
        i = self._get_level_number(i)  # normalize negative or name-based levels
        j = self._get_level_number(j)
new_levels[i], new_levels[j] = new_levels[j], new_levels[i]
new_codes[i], new_codes[j] = new_codes[j], new_codes[i]
new_names[i], new_names[j] = new_names[j], new_names[i]
return MultiIndex(
levels=new_levels, codes=new_codes, names=new_names, verify_integrity=False
)
|
Swap level i with level j.
Calling this method does not change the ordering of the values.
Default is to swap the last two levels of the MultiIndex.
Parameters
----------
i : int, str, default -2
First level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed. If i is a negative int, the first
level is indexed relative to the end of the MultiIndex.
j : int, str, default -1
Second level of index to be swapped. Can pass level name as string.
Type of parameters can be mixed. If j is a negative int, the second
level is indexed relative to the end of the MultiIndex.
Returns
-------
MultiIndex
A new MultiIndex.
See Also
--------
Series.swaplevel : Swap levels i and j in a MultiIndex.
DataFrame.swaplevel : Swap levels i and j in a MultiIndex on a
particular axis.
Examples
--------
>>> mi = pd.MultiIndex(
... levels=[["a", "b"], ["bb", "aa"], ["aaa", "bbb"]],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]],
... )
>>> mi
MultiIndex([('a', 'bb', 'bbb'),
('a', 'aa', 'aaa'),
('b', 'bb', 'bbb'),
('b', 'aa', 'aaa')],
)
>>> mi.swaplevel()
MultiIndex([('a', 'bbb', 'bb'),
('a', 'aaa', 'aa'),
('b', 'bbb', 'bb'),
('b', 'aaa', 'aa')],
)
>>> mi.swaplevel(0)
MultiIndex([('bbb', 'bb', 'a'),
('aaa', 'aa', 'a'),
('bbb', 'bb', 'b'),
('aaa', 'aa', 'b')],
)
>>> mi.swaplevel(0, 1)
MultiIndex([('bb', 'a', 'bbb'),
('aa', 'a', 'aaa'),
('bb', 'b', 'bbb'),
('aa', 'b', 'aaa')],
)
|
python
|
pandas/core/indexes/multi.py
| 2,703
|
[
"self",
"i",
"j"
] |
MultiIndex
| true
| 1
| 7.2
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
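The core of `swaplevel` — normalize possibly negative level indices, then swap the same positions in the parallel `levels`/`codes`/`names` lists — can be sketched over plain lists:

```python
def swap_levels(levels, codes, names, i=-2, j=-1):
    """Swap level i with level j across the three parallel lists,
    returning new lists (the originals are left untouched)."""
    n = len(levels)
    i %= n                     # normalize negative indices such as -2, -1
    j %= n
    new_levels, new_codes, new_names = list(levels), list(codes), list(names)
    for seq in (new_levels, new_codes, new_names):
        seq[i], seq[j] = seq[j], seq[i]
    return new_levels, new_codes, new_names
```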
isInSomeParsingContext
|
function isInSomeParsingContext(): boolean {
// We should be in at least one parsing context, be it SourceElements while parsing
// a SourceFile, or JSDocComment when lazily parsing JSDoc.
Debug.assert(parsingContext, "Missing parsing context");
for (let kind = 0; kind < ParsingContext.Count; kind++) {
if (parsingContext & (1 << kind)) {
if (isListElement(kind, /*inErrorRecovery*/ true) || isListTerminator(kind)) {
return true;
}
}
}
return false;
}
|
Returns true if the current token is a valid list element or a list terminator
for at least one of the currently active parsing contexts. The parser is
asserted to be in at least one context (SourceElements while parsing a
SourceFile, or JSDocComment when lazily parsing JSDoc).
|
typescript
|
src/compiler/parser.ts
| 3,078
|
[] | true
| 5
| 6.88
|
microsoft/TypeScript
| 107,154
|
jsdoc
| false
|
|
supportsEvent
|
protected boolean supportsEvent(
ApplicationListener<?> listener, ResolvableType eventType, @Nullable Class<?> sourceType) {
GenericApplicationListener smartListener = (listener instanceof GenericApplicationListener gal ? gal :
new GenericApplicationListenerAdapter(listener));
return (smartListener.supportsEventType(eventType) && smartListener.supportsSourceType(sourceType));
}
|
Determine whether the given listener supports the given event.
<p>The default implementation detects the {@link SmartApplicationListener}
and {@link GenericApplicationListener} interfaces. In case of a standard
{@link ApplicationListener}, a {@link GenericApplicationListenerAdapter}
will be used to introspect the generically declared type of the target listener.
@param listener the target listener to check
@param eventType the event type to check against
@param sourceType the source type to check against
@return whether the given listener should be included in the candidates
for the given event type
|
java
|
spring-context/src/main/java/org/springframework/context/event/AbstractApplicationEventMulticaster.java
| 393
|
[
"listener",
"eventType",
"sourceType"
] | true
| 3
| 7.44
|
spring-projects/spring-framework
| 59,386
|
javadoc
| false
|
|
asmatrix
|
def asmatrix(data, dtype=None):
"""
Interpret the input as a matrix.
Unlike `matrix`, `asmatrix` does not make a copy if the input is already
a matrix or an ndarray. Equivalent to ``matrix(data, copy=False)``.
Parameters
----------
data : array_like
Input data.
dtype : data-type
Data-type of the output matrix.
Returns
-------
mat : matrix
`data` interpreted as a matrix.
Examples
--------
>>> import numpy as np
>>> x = np.array([[1, 2], [3, 4]])
>>> m = np.asmatrix(x)
>>> x[0,0] = 5
>>> m
matrix([[5, 2],
[3, 4]])
"""
return matrix(data, dtype=dtype, copy=False)
|
Interpret the input as a matrix.
Unlike `matrix`, `asmatrix` does not make a copy if the input is already
a matrix or an ndarray. Equivalent to ``matrix(data, copy=False)``.
Parameters
----------
data : array_like
Input data.
dtype : data-type
Data-type of the output matrix.
Returns
-------
mat : matrix
`data` interpreted as a matrix.
Examples
--------
>>> import numpy as np
>>> x = np.array([[1, 2], [3, 4]])
>>> m = np.asmatrix(x)
>>> x[0,0] = 5
>>> m
matrix([[5, 2],
[3, 4]])
|
python
|
numpy/matrixlib/defmatrix.py
| 37
|
[
"data",
"dtype"
] | false
| 1
| 6.32
|
numpy/numpy
| 31,054
|
numpy
| false
|
|
dot7u
|
private static int dot7u(MemorySegment a, MemorySegment b, int length) {
try {
return (int) JdkVectorLibrary.dot7u$mh.invokeExact(a, b, length);
} catch (Throwable t) {
throw new AssertionError(t);
}
}
|
Computes the dot product of the given unsigned 7-bit int vectors.
@param a address of the first vector
@param b address of the second vector
@param length the vector dimensions, number of 7-bit int elements in the segment
|
java
|
libs/native/src/main/java/org/elasticsearch/nativeaccess/jdk/JdkVectorLibrary.java
| 291
|
[
"a",
"b",
"length"
] | true
| 2
| 6.56
|
elastic/elasticsearch
| 75,680
|
javadoc
| false
|
|
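The native entry point above delegates to a SIMD kernel. Assuming, as the name suggests, that `dot7u` is a dot product over unsigned 7-bit components, a pure-Python semantic reference is just a checked multiply-accumulate:

```python
def dot7u(a, b):
    """Dot product of two equal-length sequences of unsigned 7-bit ints."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    for v in (*a, *b):
        if not 0 <= v <= 127:
            raise ValueError("components must fit in 7 bits (0..127)")
    return sum(x * y for x, y in zip(a, b))
```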
canApplyInference
|
bool canApplyInference(const FlowFunction &Func,
const yaml::bolt::BinaryFunctionProfile &YamlBF,
const uint64_t &MatchedBlocks) {
if (Func.Blocks.size() > opts::StaleMatchingMaxFuncSize)
return false;
if (MatchedBlocks * 100 <
opts::StaleMatchingMinMatchedBlock * YamlBF.Blocks.size())
return false;
// Returns false if the artificial sink block has no predecessors meaning
// there are no exit blocks.
if (Func.Blocks[Func.Blocks.size() - 1].isEntry())
return false;
return true;
}
|
having "unexpected" control flow (e.g., having no sink basic blocks).
|
cpp
|
bolt/lib/Profile/StaleProfileMatching.cpp
| 842
|
[] | true
| 4
| 6.88
|
llvm/llvm-project
| 36,021
|
doxygen
| false
|
|
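The three rejection conditions translate directly; the threshold defaults below are hypothetical stand-ins for the `StaleMatchingMaxFuncSize` and `StaleMatchingMinMatchedBlock` options:

```python
def can_apply_inference(num_blocks, matched_blocks, total_profile_blocks,
                        has_exit_blocks,
                        max_func_size=10_000, min_matched_pct=10):
    """Mirror the three rejection conditions from the C++ above."""
    if num_blocks > max_func_size:
        return False                          # function too large
    if matched_blocks * 100 < min_matched_pct * total_profile_blocks:
        return False                          # too few profile blocks matched
    if not has_exit_blocks:
        return False                          # "unexpected" control flow: no sinks
    return True
```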
take
|
def take(
self,
indices,
*,
allow_fill: bool = False,
fill_value=None,
axis=None,
**kwargs,
) -> Self:
"""
Take elements from the IntervalArray.
Parameters
----------
indices : sequence of integers
Indices to be taken.
allow_fill : bool, default False
How to handle negative values in `indices`.
* False: negative values in `indices` indicate positional indices
from the right (the default). This is similar to
:func:`numpy.take`.
* True: negative values in `indices` indicate
            missing values. These values are set to `fill_value`. Any other
            negative values raise a ``ValueError``.
fill_value : Interval or NA, optional
Fill value to use for NA-indices when `allow_fill` is True.
This may be ``None``, in which case the default NA value for
the type, ``self.dtype.na_value``, is used.
For many ExtensionArrays, there will be two representations of
`fill_value`: a user-facing "boxed" scalar, and a low-level
physical NA value. `fill_value` should be the user-facing version,
and the implementation should handle translating that to the
physical version for processing the take if necessary.
axis : any, default None
Present for compat with IntervalIndex; does nothing.
Returns
-------
IntervalArray
Raises
------
IndexError
When the indices are out of bounds for the array.
ValueError
When `indices` contains negative values other than ``-1``
and `allow_fill` is True.
"""
nv.validate_take((), kwargs)
fill_left = fill_right = fill_value
if allow_fill:
fill_left, fill_right = self._validate_scalar(fill_value)
left_take = take(
self._left, indices, allow_fill=allow_fill, fill_value=fill_left
)
right_take = take(
self._right, indices, allow_fill=allow_fill, fill_value=fill_right
)
return self._shallow_copy(left_take, right_take)
|
Take elements from the IntervalArray.
Parameters
----------
indices : sequence of integers
Indices to be taken.
allow_fill : bool, default False
How to handle negative values in `indices`.
* False: negative values in `indices` indicate positional indices
from the right (the default). This is similar to
:func:`numpy.take`.
* True: negative values in `indices` indicate
missing values. These values are set to `fill_value`. Any other
negative values raise a ``ValueError``.
fill_value : Interval or NA, optional
Fill value to use for NA-indices when `allow_fill` is True.
This may be ``None``, in which case the default NA value for
the type, ``self.dtype.na_value``, is used.
For many ExtensionArrays, there will be two representations of
`fill_value`: a user-facing "boxed" scalar, and a low-level
physical NA value. `fill_value` should be the user-facing version,
and the implementation should handle translating that to the
physical version for processing the take if necessary.
axis : any, default None
Present for compat with IntervalIndex; does nothing.
Returns
-------
IntervalArray
Raises
------
IndexError
When the indices are out of bounds for the array.
ValueError
When `indices` contains negative values other than ``-1``
and `allow_fill` is True.
|
python
|
pandas/core/arrays/interval.py
| 1,197
|
[
"self",
"indices",
"allow_fill",
"fill_value",
"axis"
] |
Self
| true
| 2
| 6.48
|
pandas-dev/pandas
| 47,362
|
numpy
| false
|
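The `allow_fill` semantics described above (negative indices mean from-the-end by default; with `allow_fill`, only `-1` is legal and marks a missing value) can be sketched over a plain list:

```python
def take(values, indices, allow_fill=False, fill_value=None):
    """Positional take mirroring the ExtensionArray contract described above."""
    out = []
    for idx in indices:
        if allow_fill and idx == -1:
            out.append(fill_value)        # -1 marks a missing value
        elif allow_fill and idx < -1:
            raise ValueError("indices with allow_fill must be >= -1")
        else:
            out.append(values[idx])       # Python indexing handles from-the-end
    return out
```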
endWaitingFor
|
@GuardedBy("lock")
private void endWaitingFor(Guard guard) {
int waiters = --guard.waiterCount;
if (waiters == 0) {
// unlink guard from activeGuards
for (Guard p = activeGuards, pred = null; ; pred = p, p = p.next) {
if (p == guard) {
if (pred == null) {
activeGuards = p.next;
} else {
pred.next = p.next;
}
p.next = null; // help GC
break;
}
}
}
}
|
Records that the current thread is no longer waiting on the specified guard.
|
java
|
android/guava/src/com/google/common/util/concurrent/Monitor.java
| 1,163
|
[
"guard"
] |
void
| true
| 5
| 6
|
google/guava
| 51,352
|
javadoc
| false
|
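The unlink loop — walk the list with a trailing `pred`, splice the node out, and null its `next` to help GC — translates almost line for line:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def unlink(head, target):
    """Remove `target` from the singly linked list starting at `head`;
    returns the (possibly new) head. Mirrors Monitor.endWaitingFor."""
    pred, node = None, head
    while node is not None:
        if node is target:
            if pred is None:
                head = node.next          # target was the head
            else:
                pred.next = node.next     # splice around it
            node.next = None              # help GC, as in the Java code
            return head
        pred, node = node, node.next
    return head
```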
type
|
@Nullable String type();
|
The key store type, for example {@code JKS} or {@code PKCS11}. A {@code null} value
will use {@link KeyStore#getDefaultType()}.
@return the key store type
|
java
|
core/spring-boot/src/main/java/org/springframework/boot/ssl/pem/PemSslStore.java
| 45
|
[] |
String
| true
| 1
| 6.32
|
spring-projects/spring-boot
| 79,428
|
javadoc
| false
|
nodeState
|
private NodeConnectionState nodeState(String id) {
NodeConnectionState state = this.nodeState.get(id);
if (state == null)
throw new IllegalStateException("No entry found for connection " + id);
return state;
}
|
Get the state of a given node.
@param id the connection to fetch the state for
|
java
|
clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
| 407
|
[
"id"
] |
NodeConnectionState
| true
| 2
| 6.72
|
apache/kafka
| 31,560
|
javadoc
| false
|
validate_rc_by_pmc
|
def validate_rc_by_pmc(
distribution: str,
version: str,
task_sdk_version: str | None,
path_to_airflow_svn: Path,
checks: str | None,
):
"""
Validate a release candidate for PMC voting.
This command performs the validation checks required by the PMCs for a release.
Examples:
breeze release-management validate-rc-by-pmc \
--distribution airflow \
--version 3.1.3rc1 \
--task-sdk-version 1.1.3rc1 \
--path-to-airflow-svn ../asf-dist/dev/airflow \
--checks signatures,checksums
"""
airflow_repo_root = Path.cwd()
if not (airflow_repo_root / "airflow-core").exists():
console_print("[red]Error: Must be run from Airflow repository root[/red]")
sys.exit(1)
check_list = None
if checks:
try:
check_list = [CheckType(c.strip()) for c in checks.split(",")]
except ValueError as e:
console_print(f"[red]Invalid check type: {e}[/red]")
console_print(f"Available checks: {', '.join([c.value for c in CheckType])}")
sys.exit(1)
if distribution == "airflow":
validator = AirflowReleaseValidator(
version=version,
path_to_airflow_svn=path_to_airflow_svn,
airflow_repo_root=airflow_repo_root,
task_sdk_version=task_sdk_version,
)
elif distribution == "airflowctl":
console_print("[yellow]airflowctl validation not yet implemented[/yellow]")
sys.exit(1)
elif distribution == "providers":
console_print("[yellow]providers validation not yet implemented[/yellow]")
sys.exit(1)
else:
console_print(f"[red]Unknown distribution: {distribution}[/red]")
sys.exit(1)
if not validator.validate(checks=check_list):
console_print(f"[red]Validation failed for {distribution} {version}[/red]")
sys.exit(1)
|
Validate a release candidate for PMC voting.
This command performs the validation checks required by the PMCs for a release.
Examples:
breeze release-management validate-rc-by-pmc \
--distribution airflow \
--version 3.1.3rc1 \
--task-sdk-version 1.1.3rc1 \
--path-to-airflow-svn ../asf-dist/dev/airflow \
--checks signatures,checksums
|
python
|
dev/breeze/src/airflow_breeze/commands/release_management_validation.py
| 61
|
[
"distribution",
"version",
"task_sdk_version",
"path_to_airflow_svn",
"checks"
] | true
| 8
| 7.84
|
apache/airflow
| 43,597
|
unknown
| false
|
|
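The `--checks` handling (split a comma-separated string into enum members, failing with the list of valid choices on an unknown name) can be sketched; the `CheckType` members here are illustrative, not Breeze's real set:

```python
from enum import Enum

class CheckType(Enum):
    SIGNATURES = "signatures"
    CHECKSUMS = "checksums"
    LICENSES = "licenses"      # hypothetical member for illustration

def parse_checks(checks):
    """Turn 'signatures,checksums' into [CheckType.SIGNATURES, ...];
    raises ValueError naming the valid choices on a bad entry."""
    try:
        return [CheckType(c.strip()) for c in checks.split(",")]
    except ValueError:
        valid = ", ".join(c.value for c in CheckType)
        raise ValueError(f"Invalid check type; available checks: {valid}")
```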
function_name: parseForValidate

function_code:

    private void parseForValidate(String name, Map<String, String> props, Map<String, Object> parsed, Map<String, ConfigValue> configs) {
        if (!configKeys.containsKey(name)) {
            return;
        }
        ConfigKey key = configKeys.get(name);
        ConfigValue config = configs.get(name);
        Object value = null;
        if (props.containsKey(key.name)) {
            try {
                value = parseType(key.name, props.get(key.name), key.type);
            } catch (ConfigException e) {
                config.addErrorMessage(e.getMessage());
            }
        } else if (NO_DEFAULT_VALUE.equals(key.defaultValue)) {
            config.addErrorMessage("Missing required configuration \"" + key.name + "\" which has no default value.");
        } else {
            value = key.defaultValue;
        }
        if (key.validator != null) {
            try {
                key.validator.ensureValid(key.name, value);
            } catch (ConfigException e) {
                config.addErrorMessage(e.getMessage());
            }
        }
        config.value(value);
        parsed.put(name, value);
        for (String dependent : key.dependents) {
            parseForValidate(dependent, props, parsed, configs);
        }
    }

documentation:

    Validate the current configuration values with the configuration definition.
    @param props the current configuration values
    @return List of Config, each Config contains the updated configuration information given
    the current configuration values.

language: java | file_path: clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java | line_number: 633
parameters: [name, props, parsed, configs] | return_type: void | has_type_hints: true
complexity: 7 | quality_score: 7.44 | repo_name: apache/kafka | repo_stars: 31,560 | docstring_style: javadoc | is_async: false

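parseForValidate above records every problem as an error message on the ConfigValue instead of failing fast, then recurses into each key's dependents. A hedged Python sketch of that same pattern; the dict-based config keys, callable validators, and error map are illustrative assumptions, not Kafka's actual API:

```python
# Sentinel standing in for Kafka's NO_DEFAULT_VALUE marker.
NO_DEFAULT = object()


def parse_for_validate(name, keys, props, parsed, errors):
    """Parse one config key, collecting errors rather than raising,
    then recurse into its declared dependents."""
    key = keys.get(name)
    if key is None:
        return
    value = None
    if name in props:
        try:
            value = key["type"](props[name])  # parse raw string to target type
        except (TypeError, ValueError) as e:
            errors.setdefault(name, []).append(str(e))
    elif key.get("default", NO_DEFAULT) is NO_DEFAULT:
        errors.setdefault(name, []).append(
            f'Missing required configuration "{name}" which has no default value.'
        )
    else:
        value = key["default"]
    validator = key.get("validator")
    if validator is not None and value is not None:
        try:
            validator(name, value)
        except ValueError as e:
            errors.setdefault(name, []).append(str(e))
    parsed[name] = value
    for dependent in key.get("dependents", []):
        parse_for_validate(dependent, keys, props, parsed, errors)
```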
function_name: randomPrint

function_code:

    @Deprecated
    public static String randomPrint(final int count) {
        return secure().nextPrint(count);
    }

documentation:

    Creates a random string whose length is the number of characters specified.
    <p>
    Characters will be chosen from the set of characters which match the POSIX [:print:] regular expression character
    class. This class includes all visible ASCII characters and spaces (i.e. anything except control characters).
    </p>
    @param count the length of random string to create.
    @return the random string.
    @throws IllegalArgumentException if {@code count} < 0.
    @since 3.5
    @deprecated Use {@link #nextPrint(int)} from {@link #secure()}, {@link #secureStrong()}, or {@link #insecure()}.

language: java | file_path: src/main/java/org/apache/commons/lang3/RandomStringUtils.java | line_number: 605
parameters: [count] | return_type: String | has_type_hints: true
complexity: 1 | quality_score: 6.48 | repo_name: apache/commons-lang | repo_stars: 2,896 | docstring_style: javadoc | is_async: false

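The POSIX [:print:] class named in the randomPrint javadoc is visible ASCII plus the space character, i.e. code points 0x20 through 0x7E. A hedged Python analogue of the method; `secrets` is used here to mirror the `secure()` generator the deprecation note points to, which is a design choice of this sketch rather than a translation of the library internals:

```python
import secrets

# POSIX [:print:]: space (0x20) through tilde (0x7E), control chars excluded.
PRINTABLE = [chr(c) for c in range(0x20, 0x7F)]


def random_print(count: int) -> str:
    """Return a random string of `count` printable-ASCII characters."""
    if count < 0:
        raise ValueError("count must be >= 0")
    return "".join(secrets.choice(PRINTABLE) for _ in range(count))
```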
function_name: deserialize

function_code:

    def deserialize(cls: type, version: int, data: dict):
        """
        Deserialize a Pydantic class.

        Pydantic models can be serialized into a Python dictionary via `pydantic.main.BaseModel.model_dump`
        and the dictionary can be deserialized through `pydantic.main.BaseModel.model_validate`. This function
        can deserialize arbitrary Pydantic models that are in `allowed_deserialization_classes`.

        :param cls: The actual model class
        :param version: Serialization version (must not exceed __version__)
        :param data: Dictionary with built-in types, typically from model_dump()
        :return: An instance of the actual Pydantic model
        """
        if version > __version__:
            raise TypeError(f"Serialized version {version} is newer than the supported version {__version__}")
        if not is_pydantic_model(cls):
            # no deserializer available
            raise TypeError(f"No deserializer found for {qualname(cls)}")
        # Perform validation-based reconstruction
        return cls.model_validate(data)  # type: ignore

documentation:

    Deserialize a Pydantic class.
    Pydantic models can be serialized into a Python dictionary via `pydantic.main.BaseModel.model_dump`
    and the dictionary can be deserialized through `pydantic.main.BaseModel.model_validate`. This function
    can deserialize arbitrary Pydantic models that are in `allowed_deserialization_classes`.
    :param cls: The actual model class
    :param version: Serialization version (must not exceed __version__)
    :param data: Dictionary with built-in types, typically from model_dump()
    :return: An instance of the actual Pydantic model

language: python | file_path: airflow-core/src/airflow/serialization/serializers/pydantic.py | line_number: 54
parameters: [cls, version, data] | has_type_hints: true
complexity: 3 | quality_score: 7.44 | repo_name: apache/airflow | repo_stars: 43,597 | docstring_style: sphinx | is_async: false

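The deserialize docstring above describes a round trip: `model_dump()` produces a plain dict, `model_validate()` rebuilds an instance, and a version check guards the whole thing. A hedged, self-contained sketch of that contract; `FakeModel`, `SUPPORTED_VERSION`, and the `hasattr` check are stand-ins so the example runs without pydantic installed:

```python
SUPPORTED_VERSION = 1  # stands in for the serializer module's __version__


class FakeModel:
    """Tiny stand-in exposing pydantic-style model_dump/model_validate."""

    def __init__(self, name: str):
        self.name = name

    def model_dump(self) -> dict:
        return {"name": self.name}

    @classmethod
    def model_validate(cls, data: dict) -> "FakeModel":
        return cls(**data)


def deserialize(cls: type, version: int, data: dict):
    # Refuse payloads written by a newer serializer than we support.
    if version > SUPPORTED_VERSION:
        raise TypeError(
            f"Serialized version {version} is newer than the supported version {SUPPORTED_VERSION}"
        )
    # Stand-in for is_pydantic_model(cls).
    if not hasattr(cls, "model_validate"):
        raise TypeError(f"No deserializer found for {cls.__name__}")
    # Validation-based reconstruction, as in the real function.
    return cls.model_validate(data)
```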