Dataset schema (15 columns; dtype and observed value range per column):

  function_name     string    length 1–57
  function_code     string    length 20–4.99k
  documentation     string    length 50–2k
  language          class     5 values
  file_path         string    length 8–166
  line_number       int32     4–16.7k
  parameters        list      length 0–20
  return_type       string    length 0–131
  has_type_hints    bool      2 classes
  complexity        int32     1–51
  quality_score     float32   6–9.68
  repo_name         class     34 values
  repo_stars        int32     2.9k–242k
  docstring_style   class     7 values
  is_async          bool      2 classes
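The rows below follow this schema. Assuming the records have been loaded as plain Python dicts (the sample rows and the helper below are illustrative, not part of the dataset tooling itself), filtering by column values can be sketched as:

```python
def filter_records(records, language=None, min_quality=None, max_complexity=None):
    """Return records matching the given column constraints (None = no constraint)."""
    out = []
    for rec in records:
        if language is not None and rec["language"] != language:
            continue
        if min_quality is not None and rec["quality_score"] < min_quality:
            continue
        if max_complexity is not None and rec["complexity"] > max_complexity:
            continue
        out.append(rec)
    return out

# Two rows mirroring records shown later in this dump (fields abridged).
sample = [
    {"function_name": "device", "language": "python",
     "complexity": 1, "quality_score": 6.4, "repo_name": "pytorch/pytorch"},
    {"function_name": "maybeBuildRequest", "language": "java",
     "complexity": 10, "quality_score": 8.08, "repo_name": "apache/kafka"},
]

print([r["function_name"] for r in filter_records(sample, language="python")])
# → ['device']
print([r["function_name"] for r in filter_records(sample, min_quality=8.0)])
# → ['maybeBuildRequest']
```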
record: device
  language: python   file_path: torch/_inductor/kernel_inputs.py   line_number: 85
  parameters: ["self"]   return_type: torch.device   has_type_hints: true   is_async: false
  complexity: 1   quality_score: 6.4   docstring_style: unknown
  repo_name: pytorch/pytorch   repo_stars: 96,034
  function_code:
    def device(self) -> torch.device:
        """
        Get the device of the first node.

        Returns:
            The device of the first node
        """
        return self._input_nodes[0].get_device()
  documentation: (identical to the docstring in function_code above)
record: setStoreByValue
  language: java   file_path: spring-context/src/main/java/org/springframework/cache/concurrent/ConcurrentMapCacheManager.java   line_number: 140
  parameters: ["storeByValue"]   return_type: void   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 6   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    public void setStoreByValue(boolean storeByValue) {
        if (storeByValue != this.storeByValue) {
            this.storeByValue = storeByValue;
            // Need to recreate all Cache instances with the new store-by-value configuration...
            recreateCaches();
        }
    }
  documentation:
    Specify whether this cache manager stores a copy of each entry ({@code true})
    or the reference ({@code false}) for all of its caches.
    <p>Default is "false" so that the value itself is stored and no serializable
    contract is required on cached values.
    <p>Note: A change of the store-by-value setting will reset all existing caches,
    if any, to reconfigure them with the new store-by-value requirement.
    @since 4.3
record: maybeRecordDeprecatedPreferredReadReplica
  language: java   file_path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchMetricsManager.java   line_number: 249
  parameters: ["tp", "subscription"]   return_type: void   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 6.24   docstring_style: javadoc
  repo_name: apache/kafka   repo_stars: 31,560
  function_code:
    @Deprecated // To be removed in Kafka 5.0 release.
    private void maybeRecordDeprecatedPreferredReadReplica(TopicPartition tp, SubscriptionState subscription) {
        if (shouldReportDeprecatedMetric(tp.topic())) {
            MetricName metricName = deprecatedPartitionPreferredReadReplicaMetricName(tp);
            metrics.addMetricIfAbsent(
                metricName,
                null,
                (Gauge<Integer>) (config, now) -> subscription.preferredReadReplica(tp, 0L).orElse(-1)
            );
        }
    }
  documentation:
    This method is called by the {@link Fetch fetch} logic before it requests fetches
    in order to update the internal set of metrics that are tracked.
    @param subscription {@link SubscriptionState} that contains the set of assigned partitions
    @see SubscriptionState#assignmentId()
record: toPlainObject
  language: javascript   file_path: lodash.js   line_number: 12,609
  parameters: ["value"]   has_type_hints: false   is_async: false
  complexity: 1   quality_score: 6.4   docstring_style: jsdoc
  repo_name: lodash/lodash   repo_stars: 61,490
  function_code:
    function toPlainObject(value) {
      return copyObject(value, keysIn(value));
    }
  documentation:
    Converts `value` to a plain object flattening inherited enumerable string
    keyed properties of `value` to own properties of the plain object.
    @static @memberOf _ @since 3.0.0 @category Lang
    @param {*} value The value to convert.
    @returns {Object} Returns the converted plain object.
    @example
    function Foo() { this.b = 2; }
    Foo.prototype.c = 3;
    _.assign({ 'a': 1 }, new Foo);                  // => { 'a': 1, 'b': 2 }
    _.assign({ 'a': 1 }, _.toPlainObject(new Foo)); // => { 'a': 1, 'b': 2, 'c': 3 }
record: setUpEntry
  language: java   file_path: loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/AbstractJarWriter.java   line_number: 112
  parameters: ["jarFile", "entry", "unpackHandler"]   return_type: void   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 6.88   docstring_style: javadoc
  repo_name: spring-projects/spring-boot   repo_stars: 79,428
  function_code:
    private void setUpEntry(JarFile jarFile, JarArchiveEntry entry, UnpackHandler unpackHandler) throws IOException {
        try (ZipHeaderPeekInputStream inputStream = new ZipHeaderPeekInputStream(jarFile.getInputStream(entry))) {
            if (inputStream.hasZipHeader() && entry.getMethod() != ZipEntry.STORED) {
                new StoredEntryPreparator(inputStream, unpackHandler.requiresUnpack(entry.getName()))
                    .prepareStoredEntry(entry);
            }
            else {
                entry.setCompressedSize(-1);
            }
        }
    }
  documentation:
    Write the specified manifest.
    @param manifest the manifest to write
    @throws IOException if the manifest cannot be written
record: serializeParameterTypesOfNode
  language: typescript   file_path: src/compiler/transformers/typeSerializer.ts   line_number: 200
  parameters: ["node", "container"]   has_type_hints: true   is_async: false
  complexity: 11   quality_score: 7.04   docstring_style: jsdoc
  repo_name: microsoft/TypeScript   repo_stars: 107,154
  function_code:
    function serializeParameterTypesOfNode(node: Node, container: ClassLikeDeclaration): ArrayLiteralExpression {
        const valueDeclaration = isClassLike(node)
            ? getFirstConstructorWithBody(node)
            : isFunctionLike(node) && nodeIsPresent((node as FunctionLikeDeclaration).body)
            ? node
            : undefined;
        const expressions: SerializedTypeNode[] = [];
        if (valueDeclaration) {
            const parameters = getParametersOfDecoratedDeclaration(valueDeclaration, container);
            const numParameters = parameters.length;
            for (let i = 0; i < numParameters; i++) {
                const parameter = parameters[i];
                if (i === 0 && isIdentifier(parameter.name) && parameter.name.escapedText === "this") {
                    continue;
                }
                if (parameter.dotDotDotToken) {
                    expressions.push(serializeTypeNode(getRestParameterElementType(parameter.type)));
                }
                else {
                    expressions.push(serializeTypeOfNode(parameter, container));
                }
            }
        }
        return factory.createArrayLiteralExpression(expressions);
    }
  documentation:
    Serializes the type of a node for use with decorator type metadata.
    @param node The node that should have its type serialized.
record: serialize
  language: java   file_path: src/main/java/org/apache/commons/lang3/SerializationUtils.java   line_number: 235
  parameters: ["obj"]   has_type_hints: true   is_async: false
  complexity: 1   quality_score: 6.24   docstring_style: javadoc
  repo_name: apache/commons-lang   repo_stars: 2,896
  function_code:
    public static byte[] serialize(final Serializable obj) {
        final ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
        serialize(obj, baos);
        return baos.toByteArray();
    }
  documentation:
    Serializes an {@link Object} to a byte array for storage/serialization.
    @param obj the object to serialize to bytes.
    @return a byte[] with the converted Serializable.
    @throws SerializationException (runtime) if the serialization fails.
record: constant
  language: javascript   file_path: lodash.js   line_number: 15,503
  parameters: ["value"]   has_type_hints: false   is_async: false
  complexity: 1   quality_score: 6.32   docstring_style: jsdoc
  repo_name: lodash/lodash   repo_stars: 61,490
  function_code:
    function constant(value) {
      return function() {
        return value;
      };
    }
  documentation:
    Creates a function that returns `value`.
    @static @memberOf _ @since 2.4.0 @category Util
    @param {*} value The value to return from the new function.
    @returns {Function} Returns the new constant function.
    @example
    var objects = _.times(2, _.constant({ 'a': 1 }));
    console.log(objects);                   // => [{ 'a': 1 }, { 'a': 1 }]
    console.log(objects[0] === objects[1]); // => true
record: classWrapperStatementVisitor
  language: typescript   file_path: src/compiler/transformers/es2015.ts   line_number: 609
  parameters: ["node"]   has_type_hints: true   is_async: false
  complexity: 4   quality_score: 6.24   docstring_style: jsdoc
  repo_name: microsoft/TypeScript   repo_stars: 107,154
  function_code:
    function classWrapperStatementVisitor(node: Node): VisitResult<Node | undefined> {
        if (shouldVisitNode(node)) {
            const original = getOriginalNode(node);
            if (isPropertyDeclaration(original) && hasStaticModifier(original)) {
                const ancestorFacts = enterSubtree(
                    HierarchyFacts.StaticInitializerExcludes,
                    HierarchyFacts.StaticInitializerIncludes,
                );
                const result = visitorWorker(node, /*expressionResultIsUnused*/ false);
                exitSubtree(ancestorFacts, HierarchyFacts.FunctionSubtreeExcludes, HierarchyFacts.None);
                return result;
            }
            return visitorWorker(node, /*expressionResultIsUnused*/ false);
        }
        return node;
    }
  documentation:
    Restores the `HierarchyFacts` for this node's ancestor after visiting this node's
    subtree, propagating specific facts from the subtree.
    @param ancestorFacts The `HierarchyFacts` of the ancestor to restore after visiting the subtree.
    @param excludeFacts The existing `HierarchyFacts` of the subtree that should not be propagated.
    @param includeFacts The new `HierarchyFacts` of the subtree that should be propagated.
record: hashCode
  language: java   file_path: spring-aop/src/main/java/org/springframework/aop/support/AbstractRegexpMethodPointcut.java   line_number: 205
  parameters: []   has_type_hints: true   is_async: false
  complexity: 1   quality_score: 6.72   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    @Override
    public int hashCode() {
        int result = 27;
        for (String pattern : this.patterns) {
            result = 13 * result + pattern.hashCode();
        }
        for (String excludedPattern : this.excludedPatterns) {
            result = 13 * result + excludedPattern.hashCode();
        }
        return result;
    }
  documentation:
    Does the exclusion pattern at the given index match the given String?
    @param pattern the {@code String} pattern to match
    @param patternIndex index of pattern (starting from 0)
    @return {@code true} if there is a match, {@code false} otherwise
record: maybeBuildRequest
  language: java   file_path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareConsumeRequestManager.java   line_number: 429
  parameters: ["acknowledgeRequestState", "currentTimeMs", "onCommitAsync", "isAsyncSent"]   has_type_hints: true   is_async: false
  complexity: 10   quality_score: 8.08   docstring_style: javadoc
  repo_name: apache/kafka   repo_stars: 31,560
  function_code:
    private Optional<UnsentRequest> maybeBuildRequest(AcknowledgeRequestState acknowledgeRequestState,
                                                      long currentTimeMs,
                                                      boolean onCommitAsync,
                                                      AtomicBoolean isAsyncSent) {
        boolean asyncSent = true;
        try {
            if (acknowledgeRequestState == null
                    || (!acknowledgeRequestState.isCloseRequest() && acknowledgeRequestState.isEmpty())
                    || (acknowledgeRequestState.isCloseRequest() && acknowledgeRequestState.isProcessed)) {
                return Optional.empty();
            }

            if (acknowledgeRequestState.maybeExpire()) {
                // Fill in TimeoutException
                for (TopicIdPartition tip : acknowledgeRequestState.incompleteAcknowledgements.keySet()) {
                    metricsManager.recordFailedAcknowledgements(acknowledgeRequestState.getIncompleteAcknowledgementsCount(tip));
                    acknowledgeRequestState.handleAcknowledgeTimedOut(tip);
                }
                acknowledgeRequestState.incompleteAcknowledgements.clear();
                // Reset timer for any future processing on the same request state.
                acknowledgeRequestState.maybeResetTimerAndRequestState();
                return Optional.empty();
            }

            if (!acknowledgeRequestState.canSendRequest(currentTimeMs)) {
                // We wait for the backoff before we can send this request.
                asyncSent = false;
                return Optional.empty();
            }

            UnsentRequest request = acknowledgeRequestState.buildRequest();
            if (request == null) {
                asyncSent = false;
                return Optional.empty();
            }

            acknowledgeRequestState.onSendAttempt(currentTimeMs);
            return Optional.of(request);
        } finally {
            if (onCommitAsync) {
                isAsyncSent.set(asyncSent);
            }
        }
    }
  documentation:
    @param acknowledgeRequestState Contains the acknowledgements to be sent.
    @param currentTimeMs The current time in ms.
    @param onCommitAsync Boolean to denote if the acknowledgements came from a commitAsync or not.
    @param isAsyncSent Boolean to indicate if the async request has been sent.
    @return Returns the request if it was built.
record: matchesClassCastMessage
  language: java   file_path: spring-context/src/main/java/org/springframework/context/event/SimpleApplicationEventMulticaster.java   line_number: 204
  parameters: ["classCastMessage", "eventClass"]   has_type_hints: true   is_async: false
  complexity: 5   quality_score: 6.56   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    private boolean matchesClassCastMessage(String classCastMessage, Class<?> eventClass) {
        // On Java 8, the message starts with the class name: "java.lang.String cannot be cast..."
        if (classCastMessage.startsWith(eventClass.getName())) {
            return true;
        }
        // On Java 11, the message starts with "class ..." a.k.a. Class.toString()
        if (classCastMessage.startsWith(eventClass.toString())) {
            return true;
        }
        // On Java 9, the message used to contain the module name: "java.base/java.lang.String cannot be cast..."
        int moduleSeparatorIndex = classCastMessage.indexOf('/');
        if (moduleSeparatorIndex != -1 && classCastMessage.startsWith(eventClass.getName(), moduleSeparatorIndex + 1)) {
            return true;
        }
        // Assuming an unrelated class cast failure...
        return false;
    }
  documentation:
    Invoke the given listener with the given event.
    @param listener the ApplicationListener to invoke
    @param event the current event to propagate
    @since 4.1
record: getImports
  language: java   file_path: spring-context/src/main/java/org/springframework/context/annotation/ConfigurationClassParser.java   line_number: 929
  parameters: []   has_type_hints: true   is_async: false
  complexity: 1   quality_score: 6.88   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    Iterable<Group.Entry> getImports() {
        for (DeferredImportSelectorHolder deferredImport : this.deferredImports) {
            this.group.process(deferredImport.getConfigurationClass().getMetadata(),
                    deferredImport.getImportSelector());
        }
        return this.group.selectImports();
    }
  documentation:
    Return the imports defined by the group.
    @return each import with its associated configuration class
record: update_flow_filter
  language: python   file_path: providers/amazon/src/airflow/providers/amazon/aws/hooks/appflow.py   line_number: 89
  parameters: ["self", "flow_name", "filter_tasks", "set_trigger_ondemand"]   return_type: None   has_type_hints: true   is_async: false
  complexity: 6   quality_score: 8.24   docstring_style: sphinx
  repo_name: apache/airflow   repo_stars: 43,597
  function_code:
    def update_flow_filter(self, flow_name: str, filter_tasks, set_trigger_ondemand: bool = False) -> None:
        """
        Update the flow task filter; all filters will be removed if an empty array is passed to filter_tasks.

        :param flow_name: The flow name
        :param filter_tasks: List flow tasks to be added
        :param set_trigger_ondemand: If True, set the trigger to on-demand; otherwise, keep the trigger as is
        :return: None
        """
        response = self.conn.describe_flow(flowName=flow_name)
        connector_type = response["sourceFlowConfig"]["connectorType"]

        tasks = []

        # cleanup old filter tasks
        for task in response["tasks"]:
            if (
                task["taskType"] == "Filter"
                and task.get("connectorOperator", {}).get(connector_type) != "PROJECTION"
            ):
                self.log.info("Removing task: %s", task)
            else:
                tasks.append(task)  # List of non-filter tasks

        tasks += filter_tasks  # Add the new filter tasks

        if set_trigger_ondemand:
            # Clean up attribute to force on-demand trigger
            del response["triggerConfig"]["triggerProperties"]

        self.conn.update_flow(
            flowName=response["flowName"],
            destinationFlowConfigList=response["destinationFlowConfigList"],
            sourceFlowConfig=response["sourceFlowConfig"],
            triggerConfig=response["triggerConfig"],
            description=response.get("description", "Flow description."),
            tasks=tasks,
        )
  documentation: (identical to the docstring in function_code above)
record: getResourcePatternResolver
  language: java   file_path: spring-context/src/main/java/org/springframework/context/annotation/ClassPathScanningCandidateComponentProvider.java   line_number: 278
  parameters: []   return_type: ResourcePatternResolver   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 6.08   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    private ResourcePatternResolver getResourcePatternResolver() {
        if (this.resourcePatternResolver == null) {
            this.resourcePatternResolver = new PathMatchingResourcePatternResolver();
        }
        return this.resourcePatternResolver;
    }
  documentation:
    Return the ResourceLoader that this component provider uses.
record: find
  language: java   file_path: core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionEvaluationReport.java   line_number: 169
  parameters: ["beanFactory"]   return_type: ConditionEvaluationReport   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 7.28   docstring_style: javadoc
  repo_name: spring-projects/spring-boot   repo_stars: 79,428
  function_code:
    public static @Nullable ConditionEvaluationReport find(BeanFactory beanFactory) {
        if (beanFactory instanceof ConfigurableListableBeanFactory) {
            return ConditionEvaluationReport.get((ConfigurableListableBeanFactory) beanFactory);
        }
        return null;
    }
  documentation:
    Attempt to find the {@link ConditionEvaluationReport} for the specified bean factory.
    @param beanFactory the bean factory (may be {@code null})
    @return the {@link ConditionEvaluationReport} or {@code null}
record: instantiate
  language: java   file_path: core/spring-boot/src/main/java/org/springframework/boot/util/Instantiator.java   line_number: 207
  parameters: ["type"]   return_type: T   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 8.24   docstring_style: javadoc
  repo_name: spring-projects/spring-boot   repo_stars: 79,428
  function_code:
    @SuppressWarnings("unchecked")
    private T instantiate(Class<?> type) throws Exception {
        Constructor<?>[] constructors = type.getDeclaredConstructors();
        Arrays.sort(constructors, CONSTRUCTOR_COMPARATOR);
        for (Constructor<?> constructor : constructors) {
            Object[] args = getArgs(constructor.getParameterTypes());
            if (args != null) {
                ReflectionUtils.makeAccessible(constructor);
                return (T) constructor.newInstance(args);
            }
        }
        throw new IllegalAccessException("Class [" + type.getName() + "] has no suitable constructor");
    }
  documentation:
    Get an injectable argument instance for the given type. This method can be used
    when manually instantiating an object without reflection.
    @param <A> the argument type
    @param type the argument type
    @return the argument to inject or {@code null}
    @since 3.4.0
record: k
  language: java   file_path: libs/tdigest/src/main/java/org/elasticsearch/tdigest/ScaleFunction.java   line_number: 497
  parameters: ["q", "compression", "n"]   has_type_hints: true   is_async: false
  complexity: 1   quality_score: 6.32   docstring_style: javadoc
  repo_name: elastic/elasticsearch   repo_stars: 75,680
  function_code:
    public abstract double k(double q, double compression, double n);
  documentation:
    Converts a quantile to the k-scale. The total number of points is also provided
    so that a normalizing function can be computed if necessary.
    @param q The quantile
    @param compression Also known as delta in literature on the t-digest
    @param n The total number of samples
    @return The corresponding value of k
record: min
  language: python   file_path: pandas/core/indexes/base.py   line_number: 7,501
  parameters: ["self", "axis", "skipna"]   has_type_hints: true   is_async: false
  complexity: 11   quality_score: 8.48   docstring_style: numpy
  repo_name: pandas-dev/pandas   repo_stars: 47,362
  function_code:
    def min(self, axis: AxisInt | None = None, skipna: bool = True, *args, **kwargs):
        """
        Return the minimum value of the Index.

        Parameters
        ----------
        axis : {None}
            Dummy argument for consistency with Series.
        skipna : bool, default True
            Exclude NA/null values when showing the result.
        *args, **kwargs
            Additional arguments and keywords for compatibility with NumPy.

        Returns
        -------
        scalar
            Minimum value.

        See Also
        --------
        Index.max : Return the maximum value of the object.
        Series.min : Return the minimum value in a Series.
        DataFrame.min : Return the minimum values in a DataFrame.

        Examples
        --------
        >>> idx = pd.Index([3, 2, 1])
        >>> idx.min()
        1

        >>> idx = pd.Index(["c", "b", "a"])
        >>> idx.min()
        'a'

        For a MultiIndex, the minimum is determined lexicographically.

        >>> idx = pd.MultiIndex.from_product([("a", "b"), (2, 1)])
        >>> idx.min()
        ('a', 1)
        """
        nv.validate_min(args, kwargs)
        nv.validate_minmax_axis(axis)

        if not len(self):
            return self._na_value

        if len(self) and self.is_monotonic_increasing:
            # quick check
            first = self[0]
            if not isna(first):
                return maybe_unbox_numpy_scalar(first)

        if not self._is_multi and self.hasnans:
            # Take advantage of cache
            mask = self._isnan
            if not skipna or mask.all():
                return self._na_value

        if not self._is_multi and not isinstance(self._values, np.ndarray):
            return self._values._reduce(name="min", skipna=skipna)

        return maybe_unbox_numpy_scalar(nanops.nanmin(self._values, skipna=skipna))
  documentation: (identical to the docstring in function_code above)
record: removeDefaultRootHandler
  language: java   file_path: core/spring-boot/src/main/java/org/springframework/boot/logging/log4j2/Log4J2LoggingSystem.java   line_number: 224
  parameters: []   return_type: void   has_type_hints: true   is_async: false
  complexity: 4   quality_score: 7.44   docstring_style: javadoc
  repo_name: spring-projects/spring-boot   repo_stars: 79,428
  function_code:
    private void removeDefaultRootHandler() {
        try {
            java.util.logging.Logger rootLogger = java.util.logging.LogManager.getLogManager().getLogger("");
            Handler[] handlers = rootLogger.getHandlers();
            if (handlers.length == 1 && handlers[0] instanceof ConsoleHandler) {
                rootLogger.removeHandler(handlers[0]);
            }
        } catch (Throwable ex) {
            // Ignore and continue
        }
    }
  documentation:
    Return the configuration location. The result may be:
    <ul>
    <li>{@code null}: if DefaultConfiguration is used (no explicit config loaded)</li>
    <li>A file path: if provided explicitly by the user</li>
    <li>A URI: if loaded from the classpath default or a custom location</li>
    </ul>
    @param configuration the source configuration
    @return the config location or {@code null}
record: view_url_template
  language: python   file_path: providers/amazon/src/airflow/providers/amazon/aws/bundles/s3.py   line_number: 148
  parameters: ["self"]   return_type: str | None   has_type_hints: true   is_async: false
  complexity: 6   quality_score: 6   docstring_style: unknown
  repo_name: apache/airflow   repo_stars: 43,597
  function_code:
    def view_url_template(self) -> str | None:
        """Return a URL for viewing the DAGs in S3. Currently, versioning is not supported."""
        if self.version:
            raise AirflowException("S3 url with version is not supported")
        if hasattr(self, "_view_url_template") and self._view_url_template:
            # Because we use this method in the view_url method, we need to handle
            # backward compatibility for Airflow versions that doesn't have the
            # _view_url_template attribute. Should be removed when we drop support for Airflow 3.0
            return self._view_url_template
        # https://<bucket-name>.s3.<region>.amazonaws.com/<object-key>
        url = f"https://{self.bucket_name}.s3"
        if self.s3_hook.region_name:
            url += f".{self.s3_hook.region_name}"
        url += ".amazonaws.com"
        if self.prefix:
            url += f"/{self.prefix}"
        return url
  documentation: (identical to the docstring in function_code above)
record: isActive
  language: java   file_path: core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataEnvironmentContributor.java   line_number: 136
  parameters: ["activationContext"]   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 7.28   docstring_style: javadoc
  repo_name: spring-projects/spring-boot   repo_stars: 79,428
  function_code:
    boolean isActive(@Nullable ConfigDataActivationContext activationContext) {
        if (this.kind == Kind.UNBOUND_IMPORT) {
            return false;
        }
        return this.properties == null || this.properties.isActive(activationContext);
    }
  documentation:
    Return if this contributor is currently active.
    @param activationContext the activation context
    @return if the contributor is active
record: replace
  language: java   file_path: src/main/java/org/apache/commons/lang3/text/StrSubstitutor.java   line_number: 690
  parameters: ["source"]   return_type: String   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 8.24   docstring_style: javadoc
  repo_name: apache/commons-lang   repo_stars: 2,896
  function_code:
    public String replace(final StringBuffer source) {
        if (source == null) {
            return null;
        }
        final StrBuilder buf = new StrBuilder(source.length()).append(source);
        substitute(buf, 0, buf.length());
        return buf.toString();
    }
  documentation:
    Replaces all the occurrences of variables with their matching values from the
    resolver using the given source buffer as a template. The buffer is not altered
    by this method.
    @param source the buffer to use as a template, not changed, null returns null.
    @return the result of the replace operation.
record: onSuppressedException
  language: java   file_path: spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultSingletonBeanRegistry.java   line_number: 471
  parameters: ["ex"]   return_type: void   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 6.08   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    protected void onSuppressedException(Exception ex) {
        if (this.suppressedExceptions != null && this.suppressedExceptions.size() < SUPPRESSED_EXCEPTIONS_LIMIT) {
            this.suppressedExceptions.add(ex);
        }
    }
  documentation:
    Register an exception that happened to get suppressed during the creation of a
    singleton bean instance, for example, a temporary circular reference resolution problem.
    <p>The default implementation preserves any given exception in this registry's
    collection of suppressed exceptions, up to a limit of 100 exceptions, adding them
    as related causes to an eventual top-level {@link BeanCreationException}.
    @param ex the Exception to register
    @see BeanCreationException#getRelatedCauses()
record: __init__
  language: python   file_path: pandas/core/indexers/objects.py   line_number: 533
  parameters: ["self", "index_array", "window_size", "groupby_indices", "window_indexer", "indexer_kwargs"]   return_type: None   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 6.24   docstring_style: numpy
  repo_name: pandas-dev/pandas   repo_stars: 47,362
  function_code:
    def __init__(
        self,
        index_array: np.ndarray | None = None,
        window_size: int | BaseIndexer = 0,
        groupby_indices: dict | None = None,
        window_indexer: type[BaseIndexer] = BaseIndexer,
        indexer_kwargs: dict | None = None,
        **kwargs,
    ) -> None:
        """
        Parameters
        ----------
        index_array : np.ndarray or None
            np.ndarray of the index of the original object that we are performing
            a chained groupby operation over. This index has been pre-sorted
            relative to the groups
        window_size : int or BaseIndexer
            window size during the windowing operation
        groupby_indices : dict or None
            dict of {group label: [positional index of rows belonging to the group]}
        window_indexer : BaseIndexer
            BaseIndexer class determining the start and end bounds of each group
        indexer_kwargs : dict or None
            Custom kwargs to be passed to window_indexer
        **kwargs :
            keyword arguments that will be available when get_window_bounds is called
        """
        self.groupby_indices = groupby_indices or {}
        self.window_indexer = window_indexer
        self.indexer_kwargs = indexer_kwargs.copy() if indexer_kwargs else {}
        super().__init__(
            index_array=index_array,
            window_size=self.indexer_kwargs.pop("window_size", window_size),
            **kwargs,
        )
  documentation: (identical to the docstring in function_code above)
record: destroySingleton
  language: java   file_path: spring-beans/src/main/java/org/springframework/beans/factory/support/DefaultSingletonBeanRegistry.java   line_number: 738
  parameters: ["beanName"]   return_type: void   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 6.72   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    public void destroySingleton(String beanName) {
        // Destroy the corresponding DisposableBean instance.
        // This also triggers the destruction of dependent beans.
        DisposableBean disposableBean;
        synchronized (this.disposableBeans) {
            disposableBean = this.disposableBeans.remove(beanName);
        }
        destroyBean(beanName, disposableBean);

        // destroySingletons() removes all singleton instances at the end,
        // leniently tolerating late retrieval during the shutdown phase.
        if (!this.singletonsCurrentlyInDestruction) {
            // For an individual destruction, remove the registered instance now.
            // As of 6.2, this happens after the current bean's destruction step,
            // allowing for late bean retrieval by on-demand suppliers etc.
            if (this.currentCreationThreads.get(beanName) == Thread.currentThread()) {
                // Local remove after failed creation step -> without singleton lock
                // since bean creation may have happened leniently without any lock.
                removeSingleton(beanName);
            }
            else {
                this.singletonLock.lock();
                try {
                    removeSingleton(beanName);
                }
                finally {
                    this.singletonLock.unlock();
                }
            }
        }
    }
  documentation:
    Destroy the given bean. Delegates to {@code destroyBean} if a corresponding
    disposable bean instance is found.
    @param beanName the name of the bean
    @see #destroyBean
record: useStateLike
  language: typescript   file_path: code/core/src/preview-api/modules/addons/hooks.ts   line_number: 382
  parameters: ["name", "initialState"]   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 8.48   docstring_style: jsdoc
  repo_name: storybookjs/storybook   repo_stars: 88,865
  function_code:
    function useStateLike<S>(
      name: string,
      initialState: (() => S) | S
    ): [S, (update: ((prevState: S) => S) | S) => void] {
      const stateRef = useRefLike(
        name,
        // @ts-expect-error S type should never be function, but there's no way to tell that to TypeScript
        typeof initialState === 'function' ? initialState() : initialState
      );
      const setState = (update: ((prevState: S) => S) | S) => {
        // @ts-expect-error S type should never be function, but there's no way to tell that to TypeScript
        stateRef.current = typeof update === 'function' ? update(stateRef.current) : update;
        triggerUpdate();
      };
      return [stateRef.current, setState];
    }
  documentation:
    Returns a mutable ref object.
    @example
    ```ts
    const ref = useRef(0);
    ref.current = 1;
    ```
    @template T The type of the ref object.
    @param {T} initialValue The initial value of the ref object.
    @returns {{ current: T }} The mutable ref object.
record: findCandidateWriteMethods
  language: java   file_path: spring-beans/src/main/java/org/springframework/beans/ExtendedBeanInfo.java   line_number: 132
  parameters: ["methodDescriptors"]   has_type_hints: true   is_async: false
  complexity: 2   quality_score: 6.08   docstring_style: javadoc
  repo_name: spring-projects/spring-framework   repo_stars: 59,386
  function_code:
    private List<Method> findCandidateWriteMethods(MethodDescriptor[] methodDescriptors) {
        List<Method> matches = new ArrayList<>();
        for (MethodDescriptor methodDescriptor : methodDescriptors) {
            Method method = methodDescriptor.getMethod();
            if (isCandidateWriteMethod(method)) {
                matches.add(method);
            }
        }
        // Sort non-void returning write methods to guard against the ill effects of
        // non-deterministic sorting of methods returned from Class#getMethods.
        // For historical reasons, the natural sort order is reversed.
        // See https://github.com/spring-projects/spring-framework/issues/14744.
        matches.sort(Comparator.comparing(Method::toString).reversed());
        return matches;
    }
  documentation:
    Wrap the given {@link BeanInfo} instance; copy all its existing property descriptors
    locally, wrapping each in a custom {@link SimpleIndexedPropertyDescriptor indexed} or
    {@link SimplePropertyDescriptor non-indexed} {@code PropertyDescriptor} variant that
    bypasses default JDK weak/soft reference management; then search through its method
    descriptors to find any non-void returning write methods and update or create the
    corresponding {@link PropertyDescriptor} for each one found.
    @param delegate the wrapped {@code BeanInfo}, which is never modified
    @see #getPropertyDescriptors()
record: subarray
  language: java   file_path: src/main/java/org/apache/commons/lang3/ArrayUtils.java   line_number: 7,924
  parameters: ["array", "startIndexInclusive", "endIndexExclusive"]   has_type_hints: true   is_async: false
  complexity: 3   quality_score: 7.6   docstring_style: javadoc
  repo_name: apache/commons-lang   repo_stars: 2,896
  function_code:
    public static long[] subarray(final long[] array, int startIndexInclusive, int endIndexExclusive) {
        if (array == null) {
            return null;
        }
        startIndexInclusive = max0(startIndexInclusive);
        endIndexExclusive = Math.min(endIndexExclusive, array.length);
        final int newSize = endIndexExclusive - startIndexInclusive;
        if (newSize <= 0) {
            return EMPTY_LONG_ARRAY;
        }
        return arraycopy(array, startIndexInclusive, 0, newSize, long[]::new);
    }
  documentation:
    Produces a new {@code long} array containing the elements between the start and end indices.
    <p>The start index is inclusive, the end index exclusive. Null array input produces null output.</p>
    @param array the input array.
    @param startIndexInclusive the starting index. Undervalue (&lt;0) is promoted to 0,
        overvalue (&gt;array.length) results in an empty array.
    @param endIndexExclusive elements up to endIndex-1 are present in the returned subarray.
        Undervalue (&lt; startIndex) produces empty array, overvalue (&gt;array.length) is
        demoted to array length.
    @return a new array containing the elements between the start and end indices.
    @since 2.1
    @see Arrays#copyOfRange(long[], int, int)
record: as_sql
  language: python   file_path: django/db/models/expressions.py   line_number: 225
  parameters: ["self", "compiler", "connection"]   has_type_hints: false   is_async: false
  complexity: 1   quality_score: 6   docstring_style: google
  repo_name: django/django   repo_stars: 86,204
  function_code:
    def as_sql(self, compiler, connection):
        """
        Responsible for returning a (sql, [params]) tuple to be included in
        the current query.

        Different backends can provide their own implementation, by providing
        an `as_{vendor}` method and patching the Expression:

        ```
        def override_as_sql(self, compiler, connection):
            # custom logic
            return super().as_sql(compiler, connection)
        setattr(Expression, 'as_' + connection.vendor, override_as_sql)
        ```

        Arguments:
         * compiler: the query compiler responsible for generating the query.
           Must have a compile method, returning a (sql, [params]) tuple.
           Calling compiler(value) will return a quoted `value`.
         * connection: the database connection used for the current query.

        Return: (sql, params)
          Where `sql` is a string containing ordered sql parameters to be
          replaced with the elements of the list `params`.
        """
        raise NotImplementedError("Subclasses must implement as_sql()")
  documentation: (identical to the docstring in function_code above)
record: get_root_path
  language: python   file_path: src/flask/helpers.py   line_number: 571
  parameters: ["import_name"]   return_type: str   has_type_hints: true   is_async: false
  complexity: 10   quality_score: 6   docstring_style: unknown
  repo_name: pallets/flask   repo_stars: 70,946
  function_code:
    def get_root_path(import_name: str) -> str:
        """Find the root path of a package, or the path that contains a
        module. If it cannot be found, returns the current working directory.

        Not to be confused with the value returned by :func:`find_package`.

        :meta private:
        """
        # Module already imported and has a file attribute. Use that first.
        mod = sys.modules.get(import_name)

        if mod is not None and hasattr(mod, "__file__") and mod.__file__ is not None:
            return os.path.dirname(os.path.abspath(mod.__file__))

        # Next attempt: check the loader.
        try:
            spec = importlib.util.find_spec(import_name)

            if spec is None:
                raise ValueError
        except (ImportError, ValueError):
            loader = None
        else:
            loader = spec.loader

        # Loader does not exist or we're referring to an unloaded main
        # module or a main module without path (interactive sessions), go
        # with the current working directory.
        if loader is None:
            return os.getcwd()

        if hasattr(loader, "get_filename"):
            filepath = loader.get_filename(import_name)  # pyright: ignore
        else:
            # Fall back to imports.
            __import__(import_name)
            mod = sys.modules[import_name]
            filepath = getattr(mod, "__file__", None)

            # If we don't have a file path it might be because it is a
            # namespace package. In this case pick the root path from the
            # first module that is contained in the package.
            if filepath is None:
                raise RuntimeError(
                    "No root path can be found for the provided module"
                    f" {import_name!r}. This can happen because the module"
                    " came from an import hook that does not provide file"
                    " name information or because it's a namespace package."
                    " In this case the root path needs to be explicitly"
                    " provided."
                )

        # filepath is import_name.py for a module, or __init__.py for a package.
        return os.path.dirname(os.path.abspath(filepath))  # type: ignore[no-any-return]
  documentation: (identical to the docstring in function_code above)
filter
default ConfigurationPropertySource filter(Predicate<ConfigurationPropertyName> filter) { return new FilteredConfigurationPropertiesSource(this, filter); }
Return a filtered variant of this source, containing only names that match the given {@link Predicate}. @param filter the filter to match @return a filtered {@link ConfigurationPropertySource} instance
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertySource.java
67
[ "filter" ]
ConfigurationPropertySource
true
1
6.16
spring-projects/spring-boot
79,428
javadoc
false
ErrorOrWarningView
function ErrorOrWarningView({ className, badgeClassName, count, message, }: ErrorOrWarningViewProps) { return ( <div className={className}> {count > 1 && <div className={badgeClassName}>{count}</div>} <div className={styles.Message} title={message}> {message} </div> </div> ); }
Copyright (c) Meta Platforms, Inc. and affiliates. This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree. @flow
javascript
packages/react-devtools-shared/src/devtools/views/Components/InspectedElementErrorsAndWarningsTree.js
169
[]
false
2
6.24
facebook/react
241,750
jsdoc
false
isListTerminator
function isListTerminator(kind: ParsingContext): boolean { if (token() === SyntaxKind.EndOfFileToken) { // Being at the end of the file ends all lists. return true; } switch (kind) { case ParsingContext.BlockStatements: case ParsingContext.SwitchClauses: case ParsingContext.TypeMembers: case ParsingContext.ClassMembers: case ParsingContext.EnumMembers: case ParsingContext.ObjectLiteralMembers: case ParsingContext.ObjectBindingElements: case ParsingContext.ImportOrExportSpecifiers: case ParsingContext.ImportAttributes: return token() === SyntaxKind.CloseBraceToken; case ParsingContext.SwitchClauseStatements: return token() === SyntaxKind.CloseBraceToken || token() === SyntaxKind.CaseKeyword || token() === SyntaxKind.DefaultKeyword; case ParsingContext.HeritageClauseElement: return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.ExtendsKeyword || token() === SyntaxKind.ImplementsKeyword; case ParsingContext.VariableDeclarations: return isVariableDeclaratorListTerminator(); case ParsingContext.TypeParameters: // Tokens other than '>' are here for better error recovery return token() === SyntaxKind.GreaterThanToken || token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.ExtendsKeyword || token() === SyntaxKind.ImplementsKeyword; case ParsingContext.ArgumentExpressions: // Tokens other than ')' are here for better error recovery return token() === SyntaxKind.CloseParenToken || token() === SyntaxKind.SemicolonToken; case ParsingContext.ArrayLiteralMembers: case ParsingContext.TupleElementTypes: case ParsingContext.ArrayBindingElements: return token() === SyntaxKind.CloseBracketToken; case ParsingContext.JSDocParameters: case ParsingContext.Parameters: case ParsingContext.RestProperties: // Tokens other than ')' and ']' (the latter for index signatures) are here for better error recovery return token() === SyntaxKind.CloseParenToken || token() === SyntaxKind.CloseBracketToken /*|| token === SyntaxKind.OpenBraceToken*/; case ParsingContext.TypeArguments: // All other tokens should cause the type-argument to terminate except comma token return token() !== SyntaxKind.CommaToken; case ParsingContext.HeritageClauses: return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.CloseBraceToken; case ParsingContext.JsxAttributes: return token() === SyntaxKind.GreaterThanToken || token() === SyntaxKind.SlashToken; case ParsingContext.JsxChildren: return token() === SyntaxKind.LessThanToken && lookAhead(nextTokenIsSlash); default: return false; } }
Reports a diagnostic error for the current token being an invalid name. @param blankDiagnostic Diagnostic to report for the case of the name being blank (matched tokenIfBlankName). @param nameDiagnostic Diagnostic to report for all other cases. @param tokenIfBlankName Current token if the name was invalid for being blank (not provided / skipped).
typescript
src/compiler/parser.ts
3,000
[ "kind" ]
true
15
6.88
microsoft/TypeScript
107,154
jsdoc
false
removeAll
public static long[] removeAll(final long[] array, final int... indices) { return (long[]) removeAll((Object) array, indices); }
Removes the elements at the specified positions from the specified array. All remaining elements are shifted to the left. <p> This method returns a new array with the same elements of the input array except those at the specified positions. The component type of the returned array is always the same as that of the input array. </p> <p> If the input array is {@code null}, an IndexOutOfBoundsException will be thrown, because in that case no valid index can be specified. </p> <pre> ArrayUtils.removeAll([1], 0) = [] ArrayUtils.removeAll([2, 6], 0) = [6] ArrayUtils.removeAll([2, 6], 0, 1) = [] ArrayUtils.removeAll([2, 6, 3], 1, 2) = [2] ArrayUtils.removeAll([2, 6, 3], 0, 2) = [6] ArrayUtils.removeAll([2, 6, 3], 0, 1, 2) = [] </pre> @param array the array to remove the element from, may not be {@code null}. @param indices the positions of the elements to be removed. @return A new array containing the existing elements except those at the specified positions. @throws IndexOutOfBoundsException if any index is out of range (index &lt; 0 || index &gt;= array.length), or if the array is {@code null}. @since 3.0.1
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
5,138
[ "array" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
writeLayerIndex
private void writeLayerIndex(AbstractJarWriter writer) throws IOException { Assert.state(this.layout != null, "'layout' must not be null"); String name = this.layout.getLayersIndexFileLocation(); if (StringUtils.hasLength(name)) { Assert.state(this.layers != null, "'layers' must not be null"); Assert.state(this.layersIndex != null, "'layersIndex' must not be null"); Layer layer = this.layers.getLayer(name); this.layersIndex.add(layer, name); writer.writeEntry(name, this.layersIndex::writeTo); } }
Sets if jarmode jars relevant for the packaging should be automatically included. @param includeRelevantJarModeJars if relevant jars are included
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/Packager.java
253
[ "writer" ]
void
true
2
6.08
spring-projects/spring-boot
79,428
javadoc
false
polymul
def polymul(c1, c2): """ Multiply one polynomial by another. Returns the product of two polynomials `c1` * `c2`. The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2.`` Parameters ---------- c1, c2 : array_like 1-D arrays of coefficients representing a polynomial, relative to the "standard" basis, and ordered from lowest order term to highest. Returns ------- out : ndarray Of the coefficients of their product. See Also -------- polyadd, polysub, polymulx, polydiv, polypow Examples -------- >>> from numpy.polynomial import polynomial as P >>> c1 = (1, 2, 3) >>> c2 = (3, 2, 1) >>> P.polymul(c1, c2) array([ 3., 8., 14., 8., 3.]) """ # c1, c2 are trimmed copies [c1, c2] = pu.as_series([c1, c2]) ret = np.convolve(c1, c2) return pu.trimseq(ret)
Multiply one polynomial by another. Returns the product of two polynomials `c1` * `c2`. The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents the polynomial ``1 + 2*x + 3*x**2.`` Parameters ---------- c1, c2 : array_like 1-D arrays of coefficients representing a polynomial, relative to the "standard" basis, and ordered from lowest order term to highest. Returns ------- out : ndarray Of the coefficients of their product. See Also -------- polyadd, polysub, polymulx, polydiv, polypow Examples -------- >>> from numpy.polynomial import polynomial as P >>> c1 = (1, 2, 3) >>> c2 = (3, 2, 1) >>> P.polymul(c1, c2) array([ 3., 8., 14., 8., 3.])
python
numpy/polynomial/polynomial.py
330
[ "c1", "c2" ]
false
1
6.08
numpy/numpy
31,054
numpy
false
writeBytesToImpl
abstract void writeBytesToImpl(byte[] dest, int offset, int maxLength);
Copies bytes from this hash code into {@code dest}. @param dest the byte array into which the hash code will be written @param offset the start offset in the data @param maxLength the maximum number of bytes to write @return the number of bytes written to {@code dest} @throws IndexOutOfBoundsException if there is not enough room in {@code dest}
java
android/guava/src/com/google/common/hash/HashCode.java
91
[ "dest", "offset", "maxLength" ]
void
true
1
6.48
google/guava
51,352
javadoc
false
make_biclusters
def make_biclusters( shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None, ): """Generate a constant block diagonal structure array for biclustering. Read more in the :ref:`User Guide <sample_generators>`. Parameters ---------- shape : tuple of shape (n_rows, n_cols) The shape of the result. n_clusters : int The number of biclusters. noise : float, default=0.0 The standard deviation of the gaussian noise. minval : float, default=10 Minimum value of a bicluster. maxval : float, default=100 Maximum value of a bicluster. shuffle : bool, default=True Shuffle the samples. random_state : int, RandomState instance or None, default=None Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- X : ndarray of shape `shape` The generated array. rows : ndarray of shape (n_clusters, X.shape[0]) The indicators for cluster membership of each row. cols : ndarray of shape (n_clusters, X.shape[1]) The indicators for cluster membership of each column. See Also -------- make_checkerboard: Generate an array with block checkerboard structure for biclustering. References ---------- .. [1] Dhillon, I. S. (2001, August). Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 269-274). ACM. Examples -------- >>> from sklearn.datasets import make_biclusters >>> data, rows, cols = make_biclusters( ... shape=(10, 20), n_clusters=2, random_state=42 ... ) >>> data.shape (10, 20) >>> rows.shape (2, 10) >>> cols.shape (2, 20) """ generator = check_random_state(random_state) n_rows, n_cols = shape consts = generator.uniform(minval, maxval, n_clusters) # row and column clusters of approximately equal sizes row_sizes = generator.multinomial(n_rows, np.repeat(1.0 / n_clusters, n_clusters)) col_sizes = generator.multinomial(n_cols, np.repeat(1.0 / n_clusters, n_clusters)) row_labels = np.hstack( [np.repeat(val, rep) for val, rep in zip(range(n_clusters), row_sizes)] ) col_labels = np.hstack( [np.repeat(val, rep) for val, rep in zip(range(n_clusters), col_sizes)] ) result = np.zeros(shape, dtype=np.float64) for i in range(n_clusters): selector = np.outer(row_labels == i, col_labels == i) result[selector] += consts[i] if noise > 0: result += generator.normal(scale=noise, size=result.shape) if shuffle: result, row_idx, col_idx = _shuffle(result, random_state) row_labels = row_labels[row_idx] col_labels = col_labels[col_idx] rows = np.vstack([row_labels == c for c in range(n_clusters)]) cols = np.vstack([col_labels == c for c in range(n_clusters)]) return result, rows, cols
Generate a constant block diagonal structure array for biclustering. Read more in the :ref:`User Guide <sample_generators>`. Parameters ---------- shape : tuple of shape (n_rows, n_cols) The shape of the result. n_clusters : int The number of biclusters. noise : float, default=0.0 The standard deviation of the gaussian noise. minval : float, default=10 Minimum value of a bicluster. maxval : float, default=100 Maximum value of a bicluster. shuffle : bool, default=True Shuffle the samples. random_state : int, RandomState instance or None, default=None Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- X : ndarray of shape `shape` The generated array. rows : ndarray of shape (n_clusters, X.shape[0]) The indicators for cluster membership of each row. cols : ndarray of shape (n_clusters, X.shape[1]) The indicators for cluster membership of each column. See Also -------- make_checkerboard: Generate an array with block checkerboard structure for biclustering. References ---------- .. [1] Dhillon, I. S. (2001, August). Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 269-274). ACM. Examples -------- >>> from sklearn.datasets import make_biclusters >>> data, rows, cols = make_biclusters( ... shape=(10, 20), n_clusters=2, random_state=42 ... ) >>> data.shape (10, 20) >>> rows.shape (2, 10) >>> cols.shape (2, 20)
python
sklearn/datasets/_samples_generator.py
2,134
[ "shape", "n_clusters", "noise", "minval", "maxval", "shuffle", "random_state" ]
false
4
7.28
scikit-learn/scikit-learn
64,340
numpy
false
leavingAfterReleasingActiveTasks
private void leavingAfterReleasingActiveTasks(Throwable callbackError) { if (callbackError != null) { log.error("Member {} callback to revoke task assignment failed. It will proceed " + "to clear its assignment and send a leave group heartbeat", memberId, callbackError); } else { log.info("Member {} completed callback to revoke task assignment. It will proceed " + "to clear its assignment and send a leave group heartbeat", memberId); } leaving(); }
Leaves the group. <p> This method does the following: <ol> <li>Transitions member state to {@link MemberState#PREPARE_LEAVING}.</li> <li>Requests the invocation of the revocation callback or lost callback.</li> <li>Once the callback completes, it clears the current and target assignment, unsubscribes from all topics and transitions the member state to {@link MemberState#LEAVING}.</li> </ol> States {@link MemberState#PREPARE_LEAVING} and {@link MemberState#LEAVING} cause the heartbeat request manager to send a leave group heartbeat. </p> @return future that will complete when the revocation callback execution completes and the heartbeat to leave the group has been sent out.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/StreamsMembershipManager.java
958
[ "callbackError" ]
void
true
2
6.4
apache/kafka
31,560
javadoc
false
identity
static <T, E extends Throwable> FailableFunction<T, T, E> identity() { return t -> t; }
Returns a function that always returns its input argument. @param <T> the type of the input and output objects to the function @param <E> The type of thrown exception or error. @return a function that always returns its input argument
java
src/main/java/org/apache/commons/lang3/function/FailableFunction.java
59
[]
true
1
6.96
apache/commons-lang
2,896
javadoc
false
append
def append(self, other: Index | Sequence[Index]) -> Index: """ Append a collection of Index options together. Parameters ---------- other : Index or list/tuple of indices Single Index or a collection of indices, which can be either a list or a tuple. Returns ------- Index Returns a new Index object resulting from appending the provided other indices to the original Index. See Also -------- Index.insert : Make new Index inserting new item at location. Examples -------- >>> idx = pd.Index([1, 2, 3]) >>> idx.append(pd.Index([4])) Index([1, 2, 3, 4], dtype='int64') """ to_concat = [self] if isinstance(other, (list, tuple)): to_concat += list(other) else: # error: Argument 1 to "append" of "list" has incompatible type # "Union[Index, Sequence[Index]]"; expected "Index" to_concat.append(other) # type: ignore[arg-type] for obj in to_concat: if not isinstance(obj, Index): raise TypeError("all inputs must be Index") names = {obj.name for obj in to_concat} name = None if len(names) > 1 else self.name return self._concat(to_concat, name)
Append a collection of Index options together. Parameters ---------- other : Index or list/tuple of indices Single Index or a collection of indices, which can be either a list or a tuple. Returns ------- Index Returns a new Index object resulting from appending the provided other indices to the original Index. See Also -------- Index.insert : Make new Index inserting new item at location. Examples -------- >>> idx = pd.Index([1, 2, 3]) >>> idx.append(pd.Index([4])) Index([1, 2, 3, 4], dtype='int64')
python
pandas/core/indexes/base.py
5,373
[ "self", "other" ]
Index
true
6
8.48
pandas-dev/pandas
47,362
numpy
false
getExports
function getExports(name: Identifier): ModuleExportName[] | undefined { if (!isGeneratedIdentifier(name)) { const importDeclaration = resolver.getReferencedImportDeclaration(name); if (importDeclaration) { return currentModuleInfo?.exportedBindings[getOriginalNodeId(importDeclaration)]; } // An exported namespace or enum may merge with an ambient declaration, which won't show up in .js emit, so // we analyze all value exports of a symbol. const bindingsSet = new Set<Identifier>(); const declarations = resolver.getReferencedValueDeclarations(name); if (declarations) { for (const declaration of declarations) { const bindings = currentModuleInfo?.exportedBindings[getOriginalNodeId(declaration)]; if (bindings) { for (const binding of bindings) { bindingsSet.add(binding); } } } if (bindingsSet.size) { return arrayFrom(bindingsSet); } } } else if (isFileLevelReservedGeneratedIdentifier(name)) { const exportSpecifiers = currentModuleInfo?.exportSpecifiers.get(name); if (exportSpecifiers) { const exportedNames: ModuleExportName[] = []; for (const exportSpecifier of exportSpecifiers) { exportedNames.push(exportSpecifier.name); } return exportedNames; } } }
Gets the additional exports of a name. @param name The name.
typescript
src/compiler/transformers/module/module.ts
2,460
[ "name" ]
true
9
6.72
microsoft/TypeScript
107,154
jsdoc
false
register
@Override protected final void register(String description, ServletContext servletContext) { D registration = addRegistration(description, servletContext); if (registration == null) { if (this.ignoreRegistrationFailure) { logger.info(StringUtils.capitalize(description) + " was not registered (possibly already registered?)"); return; } throw new IllegalStateException( "Failed to register '%s' on the servlet context. Possibly already registered?" .formatted(description)); } configure(registration); }
Add a single init-parameter, replacing any existing parameter with the same name. @param name the init-parameter name @param value the init-parameter value
java
core/spring-boot/src/main/java/org/springframework/boot/web/servlet/DynamicRegistrationBean.java
113
[ "description", "servletContext" ]
void
true
3
6.4
spring-projects/spring-boot
79,428
javadoc
false
weakCompareAndSet
public final boolean weakCompareAndSet(double expect, double update) { return value.weakCompareAndSet(doubleToRawLongBits(expect), doubleToRawLongBits(update)); }
Atomically sets the value to the given updated value if the current value is <a href="#bitEquals">bitwise equal</a> to the expected value. <p>May <a href="http://download.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/package-summary.html#Spurious"> fail spuriously</a> and does not provide ordering guarantees, so is only rarely an appropriate alternative to {@code compareAndSet}. @param expect the expected value @param update the new value @return {@code true} if successful
java
android/guava/src/com/google/common/util/concurrent/AtomicDouble.java
141
[ "expect", "update" ]
true
1
6.16
google/guava
51,352
javadoc
false
eq
def eq( self, other, level: Level | None = None, fill_value: float | None = None, axis: Axis = 0, ) -> Series: """ Return Equal to of series and other, element-wise (binary operator `eq`). Equivalent to ``series == other``, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters ---------- other : object When a Series is provided, will align on indexes. For all other types, will behave the same as ``==`` but with possibly different results due to the other arguments. level : int or name Broadcast across a level, matching Index values on the passed MultiIndex level. fill_value : None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. axis : {0 or 'index'} Unused. Parameter needed for compatibility with DataFrame. Returns ------- Series The result of the operation. See Also -------- Series.ge : Return elementwise Greater than or equal to of series and other. Series.le : Return elementwise Less than or equal to of series and other. Series.gt : Return elementwise Greater than of series and other. Series.lt : Return elementwise Less than of series and other. Examples -------- >>> a = pd.Series([1, 1, 1, np.nan], index=["a", "b", "c", "d"]) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=["a", "b", "d", "e"]) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.eq(b, fill_value=0) a True b False c False d False e False dtype: bool """ return self._flex_method( other, operator.eq, level=level, fill_value=fill_value, axis=axis )
Return Equal to of series and other, element-wise (binary operator `eq`). Equivalent to ``series == other``, but with support to substitute a fill_value for missing data in either one of the inputs. Parameters ---------- other : object When a Series is provided, will align on indexes. For all other types, will behave the same as ``==`` but with possibly different results due to the other arguments. level : int or name Broadcast across a level, matching Index values on the passed MultiIndex level. fill_value : None or float value, default None (NaN) Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result of filling (at that location) will be missing. axis : {0 or 'index'} Unused. Parameter needed for compatibility with DataFrame. Returns ------- Series The result of the operation. See Also -------- Series.ge : Return elementwise Greater than or equal to of series and other. Series.le : Return elementwise Less than or equal to of series and other. Series.gt : Return elementwise Greater than of series and other. Series.lt : Return elementwise Less than of series and other. Examples -------- >>> a = pd.Series([1, 1, 1, np.nan], index=["a", "b", "c", "d"]) >>> a a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=["a", "b", "d", "e"]) >>> b a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.eq(b, fill_value=0) a True b False c False d False e False dtype: bool
python
pandas/core/series.py
6,811
[ "self", "other", "level", "fill_value", "axis" ]
Series
true
1
7.04
pandas-dev/pandas
47,362
numpy
false
maybeEnsureValid
private void maybeEnsureValid(RecordBatch batch, boolean checkCrcs) { if (checkCrcs && batch.magic() >= RecordBatch.MAGIC_VALUE_V2) { try { batch.ensureValid(); } catch (CorruptRecordException e) { throw new CorruptRecordException("Record batch for partition " + partition.topicPartition() + " at offset " + batch.baseOffset() + " is invalid, cause: " + e.getMessage()); } } }
Scans for the next record in the available batches, skipping control records @param checkCrcs Whether to check the CRC of fetched records @return true if the current batch has more records, else false
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareCompletedFetch.java
392
[ "batch", "checkCrcs" ]
void
true
4
7.28
apache/kafka
31,560
javadoc
false
out_dtype
def out_dtype(self) -> torch.dtype: """ Get the output dtype, whether passed in or inferred from the nodes Returns: The output dtype """
Get the output dtype, whether passed in or inferred from the nodes Returns: The output dtype
python
torch/_inductor/kernel_inputs.py
178
[ "self" ]
torch.dtype
true
1
6.24
pytorch/pytorch
96,034
unknown
false
apply
def apply(self, X): """Apply trees in the ensemble to X, return leaf indices. .. versionadded:: 0.17 Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted to a sparse ``csr_matrix``. Returns ------- X_leaves : array-like of shape (n_samples, n_estimators, n_classes) For each datapoint x in X and for each tree in the ensemble, return the index of the leaf x ends up in each estimator. In the case of binary classification n_classes is 1. """ self._check_initialized() X = self.estimators_[0, 0]._validate_X_predict(X, check_input=True) # n_classes will be equal to 1 in the binary classification or the # regression case. n_estimators, n_classes = self.estimators_.shape leaves = np.zeros((X.shape[0], n_estimators, n_classes)) for i in range(n_estimators): for j in range(n_classes): estimator = self.estimators_[i, j] leaves[:, i, j] = estimator.apply(X, check_input=False) return leaves
Apply trees in the ensemble to X, return leaf indices. .. versionadded:: 0.17 Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The input samples. Internally, its dtype will be converted to ``dtype=np.float32``. If a sparse matrix is provided, it will be converted to a sparse ``csr_matrix``. Returns ------- X_leaves : array-like of shape (n_samples, n_estimators, n_classes) For each datapoint x in X and for each tree in the ensemble, return the index of the leaf x ends up in each estimator. In the case of binary classification n_classes is 1.
python
sklearn/ensemble/_gb.py
1,093
[ "self", "X" ]
false
3
6.08
scikit-learn/scikit-learn
64,340
numpy
false
_slice
def _slice( self, slicer: slice | npt.NDArray[np.bool_] | npt.NDArray[np.intp] ) -> ExtensionArray: """ Return a slice of my values. Parameters ---------- slicer : slice, ndarray[int], or ndarray[bool] Valid (non-reducing) indexer for self.values. Returns ------- ExtensionArray """ # Notes: ndarray[bool] is only reachable when via get_rows_with_mask, which # is only for Series, i.e. self.ndim == 1. # return same dims as we currently have if self.ndim == 2: # reached via getitem_block via _slice_take_blocks_ax0 # TODO(EA2D): won't be necessary with 2D EAs if not isinstance(slicer, slice): raise AssertionError( "invalid slicing for a 1-ndim ExtensionArray", slicer ) # GH#32959 only full-slicers along fake-dim0 are valid # TODO(EA2D): won't be necessary with 2D EAs # range(1) instead of self._mgr_locs to avoid exception on [::-1] # see test_iloc_getitem_slice_negative_step_ea_block new_locs = range(1)[slicer] if not len(new_locs): raise AssertionError( "invalid slicing for a 1-ndim ExtensionArray", slicer ) slicer = slice(None) return self.values[slicer]
Return a slice of my values. Parameters ---------- slicer : slice, ndarray[int], or ndarray[bool] Valid (non-reducing) indexer for self.values. Returns ------- ExtensionArray
python
pandas/core/internals/blocks.py
2,041
[ "self", "slicer" ]
ExtensionArray
true
4
6.4
pandas-dev/pandas
47,362
numpy
false
hasAtLeastOneGeoipProcessor
private static boolean hasAtLeastOneGeoipProcessor( List<Map<String, Object>> processors, boolean downloadDatabaseOnPipelineCreation, Map<String, PipelineConfiguration> pipelineConfigById, Map<String, Boolean> pipelineHasGeoProcessorById ) { if (processors != null) { // note: this loop is unrolled rather than streaming-style because it's hot enough to show up in a flamegraph for (Map<String, Object> processor : processors) { if (hasAtLeastOneGeoipProcessor( processor, downloadDatabaseOnPipelineCreation, pipelineConfigById, pipelineHasGeoProcessorById )) { return true; } } } return false; }
Check if a list of processor contains at least a geoip processor. @param processors List of processors. @param downloadDatabaseOnPipelineCreation Should the download_database_on_pipeline_creation of the geoip processor be true or false. @param pipelineConfigById A Map of pipeline id to PipelineConfiguration @param pipelineHasGeoProcessorById A Map of pipeline id to Boolean, indicating whether the pipeline references a geoip processor (true), does not reference a geoip processor (false), or we are currently trying to figure that out (null). @return true if a geoip processor is found in the processor list.
java
modules/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpDownloaderTaskExecutor.java
340
[ "processors", "downloadDatabaseOnPipelineCreation", "pipelineConfigById", "pipelineHasGeoProcessorById" ]
true
3
7.76
elastic/elasticsearch
75,680
javadoc
false
convertForProperty
public @Nullable Object convertForProperty(@Nullable Object value, String propertyName) throws TypeMismatchException { CachedIntrospectionResults cachedIntrospectionResults = getCachedIntrospectionResults(); PropertyDescriptor pd = cachedIntrospectionResults.getPropertyDescriptor(propertyName); if (pd == null) { throw new InvalidPropertyException(getRootClass(), getNestedPath() + propertyName, "No property '" + propertyName + "' found"); } TypeDescriptor td = ((GenericTypeAwarePropertyDescriptor) pd).getTypeDescriptor(); return convertForProperty(propertyName, null, value, td); }
Convert the given value for the specified property to the latter's type. <p>This method is only intended for optimizations in a BeanFactory. Use the {@code convertIfNecessary} methods for programmatic conversion. @param value the value to convert @param propertyName the target property (note that nested or indexed properties are not supported here) @return the new value, possibly the result of type conversion @throws TypeMismatchException if type conversion failed
java
spring-beans/src/main/java/org/springframework/beans/BeanWrapperImpl.java
180
[ "value", "propertyName" ]
Object
true
2
7.44
spring-projects/spring-framework
59,386
javadoc
false
compress
def compress(self, condition, axis=None, out=None): """ Return `a` where condition is ``True``. If condition is a `~ma.MaskedArray`, missing values are considered as ``False``. Parameters ---------- condition : var Boolean 1-d array selecting which entries to return. If len(condition) is less than the size of a along the axis, then output is truncated to length of condition array. axis : {None, int}, optional Axis along which the operation must be performed. out : {None, ndarray}, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. Returns ------- result : MaskedArray A :class:`~ma.MaskedArray` object. Notes ----- Please note the difference with :meth:`compressed` ! The output of :meth:`compress` has a mask, the output of :meth:`compressed` does not. Examples -------- >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.compress([1, 0, 1]) masked_array(data=[1, 3], mask=[False, False], fill_value=999999) >>> x.compress([1, 0, 1], axis=1) masked_array( data=[[1, 3], [--, --], [7, 9]], mask=[[False, False], [ True, True], [False, False]], fill_value=999999) """ # Get the basic components (_data, _mask) = (self._data, self._mask) # Force the condition to a regular ndarray and forget the missing # values. condition = np.asarray(condition) _new = _data.compress(condition, axis=axis, out=out).view(type(self)) _new._update_from(self) if _mask is not nomask: _new._mask = _mask.compress(condition, axis=axis) return _new
Return `a` where condition is ``True``. If condition is a `~ma.MaskedArray`, missing values are considered as ``False``. Parameters ---------- condition : var Boolean 1-d array selecting which entries to return. If len(condition) is less than the size of a along the axis, then output is truncated to length of condition array. axis : {None, int}, optional Axis along which the operation must be performed. out : {None, ndarray}, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. Returns ------- result : MaskedArray A :class:`~ma.MaskedArray` object. Notes ----- Please note the difference with :meth:`compressed` ! The output of :meth:`compress` has a mask, the output of :meth:`compressed` does not. Examples -------- >>> import numpy as np >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.compress([1, 0, 1]) masked_array(data=[1, 3], mask=[False, False], fill_value=999999) >>> x.compress([1, 0, 1], axis=1) masked_array( data=[[1, 3], [--, --], [7, 9]], mask=[[False, False], [ True, True], [False, False]], fill_value=999999)
python
numpy/ma/core.py
3,973
[ "self", "condition", "axis", "out" ]
false
2
7.76
numpy/numpy
31,054
numpy
false
add
public synchronized boolean add(final MetricName metricName, final MeasurableStat stat, final MetricConfig config) { if (hasExpired()) { return false; } else if (metrics.containsKey(metricName)) { return true; } else { final MetricConfig statConfig = config == null ? this.config : config; final KafkaMetric metric = new KafkaMetric( metricLock(), Objects.requireNonNull(metricName), Objects.requireNonNull(stat), statConfig, time ); KafkaMetric existingMetric = registry.registerMetric(metric); if (existingMetric != null) { throw new IllegalArgumentException("A metric named '" + metricName + "' already exists, can't register another one."); } metrics.put(metric.metricName(), metric); stats.add(new StatAndConfig(Objects.requireNonNull(stat), metric::config)); return true; } }
Register a metric with this sensor @param metricName The name of the metric @param stat The statistic to keep @param config A special configuration for this metric. If null use the sensor default configuration. @return true if metric is added to sensor, false if sensor is expired
java
clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java
328
[ "metricName", "stat", "config" ]
true
5
7.76
apache/kafka
31,560
javadoc
false
memdump
def memdump(samples=10, file=None): # pragma: no cover """Dump memory statistics. Will print a sample of all RSS memory samples added by calling :func:`sample_mem`, and in addition print used RSS memory after :func:`gc.collect`. """ say = partial(print, file=file) if ps() is None: say('- rss: (psutil not installed).') return prev, after_collect = _memdump(samples) if prev: say('- rss (sample):') for mem in prev: say(f'- > {mem},') say(f'- rss (end): {after_collect}.')
Dump memory statistics. Will print a sample of all RSS memory samples added by calling :func:`sample_mem`, and in addition print used RSS memory after :func:`gc.collect`.
python
celery/utils/debug.py
83
[ "samples", "file" ]
false
4
6.24
celery/celery
27,741
unknown
false
fillSet
private static <T> Set<T> fillSet(Set<T> baseSet, Set<T> fillSet, Predicate<T> predicate) { Set<T> result = new HashSet<>(baseSet); for (T element : fillSet) { if (predicate.test(element)) { result.add(element); } } return result; }
Copies {@code baseSet} and adds all non-existent elements in {@code fillSet} such that {@code predicate} is true. In other words, all elements of {@code baseSet} will be contained in the result, with additional non-overlapping elements in {@code fillSet} where the predicate is true. @param baseSet the base elements for the resulting set @param fillSet elements to be filled into the resulting set @param predicate tested against the fill set to determine whether elements should be added to the base set
java
clients/src/main/java/org/apache/kafka/clients/MetadataSnapshot.java
215
[ "baseSet", "fillSet", "predicate" ]
true
2
6.56
apache/kafka
31,560
javadoc
false
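To make the copy-and-filter union concrete, here is a minimal Python sketch of the same operation; `fill_set` and its argument names are ours for illustration, not part of the Kafka codebase:

```python
def fill_set(base_set, fill, predicate):
    """Copy base_set and add every element of fill that satisfies predicate."""
    result = set(base_set)
    result.update(e for e in fill if predicate(e))
    return result
```

Like the Java original, the base set is copied first, so neither input collection is mutated.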
validateIndex
protected void validateIndex(final int index) { if (index < 0 || index > size) { throw new StringIndexOutOfBoundsException(index); } }
Validates parameters defining a single index in the builder. @param index the index, must be valid @throws IndexOutOfBoundsException if the index is invalid
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
3,032
[ "index" ]
void
true
3
6.4
apache/commons-lang
2,896
javadoc
false
anyNull
public static boolean anyNull(final Object... values) { return !allNotNull(values); }
Tests if any value in the given array is {@code null}. <p> If any of the values are {@code null} or the array is {@code null}, then {@code true} is returned, otherwise {@code false} is returned. </p> <pre> ObjectUtils.anyNull(*) = false ObjectUtils.anyNull(*, *) = false ObjectUtils.anyNull(null) = true ObjectUtils.anyNull(null, null) = true ObjectUtils.anyNull(null, *) = true ObjectUtils.anyNull(*, null) = true ObjectUtils.anyNull(*, *, null, *) = true </pre> @param values the values to test, may be {@code null} or empty. @return {@code true} if there is at least one {@code null} value in the array, {@code false} if all the values are non-null. If the array is {@code null} or empty, {@code true} is also returned. @since 3.11
java
src/main/java/org/apache/commons/lang3/ObjectUtils.java
219
[ "values" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
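The documented truth table (null or empty array also counts as "any null") is easy to mirror outside Java; the following sketch is our illustration, with `any_null` as an ad-hoc name rather than an Apache Commons API:

```python
def any_null(values):
    """True if values is None, empty, or contains any None element."""
    if values is None or len(values) == 0:
        return True
    return any(v is None for v in values)
```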
builder
static ExponentialHistogramBuilder builder(int scale, ExponentialHistogramCircuitBreaker breaker) { return new ExponentialHistogramBuilder(scale, breaker); }
Create a builder for an exponential histogram with the given scale. @param scale the scale of the histogram to build @param breaker the circuit breaker to use @return a new builder
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogram.java
229
[ "scale", "breaker" ]
ExponentialHistogramBuilder
true
1
6.96
elastic/elasticsearch
75,680
javadoc
false
_bind_queues
def _bind_queues(self, app: Celery, connection: Connection) -> None: """Bind all application queues to delayed delivery exchanges. Args: app: The Celery application instance connection: The broker connection to use Raises: Exception: If queue binding fails """ queues: ValuesView[Queue] = app.amqp.queues.values() if not queues: logger.warning("No queues found to bind for delayed delivery") return exceptions: list[Exception] = [] for queue in queues: try: logger.debug("Binding queue %r to delayed delivery exchange", queue.name) bind_queue_to_native_delayed_delivery_exchange(connection, queue) except Exception as e: logger.error( "Failed to bind queue %r: %s", queue.name, str(e) ) # We must re-raise on retried exceptions to ensure they are # caught with the outer retry_over_time mechanism. # # This could be removed if one of: # * The minimum python version for Celery and Kombu is # increased to 3.11. Kombu updated to use the `except*` # clause to catch specific exceptions from an ExceptionGroup. # * Kombu's retry_over_time utility is updated to use the # catch utility from agronholm's exceptiongroup backport. if isinstance(e, RETRIED_EXCEPTIONS): raise exceptions.append(e) if exceptions: raise ExceptionGroup( ("One or more failures occurred while binding queues to " "delayed delivery exchanges"), exceptions, )
Bind all application queues to delayed delivery exchanges. Args: app: The Celery application instance connection: The broker connection to use Raises: Exception: If queue binding fails
python
celery/worker/consumer/delayed_delivery.py
149
[ "self", "app", "connection" ]
None
true
5
6.24
celery/celery
27,741
google
false
_compute_contiguous_strides
def _compute_contiguous_strides(size: tuple[int, ...]) -> list[int]: """ Helper function to compute standard contiguous strides for a given size. Args: size: Tensor shape/size as a tuple of integers Returns: list[int]: List of contiguous strides """ strides: list[int] = [] current_stride: int = 1 # Calculate strides from right to left for i in range(len(size) - 1, -1, -1): strides.insert(0, current_stride) # For dimensions with size 0, keep stride as is if size[i] != 0: current_stride *= size[i] return strides
Helper function to compute standard contiguous strides for a given size. Args: size: Tensor shape/size as a tuple of integers Returns: list[int]: List of contiguous strides
python
tools/experimental/torchfuzz/tensor_fuzzer.py
206
[ "size" ]
list[int]
true
3
7.92
pytorch/pytorch
96,034
google
false
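As a sanity check on the algorithm: for a contiguous tensor, the stride of each dimension is the product of the sizes to its right, with zero-sized dimensions leaving the running stride unchanged. This standalone sketch (detached from the torchfuzz module) reproduces the loop:

```python
def compute_contiguous_strides(size):
    """Standard contiguous strides, computed right to left."""
    strides = []
    current = 1
    for dim in reversed(size):
        strides.insert(0, current)
        if dim != 0:  # zero-sized dims keep the stride as is
            current *= dim
    return strides

print(compute_contiguous_strides((2, 3, 4)))  # → [12, 4, 1]
```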
createBeanFactoryBasedTargetSource
protected abstract @Nullable AbstractBeanFactoryBasedTargetSource createBeanFactoryBasedTargetSource( Class<?> beanClass, String beanName);
Subclasses must implement this method to return a new AbstractPrototypeBasedTargetSource if they wish to create a custom TargetSource for this bean, or {@code null} if they are not interested in it, in which case no special target source will be created. Subclasses should not call {@code setTargetBeanName} or {@code setBeanFactory} on the AbstractPrototypeBasedTargetSource: This class' implementation of {@code getTargetSource()} will do that. @param beanClass the class of the bean to create a TargetSource for @param beanName the name of the bean @return the AbstractPrototypeBasedTargetSource, or {@code null} if we don't match this
java
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/target/AbstractBeanFactoryBasedTargetSourceCreator.java
195
[ "beanClass", "beanName" ]
AbstractBeanFactoryBasedTargetSource
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
addFirst
public static boolean[] addFirst(final boolean[] array, final boolean element) { return array == null ? add(array, element) : insert(0, array, element); }
Copies the given array and adds the given element at the beginning of the new array. <p> The new array contains the same elements of the input array plus the given element in the first position. The component type of the new array is the same as that of the input array. </p> <p> If the input array is {@code null}, a new one element array is returned whose component type is the same as the element. </p> <pre> ArrayUtils.addFirst(null, true) = [true] ArrayUtils.addFirst([true], false) = [false, true] ArrayUtils.addFirst([true, false], true) = [true, true, false] </pre> @param array the array to "add" the element to, may be {@code null}. @param element the object to add. @return A new array containing the existing elements plus the new element The returned array type will be that of the input array (unless null), in which case it will have the same type as the element. @since 3.10
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
1,166
[ "array", "element" ]
true
2
8
apache/commons-lang
2,896
javadoc
false
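The prepend-with-null-tolerance behavior from the javadoc examples can be sketched in a few lines of Python; `add_first` is an illustrative stand-in, not part of ArrayUtils:

```python
def add_first(array, element):
    """Copy array and prepend element; None behaves like an empty array."""
    if array is None:
        return [element]
    return [element] + list(array)

print(add_first([True, False], True))  # → [True, True, False]
```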
getFormatter
public DateTimeFormatter getFormatter(DateTimeFormatter formatter) { if (this.chronology != null) { formatter = formatter.withChronology(this.chronology); } if (this.timeZone != null) { formatter = formatter.withZone(this.timeZone); } else { LocaleContext localeContext = LocaleContextHolder.getLocaleContext(); if (localeContext instanceof TimeZoneAwareLocaleContext timeZoneAware) { TimeZone timeZone = timeZoneAware.getTimeZone(); if (timeZone != null) { formatter = formatter.withZone(timeZone.toZoneId()); } } } return formatter; }
Get the DateTimeFormatter with this context's settings applied to the base {@code formatter}. @param formatter the base formatter that establishes default formatting rules, generally context-independent @return the contextual DateTimeFormatter
java
spring-context/src/main/java/org/springframework/format/datetime/standard/DateTimeContext.java
87
[ "formatter" ]
DateTimeFormatter
true
5
7.28
spring-projects/spring-framework
59,386
javadoc
false
is_potential_multi_index
def is_potential_multi_index( columns: Sequence[Hashable] | MultiIndex, index_col: bool | Sequence[int] | None = None, ) -> bool: """ Check whether or not the `columns` parameter could be converted into a MultiIndex. Parameters ---------- columns : array-like Object which may or may not be convertible into a MultiIndex index_col : None, bool or list, optional Column or columns to use as the (possibly hierarchical) index Returns ------- bool : Whether or not columns could become a MultiIndex """ if index_col is None or isinstance(index_col, bool): index_columns = set() else: index_columns = set(index_col) return bool( len(columns) and not isinstance(columns, ABCMultiIndex) and all(isinstance(c, tuple) for c in columns if c not in index_columns) )
Check whether or not the `columns` parameter could be converted into a MultiIndex. Parameters ---------- columns : array-like Object which may or may not be convertible into a MultiIndex index_col : None, bool or list, optional Column or columns to use as the (possibly hierarchical) index Returns ------- bool : Whether or not columns could become a MultiIndex
python
pandas/io/common.py
1,217
[ "columns", "index_col" ]
bool
true
6
6.4
pandas-dev/pandas
47,362
numpy
false
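Stripped of the pandas-specific `ABCMultiIndex` check, the decision reduces to "non-empty, and every label not used as an index column is a tuple". This dependency-free sketch is our simplification, not the pandas function itself:

```python
def is_potential_multi_index(columns, index_col=None):
    """True if every non-index label in columns is a tuple (and columns is non-empty)."""
    if index_col is None or isinstance(index_col, bool):
        index_columns = set()
    else:
        index_columns = set(index_col)
    return bool(
        len(columns)
        and all(isinstance(c, tuple) for c in columns if c not in index_columns)
    )
```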
add
@Override public void add(double x, long w) { checkValue(x); if (tempUsed >= tempWeight.size() - lastUsedCell - 1) { mergeNewValues(); } int where = tempUsed++; tempWeight.set(where, w); tempMean.set(where, x); unmergedWeight += w; if (x < min) { min = x; } if (x > max) { max = x; } }
Fully specified constructor. Normally only used for deserializing a buffer t-digest. @param compression Compression factor @param bufferSize Number of temporary centroids @param size Size of main buffer
java
libs/tdigest/src/main/java/org/elasticsearch/tdigest/MergingDigest.java
268
[ "x", "w" ]
void
true
4
6.08
elastic/elasticsearch
75,680
javadoc
false
bindSourceFileIfExternalModule
function bindSourceFileIfExternalModule() { setExportContextFlag(file); if (isExternalModule(file)) { bindSourceFileAsExternalModule(); } else if (isJsonSourceFile(file)) { bindSourceFileAsExternalModule(); // Create symbol equivalent for the module.exports = {} const originalSymbol = file.symbol; declareSymbol(file.symbol.exports!, file.symbol, file, SymbolFlags.Property, SymbolFlags.All); file.symbol = originalSymbol; } }
Declares a Symbol for the node and adds it to symbols. Reports errors for conflicting identifier names. @param symbolTable - The symbol table which node will be added to. @param parent - node's parent declaration. @param node - The declaration to be added to the symbol table @param includes - The SymbolFlags that node has in addition to its declaration type (eg: export, ambient, etc.) @param excludes - The flags which node cannot be declared alongside in a symbol table. Used to report forbidden declarations.
typescript
src/compiler/binder.ts
3,107
[]
false
4
6.08
microsoft/TypeScript
107,154
jsdoc
false
_remove_nan_1d
def _remove_nan_1d(arr1d, second_arr1d=None, overwrite_input=False): """ Equivalent to arr1d[~arr1d.isnan()], but in a different order Presumably faster as it incurs fewer copies Parameters ---------- arr1d : ndarray Array to remove nans from second_arr1d : ndarray or None A second array which will have the same positions removed as arr1d. overwrite_input : bool True if `arr1d` can be modified in place Returns ------- res : ndarray Array with nan elements removed second_res : ndarray or None Second array with nan element positions of first array removed. overwrite_input : bool True if `res` can be modified in place, given the constraint on the input """ if arr1d.dtype == object: # object arrays do not support `isnan` (gh-9009), so make a guess c = np.not_equal(arr1d, arr1d, dtype=bool) else: c = np.isnan(arr1d) s = np.nonzero(c)[0] if s.size == arr1d.size: warnings.warn("All-NaN slice encountered", RuntimeWarning, stacklevel=6) if second_arr1d is None: return arr1d[:0], None, True else: return arr1d[:0], second_arr1d[:0], True elif s.size == 0: return arr1d, second_arr1d, overwrite_input else: if not overwrite_input: arr1d = arr1d.copy() # select non-nans at end of array enonan = arr1d[-s.size:][~c[-s.size:]] # fill nans in beginning of array with non-nans of end arr1d[s[:enonan.size]] = enonan if second_arr1d is None: return arr1d[:-s.size], None, True else: if not overwrite_input: second_arr1d = second_arr1d.copy() enonan = second_arr1d[-s.size:][~c[-s.size:]] second_arr1d[s[:enonan.size]] = enonan return arr1d[:-s.size], second_arr1d[:-s.size], True
Equivalent to arr1d[~arr1d.isnan()], but in a different order Presumably faster as it incurs fewer copies Parameters ---------- arr1d : ndarray Array to remove nans from second_arr1d : ndarray or None A second array which will have the same positions removed as arr1d. overwrite_input : bool True if `arr1d` can be modified in place Returns ------- res : ndarray Array with nan elements removed second_res : ndarray or None Second array with nan element positions of first array removed. overwrite_input : bool True if `res` can be modified in place, given the constraint on the input
python
numpy/lib/_nanfunctions_impl.py
144
[ "arr1d", "second_arr1d", "overwrite_input" ]
false
12
6
numpy/numpy
31,054
numpy
false
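The core idea — drop NaN positions from one array and mirror the removals in an optional second — can be shown without numpy's in-place end-swapping trick. This pure-Python sketch is our simplification and keeps only the observable behavior, not the copy-avoidance optimizations:

```python
import math

def remove_nan_1d(arr, second=None):
    """Drop NaN positions from arr, and the same positions from second if given."""
    keep = [i for i, v in enumerate(arr) if not math.isnan(v)]
    out = [arr[i] for i in keep]
    second_out = None if second is None else [second[i] for i in keep]
    return out, second_out
```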
incrementThrottleTime
void incrementThrottleTime(String nodeId, long throttleTimeMs) { requests.getOrDefault(nodeId, new ArrayDeque<>()). forEach(request -> request.incrementThrottleTime(throttleTimeMs)); }
Returns a list of nodes with pending in-flight request, that need to be timed out @param now current time in milliseconds @return list of nodes
java
clients/src/main/java/org/apache/kafka/clients/InFlightRequests.java
182
[ "nodeId", "throttleTimeMs" ]
void
true
1
6.48
apache/kafka
31,560
javadoc
false
update_orm_from_pydantic
def update_orm_from_pydantic( pool_name: str, patch_body: PoolBody | PoolPatchBody, update_mask: list[str] | None, session: SessionDep, ) -> Pool: """ Update an existing pool. :param pool_name: The name of the existing Pool to be updated. :param patch_body: Pydantic model containing the fields to update. :param update_mask: Specific fields to update. If None, all provided fields will be considered. :param session: The database session dependency. :return: The updated Pool instance. :raises HTTPException: If attempting to update disallowed fields on ``default_pool``. """ # Special restriction: default pool only allows limited fields to be patched pool = session.scalar(select(Pool).where(Pool.pool == pool_name).limit(1)) if not pool: raise HTTPException( status.HTTP_404_NOT_FOUND, detail=f"The Pool with name: `{pool_name}` was not found" ) if pool_name == Pool.DEFAULT_POOL_NAME: if update_mask and all(mask.strip() in {"slots", "include_deferred"} for mask in update_mask): # Validate only slots/include_deferred try: patch_body_subset = patch_body.model_dump( include={"slots", "include_deferred"}, exclude_unset=True, by_alias=True ) # Re-run validation with BasePool but only on allowed fields PoolPatchBody.model_validate(patch_body_subset) except ValidationError as e: raise RequestValidationError(errors=e.errors()) else: raise HTTPException( status.HTTP_400_BAD_REQUEST, "Only slots and included_deferred can be modified on Default Pool", ) else: fields_to_update = patch_body.model_fields_set try: # Dump with both input + output aliases handled body_dict = patch_body.model_dump( include=fields_to_update, by_alias=True, # ensures we get the API-facing alias keys ) # Normalize keys for BasePool (expects "pool") if "name" in body_dict and "pool" not in body_dict: body_dict["pool"] = body_dict.pop("name") BasePool.model_validate(body_dict) except ValidationError as e: raise RequestValidationError(errors=e.errors()) # Delegate patch application to the common utility return cast( "Pool", BulkService.apply_patch_with_update_mask( model=pool, patch_body=patch_body, update_mask=update_mask, non_update_fields=None, ), )
Update an existing pool. :param pool_name: The name of the existing Pool to be updated. :param patch_body: Pydantic model containing the fields to update. :param update_mask: Specific fields to update. If None, all provided fields will be considered. :param session: The database session dependency. :return: The updated Pool instance. :raises HTTPException: If attempting to update disallowed fields on ``default_pool``.
python
airflow-core/src/airflow/api_fastapi/core_api/services/public/pools.py
45
[ "pool_name", "patch_body", "update_mask", "session" ]
Pool
true
9
8
apache/airflow
43,597
sphinx
false
isEtagUsable
function isEtagUsable (etag) { if (etag.length <= 2) { // Shortest an etag can be is two chars (just ""). This is where we deviate // from the spec requiring a min of 3 chars however return false } if (etag[0] === '"' && etag[etag.length - 1] === '"') { // ETag: ""asd123"" or ETag: "W/"asd123"", kinda undefined behavior in the // spec. Some servers will accept these while others don't. // ETag: "asd123" return !(etag[1] === '"' || etag.startsWith('"W/')) } if (etag.startsWith('W/"') && etag[etag.length - 1] === '"') { // ETag: W/"", also where we deviate from the spec & require a min of 3 // chars // ETag: for W/"", W/"asd123" return etag.length !== 4 } // Anything else return false }
Note: this deviates from the spec a little. Empty etags ("", W/"") are valid, however, including them in cached responses serves little to no purpose. @see https://www.rfc-editor.org/rfc/rfc9110.html#name-etag @param {string} etag @returns {boolean}
javascript
deps/undici/src/lib/util/cache.js
307
[ "etag" ]
false
7
6.08
nodejs/node
114,839
jsdoc
false
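The branch logic above ports directly to Python. This sketch (`is_etag_usable` is our name for the port) mirrors the three cases, including the deliberate deviations from RFC 9110 noted in the original comments:

```python
def is_etag_usable(etag):
    """Port of the JS check: accept usable strong or weak ETags, reject empty/garbled ones."""
    if len(etag) <= 2:
        # Spec allows "" but it is useless in a cache, so reject it
        return False
    if etag[0] == '"' and etag[-1] == '"':
        # Reject doubled quotes like ""asd123"" and "W/"asd123""
        return not (etag[1] == '"' or etag.startswith('"W/'))
    if etag.startswith('W/"') and etag[-1] == '"':
        # W/"" (exactly 4 chars) is rejected; W/"asd123" is accepted
        return len(etag) != 4
    return False
```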
containsAll
@Override boolean containsAll(Collection<?> elements);
Returns {@code true} if this multiset contains at least one occurrence of each element in the specified collection. <p>This method refines {@link Collection#containsAll} to further specify that it <b>may not</b> throw an exception in response to any of {@code elements} being null or of the wrong type. <p><b>Note:</b> this method does not take into account the occurrence count of an element in the two collections; it may still return {@code true} even if {@code elements} contains several occurrences of an element and this multiset contains only one. This is no different than any other collection type like {@link List}, but it may be unexpected to the user of a multiset. @param elements the collection of elements to be checked for containment in this multiset @return {@code true} if this multiset contains at least one occurrence of each element contained in {@code elements} @throws NullPointerException if {@code elements} is null
java
android/guava/src/com/google/common/collect/Multiset.java
408
[ "elements" ]
true
1
6.32
google/guava
51,352
javadoc
false
canonicalPropertyNames
public static String @Nullable [] canonicalPropertyNames(String @Nullable [] propertyNames) { if (propertyNames == null) { return null; } String[] result = new String[propertyNames.length]; for (int i = 0; i < propertyNames.length; i++) { result[i] = canonicalPropertyName(propertyNames[i]); } return result; }
Determine the canonical names for the given property paths. @param propertyNames the bean property paths (as array) @return the canonical representation of the property paths (as array of the same size) @see #canonicalPropertyName(String)
java
spring-beans/src/main/java/org/springframework/beans/PropertyAccessorUtils.java
176
[ "propertyNames" ]
true
3
7.28
spring-projects/spring-framework
59,386
javadoc
false
merge
private void merge( TDigestDoubleArray incomingMean, TDigestDoubleArray incomingWeight, int incomingCount, TDigestIntArray incomingOrder, double unmergedWeight, boolean runBackwards, double compression ) { // when our incoming buffer fills up, we combine our existing centroids with the incoming data, // and then reduce the centroids by merging if possible incomingMean.set(incomingCount, mean, 0, lastUsedCell); incomingWeight.set(incomingCount, weight, 0, lastUsedCell); incomingCount += lastUsedCell; Sort.stableSort(incomingOrder, incomingMean, incomingCount); totalWeight += unmergedWeight; // option to run backwards is to help investigate bias in errors if (runBackwards) { Sort.reverse(incomingOrder, 0, incomingCount); } // start by copying the least incoming value to the normal buffer lastUsedCell = 0; mean.set(lastUsedCell, incomingMean.get(incomingOrder.get(0))); weight.set(lastUsedCell, incomingWeight.get(incomingOrder.get(0))); double wSoFar = 0; // weight will contain all zeros after this loop double normalizer = scale.normalizer(compression, totalWeight); double k1 = scale.k(0, normalizer); double wLimit = totalWeight * scale.q(k1 + 1, normalizer); for (int i = 1; i < incomingCount; i++) { int ix = incomingOrder.get(i); double proposedWeight = weight.get(lastUsedCell) + incomingWeight.get(ix); double projectedW = wSoFar + proposedWeight; boolean addThis; if (useWeightLimit) { double q0 = wSoFar / totalWeight; double q2 = (wSoFar + proposedWeight) / totalWeight; addThis = proposedWeight <= totalWeight * Math.min(scale.max(q0, normalizer), scale.max(q2, normalizer)); } else { addThis = projectedW <= wLimit; } if (i == 1 || i == incomingCount - 1) { // force first and last centroid to never merge addThis = false; } if (lastUsedCell == mean.size() - 1) { // use the last centroid, there's no more addThis = true; } if (addThis) { // next point will fit // so merge into existing centroid weight.set(lastUsedCell, weight.get(lastUsedCell) + incomingWeight.get(ix)); 
mean.set( lastUsedCell, mean.get(lastUsedCell) + (incomingMean.get(ix) - mean.get(lastUsedCell)) * incomingWeight.get(ix) / weight.get( lastUsedCell ) ); incomingWeight.set(ix, 0); } else { // didn't fit ... move to next output, copy out first centroid wSoFar += weight.get(lastUsedCell); if (useWeightLimit == false) { k1 = scale.k(wSoFar / totalWeight, normalizer); wLimit = totalWeight * scale.q(k1 + 1, normalizer); } lastUsedCell++; mean.set(lastUsedCell, incomingMean.get(ix)); weight.set(lastUsedCell, incomingWeight.get(ix)); incomingWeight.set(ix, 0); } } // points to next empty cell lastUsedCell++; // sanity check double sum = 0; for (int i = 0; i < lastUsedCell; i++) { sum += weight.get(i); } assert sum == totalWeight; if (runBackwards) { Sort.reverse(mean, 0, lastUsedCell); Sort.reverse(weight, 0, lastUsedCell); } if (totalWeight > 0) { min = Math.min(min, mean.get(0)); max = Math.max(max, mean.get(lastUsedCell - 1)); } }
Fully specified constructor. Normally only used for deserializing a buffer t-digest. @param compression Compression factor @param bufferSize Number of temporary centroids @param size Size of main buffer
java
libs/tdigest/src/main/java/org/elasticsearch/tdigest/MergingDigest.java
304
[ "incomingMean", "incomingWeight", "incomingCount", "incomingOrder", "unmergedWeight", "runBackwards", "compression" ]
void
true
12
6.16
elastic/elasticsearch
75,680
javadoc
false
asInputStream
default InputStream asInputStream() throws IOException { return new DataBlockInputStream(this); }
Return this {@link DataBlock} as an {@link InputStream}. @return an {@link InputStream} to read the data block content @throws IOException on IO error
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/DataBlock.java
78
[]
InputStream
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
maybeCompleteReceive
public NetworkReceive maybeCompleteReceive() { if (receive != null && receive.complete()) { receive.payload().rewind(); NetworkReceive result = receive; receive = null; return result; } return null; }
Returns the port to which this channel's socket is connected or 0 if the socket has never been connected. If the socket was connected prior to being closed, then this method will continue to return the connected port number after the socket is closed.
java
clients/src/main/java/org/apache/kafka/common/network/KafkaChannel.java
425
[]
NetworkReceive
true
3
6.88
apache/kafka
31,560
javadoc
false
bitSet
public BitSet bitSet() { return bitSet; }
Gets the wrapped bit set. @return the wrapped bit set.
java
src/main/java/org/apache/commons/lang3/util/FluentBitSet.java
120
[]
BitSet
true
1
6.64
apache/commons-lang
2,896
javadoc
false
drainAll
private void drainAll() { lock.lock(); try { completedFetches.forEach(ShareCompletedFetch::drain); completedFetches.clear(); if (nextInLineFetch != null) { nextInLineFetch.drain(); nextInLineFetch = null; } } finally { lock.unlock(); } }
Return the set of {@link TopicIdPartition partitions} for which we have data in the buffer. @return {@link TopicIdPartition Partition} set
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareFetchBuffer.java
192
[]
void
true
2
7.6
apache/kafka
31,560
javadoc
false
remove_unused_categories
def remove_unused_categories(self) -> Self: """ Remove categories which are not used. This method is useful when working with datasets that undergo dynamic changes where categories may no longer be relevant, allowing to maintain a clean, efficient data structure. Returns ------- Categorical Categorical with unused categories dropped. See Also -------- rename_categories : Rename categories. reorder_categories : Reorder categories. add_categories : Add new categories. remove_categories : Remove the specified categories. set_categories : Set the categories to the specified ones. Examples -------- >>> c = pd.Categorical(["a", "c", "b", "c", "d"]) >>> c ['a', 'c', 'b', 'c', 'd'] Categories (4, str): ['a', 'b', 'c', 'd'] >>> c[2] = "a" >>> c[4] = "c" >>> c ['a', 'c', 'a', 'c', 'c'] Categories (4, str): ['a', 'b', 'c', 'd'] >>> c.remove_unused_categories() ['a', 'c', 'a', 'c', 'c'] Categories (2, str): ['a', 'c'] """ idx, inv = np.unique(self._codes, return_inverse=True) if idx.size != 0 and idx[0] == -1: # na sentinel idx, inv = idx[1:], inv - 1 new_categories = self.dtype.categories.take(idx) new_dtype = CategoricalDtype._from_fastpath( new_categories, ordered=self.ordered ) new_codes = coerce_indexer_dtype(inv, new_dtype.categories) cat = self.copy() NDArrayBacked.__init__(cat, new_codes, new_dtype) return cat
Remove categories which are not used. This method is useful when working with datasets that undergo dynamic changes where categories may no longer be relevant, allowing to maintain a clean, efficient data structure. Returns ------- Categorical Categorical with unused categories dropped. See Also -------- rename_categories : Rename categories. reorder_categories : Reorder categories. add_categories : Add new categories. remove_categories : Remove the specified categories. set_categories : Set the categories to the specified ones. Examples -------- >>> c = pd.Categorical(["a", "c", "b", "c", "d"]) >>> c ['a', 'c', 'b', 'c', 'd'] Categories (4, str): ['a', 'b', 'c', 'd'] >>> c[2] = "a" >>> c[4] = "c" >>> c ['a', 'c', 'a', 'c', 'c'] Categories (4, str): ['a', 'b', 'c', 'd'] >>> c.remove_unused_categories() ['a', 'c', 'a', 'c', 'c'] Categories (2, str): ['a', 'c']
python
pandas/core/arrays/categorical.py
1,472
[ "self" ]
Self
true
3
8.16
pandas-dev/pandas
47,362
unknown
false
are_dependencies_met
def are_dependencies_met( self, dep_context: DepContext | None = None, session: Session = NEW_SESSION, verbose: bool = False ) -> bool: """ Are all conditions met for this task instance to be run given the context for the dependencies. (e.g. a task instance being force run from the UI will ignore some dependencies). :param dep_context: The execution context that determines the dependencies that should be evaluated. :param session: database session :param verbose: whether log details on failed dependencies on info or debug log level """ dep_context = dep_context or DepContext() failed = False verbose_aware_logger = self.log.info if verbose else self.log.debug for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session): failed = True verbose_aware_logger( "Dependencies not met for %s, dependency '%s' FAILED: %s", self, dep_status.dep_name, dep_status.reason, ) if failed: return False verbose_aware_logger("Dependencies all met for dep_context=%s ti=%s", dep_context.description, self) return True
Are all conditions met for this task instance to be run given the context for the dependencies. (e.g. a task instance being force run from the UI will ignore some dependencies). :param dep_context: The execution context that determines the dependencies that should be evaluated. :param session: database session :param verbose: whether log details on failed dependencies on info or debug log level
python
airflow-core/src/airflow/models/taskinstance.py
883
[ "self", "dep_context", "session", "verbose" ]
bool
true
5
7.04
apache/airflow
43,597
sphinx
false
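The dependency-gate pattern in `are_dependencies_met` — iterate over failed dependency statuses, log each one at the chosen verbosity, and succeed only when none failed — can be sketched generically. The `(dep_name, reason)` pairs below are a hypothetical stand-in for Airflow's dep-status objects:

```python
import logging

log = logging.getLogger(__name__)

def are_dependencies_met(failed_statuses, verbose=False):
    """Return True only when no dependency status reports a failure.

    failed_statuses is an iterable of (dep_name, reason) pairs -- a
    simplified stand-in for get_failed_dep_statuses().
    """
    logger = log.info if verbose else log.debug
    failed = False
    for dep_name, reason in failed_statuses:
        failed = True
        logger("Dependency '%s' FAILED: %s", dep_name, reason)
    if failed:
        return False
    logger("Dependencies all met")
    return True
```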
masked_equal
def masked_equal(x, value, copy=True): """ Mask an array where equal to a given value. Return a MaskedArray, masked where the data in array `x` are equal to `value`. The fill_value of the returned MaskedArray is set to `value`. For floating point arrays, consider using ``masked_values(x, value)``. See Also -------- masked_where : Mask where a condition is met. masked_values : Mask using floating point equality. Examples -------- >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_equal(a, 2) masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=2) """ output = masked_where(equal(x, value), x, copy=copy) output.fill_value = value return output
Mask an array where equal to a given value. Return a MaskedArray, masked where the data in array `x` are equal to `value`. The fill_value of the returned MaskedArray is set to `value`. For floating point arrays, consider using ``masked_values(x, value)``. See Also -------- masked_where : Mask where a condition is met. masked_values : Mask using floating point equality. Examples -------- >>> import numpy as np >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_equal(a, 2) masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=2)
python
numpy/ma/core.py
2,132
[ "x", "value", "copy" ]
false
1
6.48
numpy/numpy
31,054
unknown
false
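Stripped of NumPy's machinery, the essence of `masked_equal` is an element-wise equality mask plus remembering `value` as the fill value. A dependency-free sketch (plain lists instead of a `MaskedArray`):

```python
def masked_equal(data, value):
    """Return (data, mask, fill_value); mask[i] is True where data[i] == value."""
    mask = [x == value for x in data]       # element-wise equality test
    return data, mask, value                # fill_value is set to `value`

data, mask, fill = masked_equal([0, 1, 2, 3], 2)
```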
get
static @Nullable Object get( @Nullable Object hashTableObject, @Nullable Object[] alternatingKeysAndValues, int size, int keyOffset, @Nullable Object key) { if (key == null) { return null; } else if (size == 1) { // requireNonNull is safe because the first 2 elements have been filled in. return requireNonNull(alternatingKeysAndValues[keyOffset]).equals(key) ? requireNonNull(alternatingKeysAndValues[keyOffset ^ 1]) : null; } else if (hashTableObject == null) { return null; } if (hashTableObject instanceof byte[]) { byte[] hashTable = (byte[]) hashTableObject; int mask = hashTable.length - 1; for (int h = Hashing.smear(key.hashCode()); ; h++) { h &= mask; int keyIndex = hashTable[h] & BYTE_MASK; // unsigned read if (keyIndex == BYTE_MASK) { // -1 signed becomes 255 unsigned return null; } else if (key.equals(alternatingKeysAndValues[keyIndex])) { return alternatingKeysAndValues[keyIndex ^ 1]; } } } else if (hashTableObject instanceof short[]) { short[] hashTable = (short[]) hashTableObject; int mask = hashTable.length - 1; for (int h = Hashing.smear(key.hashCode()); ; h++) { h &= mask; int keyIndex = hashTable[h] & SHORT_MASK; // unsigned read if (keyIndex == SHORT_MASK) { // -1 signed becomes 65_535 unsigned return null; } else if (key.equals(alternatingKeysAndValues[keyIndex])) { return alternatingKeysAndValues[keyIndex ^ 1]; } } } else { int[] hashTable = (int[]) hashTableObject; int mask = hashTable.length - 1; for (int h = Hashing.smear(key.hashCode()); ; h++) { h &= mask; int keyIndex = hashTable[h]; if (keyIndex == ABSENT) { return null; } else if (key.equals(alternatingKeysAndValues[keyIndex])) { return alternatingKeysAndValues[keyIndex ^ 1]; } } } }
Returns a hash table for the specified keys and values, and ensures that neither keys nor values are null. This method may update {@code alternatingKeysAndValues} if there are duplicate keys. If so, the return value will indicate how many entries are still valid, and will also include a {@link Builder.DuplicateKey} in case duplicate keys are not allowed now or will not be allowed on a later {@link Builder#buildOrThrow()} call. @param keyOffset 1 if this is the reverse direction of a BiMap, 0 otherwise. @return an {@code Object} that is a {@code byte[]}, {@code short[]}, or {@code int[]}, the smallest possible to fit {@code tableSize}; or an {@code Object[]} where [0] is one of these; [1] indicates how many element pairs in {@code alternatingKeysAndValues} are valid; and [2] is a {@link Builder.DuplicateKey} for the first duplicate key encountered.
java
android/guava/src/com/google/common/collect/RegularImmutableMap.java
320
[ "hashTableObject", "alternatingKeysAndValues", "size", "keyOffset", "key" ]
Object
true
16
6.96
google/guava
51,352
javadoc
false
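Guava's `RegularImmutableMap` stores keys and values in one flat alternating array and resolves collisions by linear probing over a compact index table (sized `byte`/`short`/`int` as needed). A rough Python sketch of the same lookup scheme — simplified to a single table width, with `-1` as the absent marker and a loosely modeled bit-mixer:

```python
def smear(h):
    # spread the hash bits; loosely modeled on Guava's Hashing.smear
    h ^= (h >> 16) & 0xFFFF
    return h & 0x7FFFFFFF

def build_table(entries, table_size):
    """table_size must be a power of two and larger than len(entries)."""
    flat = []                       # alternating [k0, v0, k1, v1, ...]
    table = [-1] * table_size
    mask = table_size - 1
    for k, v in entries:
        idx = len(flat)             # always even: keys sit at even slots
        flat.extend((k, v))
        h = smear(hash(k)) & mask
        while table[h] != -1:       # linear probing on collision
            h = (h + 1) & mask
        table[h] = idx
    return flat, table

def lookup(flat, table, key):
    mask = len(table) - 1
    h = smear(hash(key)) & mask
    while True:
        key_index = table[h]
        if key_index == -1:
            return None             # empty slot reached: key is absent
        if flat[key_index] == key:
            return flat[key_index + 1]   # value sits right after its key
        h = (h + 1) & mask
```

The Java code XORs the key index with 1 to flip between key and value slots; since keys here always land on even indices, `key_index + 1` is equivalent.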
wildcardTypeToString
private static String wildcardTypeToString(final WildcardType wildcardType) { final StringBuilder builder = new StringBuilder().append('?'); final Type[] lowerBounds = wildcardType.getLowerBounds(); final Type[] upperBounds = wildcardType.getUpperBounds(); if (lowerBounds.length > 1 || lowerBounds.length == 1 && lowerBounds[0] != null) { AMP_JOINER.join(builder.append(" super "), lowerBounds); } else if (upperBounds.length > 1 || upperBounds.length == 1 && !Object.class.equals(upperBounds[0])) { AMP_JOINER.join(builder.append(" extends "), upperBounds); } return builder.toString(); }
Formats a {@link WildcardType} as a {@link String}. @param wildcardType {@link WildcardType} to format. @return String.
java
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
1,711
[ "wildcardType" ]
String
true
7
7.76
apache/commons-lang
2,896
javadoc
false
resolveFieldValuesFor
private void resolveFieldValuesFor(Map<String, Object> values, TypeElement element) { try { this.fieldValuesParser.getFieldValues(element).forEach((name, value) -> { if (!values.containsKey(name)) { values.put(name, value); } }); } catch (Exception ex) { // continue } Element superType = this.typeUtils.asElement(element.getSuperclass()); if (superType instanceof TypeElement && superType.asType().getKind() != TypeKind.NONE) { resolveFieldValuesFor(values, (TypeElement) superType); } }
Resolve the default field values for the specified type element, walking up the superclass hierarchy and recording values for fields that are not already present. @param values the map of field values to populate @param element the type element to inspect
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/MetadataGenerationEnvironment.java
393
[ "values", "element" ]
void
true
5
7.44
spring-projects/spring-boot
79,428
javadoc
false
collectDependencyGroups
function collectDependencyGroups(externalImports: (ImportDeclaration | ImportEqualsDeclaration | ExportDeclaration)[]) { const groupIndices = new Map<string, number>(); const dependencyGroups: DependencyGroup[] = []; for (const externalImport of externalImports) { const externalModuleName = getExternalModuleNameLiteral(factory, externalImport, currentSourceFile, host, resolver, compilerOptions); if (externalModuleName) { const text = externalModuleName.text; const groupIndex = groupIndices.get(text); if (groupIndex !== undefined) { // deduplicate/group entries in dependency list by the dependency name dependencyGroups[groupIndex].externalImports.push(externalImport); } else { groupIndices.set(text, dependencyGroups.length); dependencyGroups.push({ name: externalModuleName, externalImports: [externalImport], }); } } } return dependencyGroups; }
Collects the dependency groups for this files imports. @param externalImports The imports for the file.
typescript
src/compiler/transformers/module/system.ts
279
[ "externalImports" ]
false
4
6.08
microsoft/TypeScript
107,154
jsdoc
false
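The grouping logic in `collectDependencyGroups` — a name-to-index map so repeated module names append to an existing group while first-seen order is preserved — is language-independent. A Python sketch where `(module_name, node)` pairs stand in for the compiler's AST import nodes:

```python
def collect_dependency_groups(external_imports):
    """Group imports by module name, preserving first-seen order."""
    group_indices = {}      # module name -> index into dependency_groups
    dependency_groups = []
    for module_name, node in external_imports:
        idx = group_indices.get(module_name)
        if idx is not None:
            # deduplicate/group entries by the dependency name
            dependency_groups[idx]["imports"].append(node)
        else:
            group_indices[module_name] = len(dependency_groups)
            dependency_groups.append({"name": module_name, "imports": [node]})
    return dependency_groups

groups = collect_dependency_groups(
    [("react", "i1"), ("lodash", "i2"), ("react", "i3")]
)
```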
getEnvVars
private Map<String, String> getEnvVars() { try { return System.getenv(); } catch (Exception e) { log.error("Could not read environment variables", e); throw new ConfigException("Could not read environment variables"); } }
Reads the environment variables of the current process. @return the environment variables as a map. @throws ConfigException if the environment variables could not be read.
java
clients/src/main/java/org/apache/kafka/common/config/provider/EnvVarConfigProvider.java
111
[]
true
2
7.76
apache/kafka
31,560
javadoc
false
partitions
public Set<TopicPartition> partitions() { return partitions; }
Returns all partitions for which no offsets are defined. @return all partitions without offsets
java
clients/src/main/java/org/apache/kafka/clients/consumer/NoOffsetForPartitionException.java
49
[]
true
1
6.8
apache/kafka
31,560
javadoc
false
removeAll
public static String removeAll(final String text, final String regex) { return replaceAll(text, regex, StringUtils.EMPTY); }
Removes each substring of the text String that matches the given regular expression. This method is a {@code null} safe equivalent to: <ul> <li>{@code text.replaceAll(regex, StringUtils.EMPTY)}</li> <li>{@code Pattern.compile(regex).matcher(text).replaceAll(StringUtils.EMPTY)}</li> </ul> <p>A {@code null} reference passed to this method is a no-op.</p> <p>Unlike in the {@link #removePattern(CharSequence, String)} method, the {@link Pattern#DOTALL} option is NOT automatically added. To use the DOTALL option prepend {@code "(?s)"} to the regex. DOTALL is also known as single-line mode in Perl.</p> <pre>{@code StringUtils.removeAll(null, *) = null StringUtils.removeAll("any", (String) null) = "any" StringUtils.removeAll("any", "") = "any" StringUtils.removeAll("any", ".*") = "" StringUtils.removeAll("any", ".+") = "" StringUtils.removeAll("abc", ".?") = "" StringUtils.removeAll("A<__>\n<__>B", "<.*>") = "A\nB" StringUtils.removeAll("A<__>\n<__>B", "(?s)<.*>") = "AB" StringUtils.removeAll("ABCabc123abc", "[a-z]") = "ABC123" }</pre> @param text text to remove from, may be null @param regex the regular expression to which this string is to be matched @return the text with any removes processed, {@code null} if null String input. @throws java.util.regex.PatternSyntaxException if the regular expression's syntax is invalid. @see #replaceAll(String, String, String) @see #removePattern(CharSequence, String) @see String#replaceAll(String, String) @see java.util.regex.Pattern @see java.util.regex.Pattern#DOTALL
java
src/main/java/org/apache/commons/lang3/RegExUtils.java
192
[ "text", "regex" ]
String
true
1
6.32
apache/commons-lang
2,896
javadoc
false
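The null-safe, no-DOTALL behavior documented for `removeAll` translates directly to Python's `re.sub`. A sketch reproducing the same contract, including the examples from the Javadoc:

```python
import re

def remove_all(text, regex):
    """None-safe removal of every match of regex from text.

    Like the Java version, DOTALL is not implied; prepend (?s) to the
    pattern to make '.' match newlines.
    """
    if text is None or regex is None:
        return text     # a None reference is a no-op
    return re.sub(regex, "", text)
```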
registerHints
public void registerHints(RuntimeHints hints) { for (Bindable<?> bindable : this.bindables) { try { new Processor(bindable).process(hints.reflection()); } catch (Exception ex) { logger.debug("Skipping hints for " + bindable, ex); } } }
Contribute hints to the given {@link RuntimeHints} instance. @param hints the hints contributed so far for the deployment unit
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/BindableRuntimeHintsRegistrar.java
95
[ "hints" ]
void
true
2
6.56
spring-projects/spring-boot
79,428
javadoc
false
reduce
public T reduce(final T identity, final BinaryOperator<T> accumulator) { makeTerminated(); return stream().reduce(identity, accumulator); }
Performs a reduction on the elements of this stream, using the provided identity value and an associative accumulation function, and returns the reduced value. This is equivalent to: <pre> {@code T result = identity; for (T element : this stream) result = accumulator.apply(result, element) return result; } </pre> but is not constrained to execute sequentially. <p> The {@code identity} value must be an identity for the accumulator function. This means that for all {@code t}, {@code accumulator.apply(identity, t)} is equal to {@code t}. The {@code accumulator} function must be an associative function. </p> <p> This is a terminal operation. </p> Note Sum, min, max, average, and string concatenation are all special cases of reduction. Summing a stream of numbers can be expressed as: <pre> {@code Integer sum = integers.reduce(0, (a, b) -> a + b); } </pre> or: <pre> {@code Integer sum = integers.reduce(0, Integer::sum); } </pre> <p> While this may seem a more roundabout way to perform an aggregation compared to simply mutating a running total in a loop, reduction operations parallelize more gracefully, without needing additional synchronization and with greatly reduced risk of data races. </p> @param identity the identity value for the accumulating function @param accumulator an associative, non-interfering, stateless function for combining two values @return the result of the reduction
java
src/main/java/org/apache/commons/lang3/stream/Streams.java
466
[ "identity", "accumulator" ]
T
true
1
6.48
apache/commons-lang
2,896
javadoc
false
doLoadDocument
protected Document doLoadDocument(InputSource inputSource, Resource resource) throws Exception { return this.documentLoader.loadDocument(inputSource, getEntityResolver(), this.errorHandler, getValidationModeForResource(resource), isNamespaceAware()); }
Actually load the specified document using the configured DocumentLoader. @param inputSource the SAX InputSource to read from @param resource the resource descriptor for the XML file @return the DOM Document @throws Exception when thrown from the DocumentLoader @see #setDocumentLoader @see DocumentLoader#loadDocument
java
spring-beans/src/main/java/org/springframework/beans/factory/xml/XmlBeanDefinitionReader.java
438
[ "inputSource", "resource" ]
Document
true
1
6
spring-projects/spring-framework
59,386
javadoc
false
abort
def abort(code: int | BaseResponse, *args: t.Any, **kwargs: t.Any) -> t.NoReturn: """Raise an :exc:`~werkzeug.exceptions.HTTPException` for the given status code. If :data:`~flask.current_app` is available, it will call its :attr:`~flask.Flask.aborter` object, otherwise it will use :func:`werkzeug.exceptions.abort`. :param code: The status code for the exception, which must be registered in ``app.aborter``. :param args: Passed to the exception. :param kwargs: Passed to the exception. .. versionadded:: 2.2 Calls ``current_app.aborter`` if available instead of always using Werkzeug's default ``abort``. """ if (ctx := _cv_app.get(None)) is not None: ctx.app.aborter(code, *args, **kwargs) _wz_abort(code, *args, **kwargs)
Raise an :exc:`~werkzeug.exceptions.HTTPException` for the given status code. If :data:`~flask.current_app` is available, it will call its :attr:`~flask.Flask.aborter` object, otherwise it will use :func:`werkzeug.exceptions.abort`. :param code: The status code for the exception, which must be registered in ``app.aborter``. :param args: Passed to the exception. :param kwargs: Passed to the exception. .. versionadded:: 2.2 Calls ``current_app.aborter`` if available instead of always using Werkzeug's default ``abort``.
python
src/flask/helpers.py
265
[ "code" ]
t.NoReturn
true
2
6.4
pallets/flask
70,946
sphinx
false
printDebugInfo
static void printDebugInfo(raw_ostream &OS, const MCInst &Instruction, const BinaryFunction *Function, DWARFContext *DwCtx) { const ClusteredRows *LineTableRows = ClusteredRows::fromSMLoc(Instruction.getLoc()); if (LineTableRows == nullptr) return; // File name and line number should be the same for all CUs. // So it is sufficient to check the first one. DebugLineTableRowRef RowRef = LineTableRows->getRows().front(); const DWARFDebugLine::LineTable *LineTable = DwCtx->getLineTableForUnit( DwCtx->getCompileUnitForOffset(RowRef.DwCompileUnitIndex)); if (!LineTable) return; const DWARFDebugLine::Row &Row = LineTable->Rows[RowRef.RowIndex - 1]; StringRef FileName = ""; if (std::optional<const char *> FName = dwarf::toString(LineTable->Prologue.getFileNameEntry(Row.File).Name)) FileName = *FName; OS << " # debug line " << FileName << ":" << Row.Line; if (Row.Column) OS << ":" << Row.Column; if (Row.Discriminator) OS << " discriminator:" << Row.Discriminator; }
Print the debug line information (file name, line, column and discriminator) recorded for the given instruction, using the DWARF line table of its compile unit.
cpp
bolt/lib/Core/BinaryContext.cpp
2,081
[]
true
6
7.2
llvm/llvm-project
36,021
doxygen
false
getTopicMetadata
private Map<String, List<PartitionInfo>> getTopicMetadata(MetadataRequest.Builder request, Timer timer) { // Save the round trip if no topics are requested. if (!request.isAllTopics() && request.emptyTopicList()) return Collections.emptyMap(); long attempts = 0L; do { RequestFuture<ClientResponse> future = sendMetadataRequest(request); client.poll(future, timer); if (future.failed() && !future.isRetriable()) throw future.exception(); if (future.succeeded()) { MetadataResponse response = (MetadataResponse) future.value().responseBody(); Cluster cluster = response.buildCluster(); Set<String> unauthorizedTopics = cluster.unauthorizedTopics(); if (!unauthorizedTopics.isEmpty()) throw new TopicAuthorizationException(unauthorizedTopics); boolean shouldRetry = false; Map<String, Errors> errors = response.errors(); if (!errors.isEmpty()) { // if there were errors, we need to check whether they were fatal or whether // we should just retry log.debug("Topic metadata fetch included errors: {}", errors); for (Map.Entry<String, Errors> errorEntry : errors.entrySet()) { String topic = errorEntry.getKey(); Errors error = errorEntry.getValue(); if (error == Errors.INVALID_TOPIC_EXCEPTION) throw new InvalidTopicException("Topic '" + topic + "' is invalid"); else if (error == Errors.UNKNOWN_TOPIC_OR_PARTITION) // if a requested topic is unknown, we just continue and let it be absent // in the returned map continue; else if (error.exception() instanceof RetriableException) shouldRetry = true; else throw new KafkaException("Unexpected error fetching metadata for topic " + topic, error.exception()); } } if (!shouldRetry) { HashMap<String, List<PartitionInfo>> topicsPartitionInfos = new HashMap<>(); for (String topic : cluster.topics()) topicsPartitionInfos.put(topic, cluster.partitionsForTopic(topic)); return topicsPartitionInfos; } } timer.sleep(retryBackoff.backoff(attempts++)); } while (timer.notExpired()); throw new TimeoutException("Timeout expired while fetching topic metadata"); 
}
Get metadata for all topics present in Kafka cluster. @param request The MetadataRequest to send @param timer Timer bounding how long this method can block @return The map of topics with their partition information
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/TopicMetadataFetcher.java
94
[ "request", "timer" ]
true
12
7.92
apache/kafka
31,560
javadoc
false
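`getTopicMetadata` is a classic bounded retry loop: attempt, classify the error as fatal or retriable, back off, and give up at a deadline. The shape of that loop, sketched in Python — `fetch_once` is a hypothetical stand-in for sending the metadata request and classifying the response errors:

```python
import time

def fetch_with_retry(fetch_once, timeout_s, backoff_s=0.05):
    """Retry fetch_once until it yields a result or the deadline passes.

    fetch_once returns a (result, retriable) pair: result is None on
    failure, and retriable says whether the failure may be retried.
    """
    deadline = time.monotonic() + timeout_s
    attempts = 0
    while time.monotonic() < deadline:
        result, retriable = fetch_once()
        if result is not None:
            return result
        if not retriable:
            raise RuntimeError("fatal error fetching metadata")
        # bounded exponential backoff before the next attempt
        time.sleep(min(backoff_s * (2 ** attempts), 0.2))
        attempts += 1
    raise TimeoutError("Timeout expired while fetching topic metadata")
```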
convertSingleImport
function convertSingleImport( name: BindingName, moduleSpecifier: StringLiteralLike, checker: TypeChecker, identifiers: Identifiers, target: ScriptTarget, quotePreference: QuotePreference, ): ConvertedImports { switch (name.kind) { case SyntaxKind.ObjectBindingPattern: { const importSpecifiers = mapAllOrFail(name.elements, e => e.dotDotDotToken || e.initializer || e.propertyName && !isIdentifier(e.propertyName) || !isIdentifier(e.name) ? undefined : makeImportSpecifier(e.propertyName && e.propertyName.text, e.name.text)); if (importSpecifiers) { return convertedImports([makeImport(/*defaultImport*/ undefined, importSpecifiers, moduleSpecifier, quotePreference)]); } } // falls through -- object destructuring has an interesting pattern and must be a variable declaration case SyntaxKind.ArrayBindingPattern: { /* import x from "x"; const [a, b, c] = x; */ const tmp = makeUniqueName(moduleSpecifierToValidIdentifier(moduleSpecifier.text, target), identifiers); return convertedImports([ makeImport(factory.createIdentifier(tmp), /*namedImports*/ undefined, moduleSpecifier, quotePreference), makeConst(/*modifiers*/ undefined, getSynthesizedDeepClone(name), factory.createIdentifier(tmp)), ]); } case SyntaxKind.Identifier: return convertSingleIdentifierImport(name, moduleSpecifier, checker, identifiers, quotePreference); default: return Debug.assertNever(name, `Convert to ES module got invalid name kind ${(name as BindingName).kind}`); } }
Converts `const <<name>> = require("x");`. Returns nodes that will replace the variable declaration for the commonjs import. May also make use of `changes` to remove qualifiers at the use sites of imports, to change `mod.x` to `x`.
typescript
src/services/codefixes/convertToEsModule.ts
485
[ "name", "moduleSpecifier", "checker", "identifiers", "target", "quotePreference" ]
true
8
6
microsoft/TypeScript
107,154
jsdoc
false
deleteConsumerGroups
DeleteConsumerGroupsResult deleteConsumerGroups(Collection<String> groupIds, DeleteConsumerGroupsOptions options);
Delete consumer groups from the cluster. @param groupIds The IDs of the groups to delete. @param options The options to use when deleting a consumer group. @return The DeleteConsumerGroupsResult.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
984
[ "groupIds", "options" ]
DeleteConsumerGroupsResult
true
1
6
apache/kafka
31,560
javadoc
false
getSplitNanoTime
public long getSplitNanoTime() { if (splitState != SplitState.SPLIT) { throw new IllegalStateException("Stopwatch must be split to get the split time."); } return splits.get(splits.size() - 1).getRight().toNanos(); }
Gets the split time in nanoseconds. <p> This is the time between start and latest split. </p> @return the split time in nanoseconds. @throws IllegalStateException if this StopWatch has not yet been split. @since 3.0
java
src/main/java/org/apache/commons/lang3/time/StopWatch.java
436
[]
true
2
8.08
apache/commons-lang
2,896
javadoc
false
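The split-state guard in `getSplitNanoTime` — fail fast unless a split exists, then report the elapsed time at the latest split — sketched as a tiny Python class using the monotonic clock (names are hypothetical; `StopWatch` itself tracks richer state):

```python
import time

class SplitStopwatch:
    """Minimal start/split stopwatch; split times are kept in a list."""

    def __init__(self):
        self._start = None
        self.splits = []    # elapsed nanoseconds recorded at each split

    def start(self):
        self._start = time.monotonic_ns()

    def split(self):
        if self._start is None:
            raise RuntimeError("Stopwatch must be started before splitting.")
        self.splits.append(time.monotonic_ns() - self._start)

    def get_split_nano_time(self):
        if not self.splits:
            raise RuntimeError("Stopwatch must be split to get the split time.")
        return self.splits[-1]   # time between start and the latest split
```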
cloneIfNecessary
@Override public AutowireCandidateResolver cloneIfNecessary() { try { return (AutowireCandidateResolver) clone(); } catch (CloneNotSupportedException ex) { throw new IllegalStateException(ex); } }
This implementation clones all instance fields through standard {@link Cloneable} support, allowing for subsequent reconfiguration of the cloned instance through a fresh {@link #setBeanFactory} call. @see #clone()
java
spring-beans/src/main/java/org/springframework/beans/factory/support/GenericTypeAwareAutowireCandidateResolver.java
205
[]
AutowireCandidateResolver
true
2
6.08
spring-projects/spring-framework
59,386
javadoc
false
showError
function showError() { vscode.window.showWarningMessage(vscode.l10n.t("Problem finding gulp tasks. See the output for more information."), vscode.l10n.t("Go to output")).then((choice) => { if (choice !== undefined) { _channel.show(true); } }); }
Show a warning that gulp task detection failed, offering to open the output channel for details.
typescript
extensions/gulp/src/main.ts
80
[]
false
2
7.12
microsoft/vscode
179,840
jsdoc
false
_find_option_with_arg
def _find_option_with_arg(argv, short_opts=None, long_opts=None): """Search argv for options specifying short and longopt alternatives. Returns: str: value for option found Raises: KeyError: if option not found. """ for i, arg in enumerate(argv): if arg.startswith('-'): if long_opts and arg.startswith('--'): name, sep, val = arg.partition('=') if name in long_opts: return val if sep else argv[i + 1] if short_opts and arg in short_opts: return argv[i + 1] raise KeyError('|'.join(short_opts or [] + long_opts or []))
Search argv for options specifying short and longopt alternatives. Returns: str: value for option found Raises: KeyError: if option not found.
python
celery/__init__.py
81
[ "argv", "short_opts", "long_opts" ]
false
11
6.96
celery/celery
27,741
unknown
false
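A self-contained copy of the option parser above, for demonstration. One detail worth noting: in the original, `'|'.join(short_opts or [] + long_opts or [])` parses as `short_opts or ([] + long_opts) or []` because `+` binds tighter than `or`; the copy below parenthesizes what appears to be the intended concatenation:

```python
def find_option_with_arg(argv, short_opts=None, long_opts=None):
    """Search argv for an option given as --name=value, --name value, or -n value."""
    for i, arg in enumerate(argv):
        if arg.startswith('-'):
            if long_opts and arg.startswith('--'):
                name, sep, val = arg.partition('=')
                if name in long_opts:
                    # '--app=proj' yields val; '--app proj' takes the next token
                    return val if sep else argv[i + 1]
            if short_opts and arg in short_opts:
                return argv[i + 1]
    raise KeyError('|'.join((short_opts or []) + (long_opts or [])))
```

Both spellings resolve to the same value, and a miss raises `KeyError` naming the options that were searched.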