Dataset schema (one row per function):

function_name    string   (lengths 1–57)
function_code    string   (lengths 20–4.99k)
documentation    string   (lengths 50–2k)
language         string   (5 classes)
file_path        string   (lengths 8–166)
line_number      int32    (4–16.7k)
parameters       list     (lengths 0–20)
return_type      string   (lengths 0–131)
has_type_hints   bool     (2 classes)
complexity       int32    (1–51)
quality_score    float32  (6–9.68)
repo_name        string   (34 classes)
repo_stars       int32    (2.9k–242k)
docstring_style  string   (7 classes)
is_async         bool     (2 classes)
completeExceptionally
@Override public boolean completeExceptionally(Throwable ex) { throw erroneousCompletionException(); }
Completes this future exceptionally. For internal use by the Kafka clients, not by user code. @param throwable the exception. @return {@code true} if this invocation caused this CompletableFuture to transition to a completed state, else {@code false}
java
clients/src/main/java/org/apache/kafka/common/internals/KafkaCompletableFuture.java
57
[ "ex" ]
true
1
6.64
apache/kafka
31,560
javadoc
false
byteValue
@Override public byte byteValue() { return value; }
Returns the value of this MutableByte as a byte. @return the numeric value represented by this object after conversion to type byte.
java
src/main/java/org/apache/commons/lang3/mutable/MutableByte.java
136
[]
true
1
6.48
apache/commons-lang
2,896
javadoc
false
nancumprod
def nancumprod(a, axis=None, dtype=None, out=None): """ Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. The cumulative product does not change when NaNs are encountered and leading NaNs are replaced by ones. Ones are returned for slices that are all-NaN or empty. Parameters ---------- a : array_like Input array. axis : int, optional Axis along which the cumulative product is computed. By default the input is flattened. dtype : dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If *dtype* is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns ------- nancumprod : ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See Also -------- numpy.cumprod : Cumulative product across array propagating NaNs. isnan : Show which elements are NaN. Examples -------- >>> import numpy as np >>> np.nancumprod(1) array([1]) >>> np.nancumprod([1]) array([1]) >>> np.nancumprod([1, np.nan]) array([1., 1.]) >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nancumprod(a) array([1., 2., 6., 6.]) >>> np.nancumprod(a, axis=0) array([[1., 2.], [3., 2.]]) >>> np.nancumprod(a, axis=1) array([[1., 2.], [3., 3.]]) """ a, mask = _replace_nan(a, 1) return np.cumprod(a, axis=axis, dtype=dtype, out=out)
Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. The cumulative product does not change when NaNs are encountered and leading NaNs are replaced by ones. Ones are returned for slices that are all-NaN or empty. Parameters ---------- a : array_like Input array. axis : int, optional Axis along which the cumulative product is computed. By default the input is flattened. dtype : dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If *dtype* is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. out : ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns ------- nancumprod : ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See Also -------- numpy.cumprod : Cumulative product across array propagating NaNs. isnan : Show which elements are NaN. Examples -------- >>> import numpy as np >>> np.nancumprod(1) array([1]) >>> np.nancumprod([1]) array([1]) >>> np.nancumprod([1, np.nan]) array([1., 1.]) >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nancumprod(a) array([1., 2., 6., 6.]) >>> np.nancumprod(a, axis=0) array([[1., 2.], [3., 2.]]) >>> np.nancumprod(a, axis=1) array([[1., 2.], [3., 3.]])
python
numpy/lib/_nanfunctions_impl.py
886
[ "a", "axis", "dtype", "out" ]
false
1
6.4
numpy/numpy
31,054
numpy
false
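The documented behavior of the `nancumprod` row above (NaNs treated as one, so the running product is unchanged when a NaN appears) can be checked directly with NumPy; a minimal sketch:

```python
import numpy as np

# NaNs are replaced by 1 before the cumulative product, so the
# running product does not change when a NaN is encountered.
a = np.array([[1, 2], [3, np.nan]])

flat = np.nancumprod(a)             # flattened: 1, 2, 6, 6
by_rows = np.nancumprod(a, axis=0)  # column-wise: NaN contributes a factor of 1
```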
sendMetadataRequest
private RequestFuture<ClientResponse> sendMetadataRequest(MetadataRequest.Builder request) { final Node node = client.leastLoadedNode(); if (node == null) return RequestFuture.noBrokersAvailable(); else return client.send(node, request); }
Send a metadata request to the least loaded node in the Kafka cluster asynchronously. @return a future that indicates the result of the sent metadata request
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/TopicMetadataFetcher.java
159
[ "request" ]
true
2
6.24
apache/kafka
31,560
javadoc
false
optString
public String optString(String name, String fallback) { Object object = opt(name); String result = JSON.toString(object); return result != null ? result : fallback; }
Returns the value mapped by {@code name} if it exists, coercing it if necessary. Returns {@code fallback} if no such mapping exists. @param name the name of the property @param fallback a fallback value @return the value or {@code fallback}
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONObject.java
582
[ "name", "fallback" ]
String
true
2
8.24
spring-projects/spring-boot
79,428
javadoc
false
getAbbreviatedName
public static String getAbbreviatedName(final Class<?> cls, final int lengthHint) { if (cls == null) { return StringUtils.EMPTY; } return getAbbreviatedName(cls.getName(), lengthHint); }
Gets the abbreviated name of a {@link Class}. @param cls the class to get the abbreviated name for, may be {@code null}. @param lengthHint the desired length of the abbreviated name. @return the abbreviated name or an empty string. @throws IllegalArgumentException if len <= 0. @see #getAbbreviatedName(String, int) @since 3.4
java
src/main/java/org/apache/commons/lang3/ClassUtils.java
246
[ "cls", "lengthHint" ]
String
true
2
8.08
apache/commons-lang
2,896
javadoc
false
ObserverContainer
ObserverContainer(ObserverContainer&&) = delete;
Invokes an observer interface method on all observers. @param fn Function to call for each observer that takes a pointer to the observer and invokes the interface method.
cpp
folly/ObserverContainer.h
1,011
[]
true
2
6.32
facebook/folly
30,157
doxygen
false
handle
void handle(ConfigurationClass configClass, DeferredImportSelector importSelector) { DeferredImportSelectorHolder holder = new DeferredImportSelectorHolder(configClass, importSelector); if (this.deferredImportSelectors == null) { DeferredImportSelectorGroupingHandler handler = new DeferredImportSelectorGroupingHandler(); handler.register(holder); handler.processGroupImports(); } else { this.deferredImportSelectors.add(holder); } }
Handle the specified {@link DeferredImportSelector}. If deferred import selectors are being collected, this registers this instance to the list. If they are being processed, the {@link DeferredImportSelector} is also processed immediately according to its {@link DeferredImportSelector.Group}. @param configClass the source configuration class @param importSelector the selector to handle
java
spring-context/src/main/java/org/springframework/context/annotation/ConfigurationClassParser.java
811
[ "configClass", "importSelector" ]
void
true
2
6.24
spring-projects/spring-framework
59,386
javadoc
false
find_mismatched_vars
def find_mismatched_vars( var: Any, types: type | tuple[type, ...], allow_none: bool = False ) -> set[VariableTracker]: """ Recursively finds variables whose type is not an instance of the specified types. Args: var: The variable to check. types: A tuple of allowed types. allow_none (bool): Whether to allow None values. Defaults to False. Returns: A set of variables whose type is not an instance of the specified types. """ mismatched_vars = set() if isinstance(var, (list, tuple)): for item in var: mismatched_vars.update(find_mismatched_vars(item, types, allow_none)) elif isinstance(var, (TupleVariable, ListVariable)): for item in var.items: mismatched_vars.update(find_mismatched_vars(item, types, allow_none)) elif isinstance(var, ConstDictVariable): for value in var.items.values(): mismatched_vars.update(find_mismatched_vars(value, types, allow_none)) else: if not isinstance(var, types) and not (allow_none and var.is_constant_none()): mismatched_vars.add(var) return mismatched_vars
Recursively finds variables whose type is not an instance of the specified types. Args: var: The variable to check. types: A tuple of allowed types. allow_none (bool): Whether to allow None values. Defaults to False. Returns: A set of variables whose type is not an instance of the specified types.
python
torch/_dynamo/variables/higher_order_ops.py
196
[ "var", "types", "allow_none" ]
set[VariableTracker]
true
11
8.08
pytorch/pytorch
96,034
google
false
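The recursion in `find_mismatched_vars` is easy to mirror on plain containers. A standalone sketch of the same traversal pattern (the original walks Dynamo `VariableTracker` types, which are assumed unavailable here; `find_mismatched` is a hypothetical analogue over lists, tuples, and dicts):

```python
def find_mismatched(var, types, allow_none=False):
    # Recursively collect leaf values whose type is not in `types`.
    found = []
    if isinstance(var, (list, tuple)):
        for item in var:
            found.extend(find_mismatched(item, types, allow_none))
    elif isinstance(var, dict):
        for value in var.values():
            found.extend(find_mismatched(value, types, allow_none))
    elif not isinstance(var, types) and not (allow_none and var is None):
        found.append(var)
    return found
```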
memberId
protected synchronized String memberId() { return generation.memberId; }
Get the current generation state if the group is stable, otherwise return null @return the current generation or null
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
1,057
[]
String
true
1
6.32
apache/kafka
31,560
javadoc
false
strides_symbolic
def strides_symbolic(self) -> tuple[tuple[sympy.Integer, ...], ...]: """ Get the symbolic strides of all input nodes. Returns: A tuple of stride tuples for each input node """ return tuple(node.get_stride() for node in self._input_nodes)
Get the symbolic strides of all input nodes. Returns: A tuple of stride tuples for each input node
python
torch/_inductor/kernel_inputs.py
132
[ "self" ]
tuple[tuple[sympy.Integer, ...], ...]
true
1
6.56
pytorch/pytorch
96,034
unknown
false
resolveArguments
private @Nullable AutowiredArguments resolveArguments(RegisteredBean registeredBean, Method method) { String beanName = registeredBean.getBeanName(); Class<?> beanClass = registeredBean.getBeanClass(); ConfigurableBeanFactory beanFactory = registeredBean.getBeanFactory(); Assert.isInstanceOf(AutowireCapableBeanFactory.class, beanFactory); AutowireCapableBeanFactory autowireCapableBeanFactory = (AutowireCapableBeanFactory) beanFactory; int argumentCount = method.getParameterCount(); @Nullable Object[] arguments = new Object[argumentCount]; Set<String> autowiredBeanNames = CollectionUtils.newLinkedHashSet(argumentCount); TypeConverter typeConverter = beanFactory.getTypeConverter(); for (int i = 0; i < argumentCount; i++) { MethodParameter parameter = new MethodParameter(method, i); DependencyDescriptor descriptor = new DependencyDescriptor(parameter, this.required); descriptor.setContainingClass(beanClass); String shortcut = (this.shortcutBeanNames != null ? this.shortcutBeanNames[i] : null); if (shortcut != null) { descriptor = new ShortcutDependencyDescriptor(descriptor, shortcut); } try { Object argument = autowireCapableBeanFactory.resolveDependency( descriptor, beanName, autowiredBeanNames, typeConverter); if (argument == null && !this.required) { return null; } arguments[i] = argument; } catch (BeansException ex) { throw new UnsatisfiedDependencyException(null, beanName, new InjectionPoint(parameter), ex); } } registerDependentBeans(beanFactory, beanName, autowiredBeanNames); return AutowiredArguments.of(arguments); }
Resolve the method arguments for the specified registered bean and invoke the method using reflection. @param registeredBean the registered bean @param instance the bean instance
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/AutowiredMethodArgumentsResolver.java
156
[ "registeredBean", "method" ]
AutowiredArguments
true
7
6.24
spring-projects/spring-framework
59,386
javadoc
false
_encode_relation
def _encode_relation(self, name): '''(INTERNAL) Encodes a relation line. The relation declaration is a line with the format ``@RELATION <relation-name>``, where ``relation-name`` is a string. :param name: a string. :return: a string with the encoded relation declaration. ''' for char in ' %{},': if char in name: name = '"%s"'%name break return '%s %s'%(_TK_RELATION, name)
(INTERNAL) Encodes a relation line. The relation declaration is a line with the format ``@RELATION <relation-name>``, where ``relation-name`` is a string. :param name: a string. :return: a string with the encoded relation declaration.
python
sklearn/externals/_arff.py
921
[ "self", "name" ]
false
3
7.12
scikit-learn/scikit-learn
64,340
sphinx
false
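The quoting rule in `_encode_relation` is self-contained enough to re-implement outside the class; a standalone sketch (`encode_relation` is a hypothetical free function, with the `@RELATION` token inlined in place of the module constant `_TK_RELATION`):

```python
def encode_relation(name):
    # Quote the relation name if it contains any ARFF special character.
    for char in ' %{},':
        if char in name:
            name = '"%s"' % name
            break
    return '@RELATION %s' % name
```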
_to_object_array
def _to_object_array(sequence): """Convert sequence to a 1-D NumPy array of object dtype. numpy.array constructor has a similar use but its output is ambiguous. It can be a 1-D NumPy array of object dtype if the input is a ragged array, but if the input is a list of equal length arrays, then the output is a 2D numpy.array. _to_object_array solves this ambiguity by guaranteeing that the output is a 1-D NumPy array of objects for any input. Parameters ---------- sequence : array-like of shape (n_elements,) The sequence to be converted. Returns ------- out : ndarray of shape (n_elements,), dtype=object The converted sequence into a 1-D NumPy array of object dtype. Examples -------- >>> import numpy as np >>> from sklearn.utils.validation import _to_object_array >>> _to_object_array([np.array([0]), np.array([1])]) array([array([0]), array([1])], dtype=object) >>> _to_object_array([np.array([0]), np.array([1, 2])]) array([array([0]), array([1, 2])], dtype=object) """ out = np.empty(len(sequence), dtype=object) out[:] = sequence return out
Convert sequence to a 1-D NumPy array of object dtype. numpy.array constructor has a similar use but its output is ambiguous. It can be a 1-D NumPy array of object dtype if the input is a ragged array, but if the input is a list of equal length arrays, then the output is a 2D numpy.array. _to_object_array solves this ambiguity by guaranteeing that the output is a 1-D NumPy array of objects for any input. Parameters ---------- sequence : array-like of shape (n_elements,) The sequence to be converted. Returns ------- out : ndarray of shape (n_elements,), dtype=object The converted sequence into a 1-D NumPy array of object dtype. Examples -------- >>> import numpy as np >>> from sklearn.utils.validation import _to_object_array >>> _to_object_array([np.array([0]), np.array([1])]) array([array([0]), array([1])], dtype=object) >>> _to_object_array([np.array([0]), np.array([1, 2])]) array([array([0]), array([1, 2])], dtype=object)
python
sklearn/utils/validation.py
2,597
[ "sequence" ]
false
1
6
scikit-learn/scikit-learn
64,340
numpy
false
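The ambiguity that `_to_object_array` works around is easy to demonstrate: for equal-length inputs, `np.array` stacks to 2-D, while pre-allocating a 1-D object array keeps one element per input. A minimal sketch (`to_object_array` mirrors the two-line body shown above):

```python
import numpy as np

def to_object_array(sequence):
    # Pre-allocating a 1-D object array and assigning into it avoids
    # np.array's ambiguity: equal-length inputs would otherwise stack to 2-D.
    out = np.empty(len(sequence), dtype=object)
    out[:] = sequence
    return out

equal = to_object_array([np.array([0]), np.array([1])])
stacked = np.array([np.array([0]), np.array([1])])  # stacks to shape (2, 1)
```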
matchesBeanType
private boolean matchesBeanType(Class<?> targetType, String beanName, BeanFactory beanFactory) { Class<?> beanType = beanFactory.getType(beanName); return (beanType != null && targetType.isAssignableFrom(beanType)); }
Retrieve all applicable Lifecycle beans: all singletons that have already been created, as well as all SmartLifecycle beans (even if they are marked as lazy-init). @return the Map of applicable beans, with bean names as keys and bean instances as values
java
spring-context/src/main/java/org/springframework/context/support/DefaultLifecycleProcessor.java
548
[ "targetType", "beanName", "beanFactory" ]
true
2
6.48
spring-projects/spring-framework
59,386
javadoc
false
tokenRangeFromRange
function tokenRangeFromRange(from: SyntaxKind, to: SyntaxKind, except: readonly SyntaxKind[] = []): TokenRange { const tokens: SyntaxKind[] = []; for (let token = from; token <= to; token++) { if (!contains(except, token)) { tokens.push(token); } } return tokenRangeFrom(tokens); }
A rule takes a two tokens (left/right) and a particular context for which you're meant to look at them. You then declare what should the whitespace annotation be between these tokens via the action param. @param debugName Name to print @param left The left side of the comparison @param right The right side of the comparison @param context A set of filters to narrow down the space in which this formatter rule applies @param action a declaration of the expected whitespace @param flags whether the rule deletes a line or not, defaults to no-op
typescript
src/services/formatting/rules.ts
455
[ "from", "to", "except" ]
true
3
6.24
microsoft/TypeScript
107,154
jsdoc
false
additionalArgumentsStartsWith
private boolean additionalArgumentsStartsWith(Predicate<@Nullable Object> startsWith) { if (this.additionalArguments == null) { return false; } return Stream.of(this.additionalArguments).anyMatch(startsWith); }
Use a specific filter to determine when a callback should apply. If no explicit filter is set filter will be attempted using the generic type on the callback type. @param filter the filter to use @return this instance @since 3.4.8
java
core/spring-boot/src/main/java/org/springframework/boot/util/LambdaSafe.java
182
[ "startsWith" ]
true
2
8.24
spring-projects/spring-boot
79,428
javadoc
false
getTopLevelType
private Element getTopLevelType(Element element) { if (!(element.getEnclosingElement() instanceof TypeElement)) { return element; } return getTopLevelType(element.getEnclosingElement()); }
Return if this property has been explicitly marked as nested (for example using an annotation}. @param environment the metadata generation environment @return if the property has been marked as nested
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/PropertyDescriptor.java
173
[ "element" ]
Element
true
2
7.92
spring-projects/spring-boot
79,428
javadoc
false
attach
public static void attach(Environment environment) { Assert.isInstanceOf(ConfigurableEnvironment.class, environment); MutablePropertySources sources = ((ConfigurableEnvironment) environment).getPropertySources(); PropertySource<?> attached = getAttached(sources); if (!isUsingSources(attached, sources)) { attached = new ConfigurationPropertySourcesPropertySource(ATTACHED_PROPERTY_SOURCE_NAME, new SpringConfigurationPropertySources(sources)); } sources.remove(ATTACHED_PROPERTY_SOURCE_NAME); sources.addFirst(attached); }
Attach a {@link ConfigurationPropertySource} support to the specified {@link Environment}. Adapts each {@link PropertySource} managed by the environment to a {@link ConfigurationPropertySource} and allows classic {@link PropertySourcesPropertyResolver} calls to resolve using {@link ConfigurationPropertyName configuration property names}. <p> The attached resolver will dynamically track any additions or removals from the underlying {@link Environment} property sources. @param environment the source environment (must be an instance of {@link ConfigurableEnvironment}) @see #get(Environment)
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertySources.java
89
[ "environment" ]
void
true
2
6.24
spring-projects/spring-boot
79,428
javadoc
false
toString
public static String toString(final Boolean bool, final String trueString, final String falseString, final String nullString) { if (bool == null) { return nullString; } return bool.booleanValue() ? trueString : falseString; }
Converts a Boolean to a String returning one of the input Strings. <pre> BooleanUtils.toString(Boolean.TRUE, "true", "false", null) = "true" BooleanUtils.toString(Boolean.FALSE, "true", "false", null) = "false" BooleanUtils.toString(null, "true", "false", null) = null; </pre> @param bool the Boolean to check @param trueString the String to return if {@code true}, may be {@code null} @param falseString the String to return if {@code false}, may be {@code null} @param nullString the String to return if {@code null}, may be {@code null} @return one of the three input Strings
java
src/main/java/org/apache/commons/lang3/BooleanUtils.java
1,037
[ "bool", "trueString", "falseString", "nullString" ]
String
true
3
8.08
apache/commons-lang
2,896
javadoc
false
sqr7uBulkWithOffsets
private static void sqr7uBulkWithOffsets( MemorySegment a, MemorySegment b, int length, int pitch, MemorySegment offsets, int count, MemorySegment result ) { try { JdkVectorLibrary.sqr7uBulkWithOffsets$mh.invokeExact(a, b, length, pitch, offsets, count, result); } catch (Throwable t) { throw new AssertionError(t); } }
Computes the square distance of given float32 vectors. @param a address of the first vector @param b address of the second vector @param elementCount the vector dimensions, number of float32 elements in the segment
java
libs/native/src/main/java/org/elasticsearch/nativeaccess/jdk/JdkVectorLibrary.java
339
[ "a", "b", "length", "pitch", "offsets", "count", "result" ]
void
true
2
6.56
elastic/elasticsearch
75,680
javadoc
false
updateLayerIndex
private void updateLayerIndex(JarArchiveEntry entry, @Nullable Library library) { if (this.layers != null && this.layersIndex != null && !entry.getName().endsWith("/")) { Layer layer = (library != null) ? this.layers.getLayer(library) : this.layers.getLayer(entry.getName()); this.layersIndex.add(layer, entry.getName()); } }
Perform the actual write of a {@link JarEntry}. All other write methods delegate to this one. @param entry the entry to write @param library the library for the entry or {@code null} @param entryWriter the entry writer or {@code null} if there is no content @throws IOException in case of I/O errors
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/AbstractJarWriter.java
269
[ "entry", "library" ]
void
true
5
6.72
spring-projects/spring-boot
79,428
javadoc
false
containsDescendantOfForRandom
private static ConfigurationPropertyState containsDescendantOfForRandom(String prefix, ConfigurationPropertyName name) { if (name.getNumberOfElements() > 1 && name.getElement(0, Form.DASHED).equals(prefix)) { return ConfigurationPropertyState.PRESENT; } return ConfigurationPropertyState.ABSENT; }
Create a new {@link SpringConfigurationPropertySource} implementation. @param propertySource the source property source @param systemEnvironmentSource if the source is from the system environment @param mappers the property mappers
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/SpringConfigurationPropertySource.java
134
[ "prefix", "name" ]
ConfigurationPropertyState
true
3
6.08
spring-projects/spring-boot
79,428
javadoc
false
buildLazyResourceProxy
private Object buildLazyResourceProxy(RegisteredBean registeredBean) { Class<?> lookupType = getLookupType(registeredBean); TargetSource ts = new TargetSource() { @Override public Class<?> getTargetClass() { return lookupType; } @Override public Object getTarget() { return resolveValue(registeredBean); } }; ProxyFactory pf = new ProxyFactory(); pf.setTargetSource(ts); if (lookupType.isInterface()) { pf.addInterface(lookupType); } return pf.getProxy(registeredBean.getBeanFactory().getBeanClassLoader()); }
Create a suitable {@link DependencyDescriptor} for the specified bean. @param registeredBean the registered bean @return a descriptor for that bean
java
spring-context/src/main/java/org/springframework/context/annotation/ResourceElementResolver.java
150
[ "registeredBean" ]
Object
true
2
7.28
spring-projects/spring-framework
59,386
javadoc
false
from_ordinals
def from_ordinals(cls, ordinals, *, freq, name=None) -> Self: """ Construct a PeriodIndex from ordinals. Parameters ---------- ordinals : array-like of int The period offsets from the proleptic Gregorian epoch. freq : str or period object One of pandas period strings or corresponding objects. name : str, default None Name of the resulting PeriodIndex. Returns ------- PeriodIndex See Also -------- PeriodIndex.from_fields : Construct a PeriodIndex from fields (year, month, day, etc.). PeriodIndex.to_timestamp : Cast to DatetimeArray/Index. Examples -------- >>> idx = pd.PeriodIndex.from_ordinals([-1, 0, 1], freq="Q") >>> idx PeriodIndex(['1969Q4', '1970Q1', '1970Q2'], dtype='period[Q-DEC]') """ ordinals = np.asarray(ordinals, dtype=np.int64) dtype = PeriodDtype(freq) data = PeriodArray._simple_new(ordinals, dtype=dtype) return cls._simple_new(data, name=name)
Construct a PeriodIndex from ordinals. Parameters ---------- ordinals : array-like of int The period offsets from the proleptic Gregorian epoch. freq : str or period object One of pandas period strings or corresponding objects. name : str, default None Name of the resulting PeriodIndex. Returns ------- PeriodIndex See Also -------- PeriodIndex.from_fields : Construct a PeriodIndex from fields (year, month, day, etc.). PeriodIndex.to_timestamp : Cast to DatetimeArray/Index. Examples -------- >>> idx = pd.PeriodIndex.from_ordinals([-1, 0, 1], freq="Q") >>> idx PeriodIndex(['1969Q4', '1970Q1', '1970Q2'], dtype='period[Q-DEC]')
python
pandas/core/indexes/period.py
322
[ "cls", "ordinals", "freq", "name" ]
Self
true
1
6.64
pandas-dev/pandas
47,362
numpy
false
groups
def groups(self) -> dict[Hashable, Index]: """ Dict {group name -> group labels}. This property provides a dictionary representation of the groupings formed during a groupby operation, where each key represents a unique group value from the specified column(s), and each value is a list of index labels that belong to that group. See Also -------- core.groupby.DataFrameGroupBy.get_group : Retrieve group from a ``DataFrameGroupBy`` object with provided name. core.groupby.SeriesGroupBy.get_group : Retrieve group from a ``SeriesGroupBy`` object with provided name. core.resample.Resampler.get_group : Retrieve group from a ``Resampler`` object with provided name. Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "b"] >>> ser = pd.Series([1, 2, 3], index=lst) >>> ser a 1 a 2 b 3 dtype: int64 >>> ser.groupby(level=0).groups {'a': ['a', 'a'], 'b': ['b']} For DataFrameGroupBy: >>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]] >>> df = pd.DataFrame(data, columns=["a", "b", "c"]) >>> df a b c 0 1 2 3 1 1 5 6 2 7 8 9 >>> df.groupby(by="a").groups {1: [0, 1], 7: [2]} For Resampler: >>> ser = pd.Series( ... [1, 2, 3, 4], ... index=pd.DatetimeIndex( ... ["2023-01-01", "2023-01-15", "2023-02-01", "2023-02-15"] ... ), ... ) >>> ser 2023-01-01 1 2023-01-15 2 2023-02-01 3 2023-02-15 4 dtype: int64 >>> ser.resample("MS").groups {Timestamp('2023-01-01 00:00:00'): np.int64(2), Timestamp('2023-02-01 00:00:00'): np.int64(4)} """ if isinstance(self.keys, list) and len(self.keys) == 1: warnings.warn( "`groups` by one element list returns scalar is deprecated " "and will be removed. In a future version `groups` by one element " "list will return tuple. Use ``df.groupby(by='a').groups`` " "instead of ``df.groupby(by=['a']).groups`` to avoid this warning", Pandas4Warning, stacklevel=find_stack_level(), ) return self._grouper.groups
Dict {group name -> group labels}. This property provides a dictionary representation of the groupings formed during a groupby operation, where each key represents a unique group value from the specified column(s), and each value is a list of index labels that belong to that group. See Also -------- core.groupby.DataFrameGroupBy.get_group : Retrieve group from a ``DataFrameGroupBy`` object with provided name. core.groupby.SeriesGroupBy.get_group : Retrieve group from a ``SeriesGroupBy`` object with provided name. core.resample.Resampler.get_group : Retrieve group from a ``Resampler`` object with provided name. Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "b"] >>> ser = pd.Series([1, 2, 3], index=lst) >>> ser a 1 a 2 b 3 dtype: int64 >>> ser.groupby(level=0).groups {'a': ['a', 'a'], 'b': ['b']} For DataFrameGroupBy: >>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]] >>> df = pd.DataFrame(data, columns=["a", "b", "c"]) >>> df a b c 0 1 2 3 1 1 5 6 2 7 8 9 >>> df.groupby(by="a").groups {1: [0, 1], 7: [2]} For Resampler: >>> ser = pd.Series( ... [1, 2, 3, 4], ... index=pd.DatetimeIndex( ... ["2023-01-01", "2023-01-15", "2023-02-01", "2023-02-15"] ... ), ... ) >>> ser 2023-01-01 1 2023-01-15 2 2023-02-01 3 2023-02-15 4 dtype: int64 >>> ser.resample("MS").groups {Timestamp('2023-01-01 00:00:00'): np.int64(2), Timestamp('2023-02-01 00:00:00'): np.int64(4)}
python
pandas/core/groupby/groupby.py
488
[ "self" ]
dict[Hashable, Index]
true
3
8.4
pandas-dev/pandas
47,362
unknown
false
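The `groups` property documented above maps each group label to the index labels belonging to that group; the SeriesGroupBy example from the docstring, runnable directly:

```python
import pandas as pd

ser = pd.Series([1, 2, 3], index=["a", "a", "b"])
groups = ser.groupby(level=0).groups  # mapping: group label -> index labels
```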
drop_fields
def drop_fields(base, drop_names, usemask=True, asrecarray=False): """ Return a new array with fields in `drop_names` dropped. Nested fields are supported. Parameters ---------- base : array Input array drop_names : string or sequence String or sequence of strings corresponding to the names of the fields to drop. usemask : {False, True}, optional Whether to return a masked array or not. asrecarray : string or sequence, optional Whether to return a recarray or a mrecarray (`asrecarray=True`) or a plain ndarray or masked array with flexible dtype. The default is False. Examples -------- >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], ... dtype=[('a', np.int64), ('b', [('ba', np.double), ('bb', np.int64)])]) >>> rfn.drop_fields(a, 'a') array([((2., 3),), ((5., 6),)], dtype=[('b', [('ba', '<f8'), ('bb', '<i8')])]) >>> rfn.drop_fields(a, 'ba') array([(1, (3,)), (4, (6,))], dtype=[('a', '<i8'), ('b', [('bb', '<i8')])]) >>> rfn.drop_fields(a, ['ba', 'bb']) array([(1,), (4,)], dtype=[('a', '<i8')]) """ if _is_string_like(drop_names): drop_names = [drop_names] else: drop_names = set(drop_names) def _drop_descr(ndtype, drop_names): names = ndtype.names newdtype = [] for name in names: current = ndtype[name] if name in drop_names: continue if current.names is not None: descr = _drop_descr(current, drop_names) if descr: newdtype.append((name, descr)) else: newdtype.append((name, current)) return newdtype newdtype = _drop_descr(base.dtype, drop_names) output = np.empty(base.shape, dtype=newdtype) output = recursive_fill_fields(base, output) return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
Return a new array with fields in `drop_names` dropped. Nested fields are supported. Parameters ---------- base : array Input array drop_names : string or sequence String or sequence of strings corresponding to the names of the fields to drop. usemask : {False, True}, optional Whether to return a masked array or not. asrecarray : string or sequence, optional Whether to return a recarray or a mrecarray (`asrecarray=True`) or a plain ndarray or masked array with flexible dtype. The default is False. Examples -------- >>> import numpy as np >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], ... dtype=[('a', np.int64), ('b', [('ba', np.double), ('bb', np.int64)])]) >>> rfn.drop_fields(a, 'a') array([((2., 3),), ((5., 6),)], dtype=[('b', [('ba', '<f8'), ('bb', '<i8')])]) >>> rfn.drop_fields(a, 'ba') array([(1, (3,)), (4, (6,))], dtype=[('a', '<i8'), ('b', [('bb', '<i8')])]) >>> rfn.drop_fields(a, ['ba', 'bb']) array([(1,), (4,)], dtype=[('a', '<i8')])
python
numpy/lib/recfunctions.py
505
[ "base", "drop_names", "usemask", "asrecarray" ]
false
8
7.6
numpy/numpy
31,054
numpy
false
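`drop_fields` handles both top-level and nested fields, as the docstring examples show; a minimal check using only NumPy:

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Dropping a top-level field removes it from the dtype; dropping a
# nested field ('ba') rewrites the nested dtype instead.
a = np.array(
    [(1, (2.0, 3)), (4, (5.0, 6))],
    dtype=[("a", np.int64), ("b", [("ba", np.double), ("bb", np.int64)])],
)
no_a = rfn.drop_fields(a, "a")
no_ba = rfn.drop_fields(a, "ba")
```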
get_template_configs
def get_template_configs( self, kernel_inputs: KernelInputs, templates: list[Union[KernelTemplate, ExternKernelChoice]], op_name: str, kwarg_overrides: Optional[dict[str, dict[str, Any]]] = None, ) -> list[ChoiceCaller]: """ Get list of ChoiceCallers for MM templates using template-specific heuristics. Args: kernel_inputs: MMKernelInputs containing input tensor nodes and matrix indices layout: Output layout templates: List of template objects (KernelTemplate or ExternKernelChoice) op_name: Operation name (e.g., "bmm", "baddbmm", "addmm", "mm_plus_mm") kwarg_overrides: Optional dict of kwargs to override for each template heuristic, indexed by template.uid. These only override the per config kwargs, not the extra kwargs Returns: List of ChoiceCaller objects from the templates """ if kwarg_overrides is None: kwarg_overrides = {} input_tensors = kernel_inputs.nodes() if len(input_tensors) < 2: raise ValueError(f"Need at least 2 input tensors, got {len(input_tensors)}") layout = kernel_inputs.output_layout() # First pass: Create dict of template.uid to generator of KernelTemplateChoice objects template_choices = {} for template in templates: template_choices[template.uid] = self.get_ktc( kernel_inputs, template, op_name, kwarg_overrides.get(template.uid, {}), ) # Second pass: Adjust the template choices adjusted_choices = self._finalize_template_configs( template_choices, kernel_inputs, templates, op_name, kwarg_overrides, ) # Layout optimization: if all choices are ExternKernelChoice and layout is FixedLayout, convert to FlexibleLayout if self._need_to_fix_layout(adjusted_choices, op_name): layout = kernel_inputs.output_layout(flexible=False) for ktc in adjusted_choices: ktc.layout = layout # for good measure, delete the cached ChoiceCaller from the ktc if it existed. # ExternKernelChoice are cheap to generate if hasattr(ktc, "_choice"): del ktc._choice # Third pass: Convert to ChoiceCaller objects return [ktc.choice for ktc in adjusted_choices if ktc.choice is not None]
Get list of ChoiceCallers for MM templates using template-specific heuristics. Args: kernel_inputs: MMKernelInputs containing input tensor nodes and matrix indices layout: Output layout templates: List of template objects (KernelTemplate or ExternKernelChoice) op_name: Operation name (e.g., "bmm", "baddbmm", "addmm", "mm_plus_mm") kwarg_overrides: Optional dict of kwargs to override for each template heuristic, indexed by template.uid. These only override the per config kwargs, not the extra kwargs Returns: List of ChoiceCaller objects from the templates
python
torch/_inductor/choices.py
269
[ "self", "kernel_inputs", "templates", "op_name", "kwarg_overrides" ]
list[ChoiceCaller]
true
7
7.52
pytorch/pytorch
96,034
google
false
check_bool_indexer
def check_bool_indexer(index: Index, key) -> np.ndarray: """ Check if key is a valid boolean indexer for an object with such index and perform reindexing or conversion if needed. This function assumes that is_bool_indexer(key) == True. Parameters ---------- index : Index Index of the object on which the indexing is done. key : list-like Boolean indexer to check. Returns ------- np.array Resulting key. Raises ------ IndexError If the key does not have the same length as index. IndexingError If the index of the key is unalignable to index. """ result = key if isinstance(key, ABCSeries) and not key.index.equals(index): indexer = result.index.get_indexer_for(index) if -1 in indexer: raise IndexingError( "Unalignable boolean Series provided as " "indexer (index of the boolean Series and of " "the indexed object do not match)." ) result = result.take(indexer) # fall through for boolean if not isinstance(result.dtype, ExtensionDtype): return result.astype(bool)._values if is_object_dtype(key): # key might be object-dtype bool, check_array_indexer needs bool array result = np.asarray(result, dtype=bool) elif not is_array_like(result): # GH 33924 # key may contain nan elements, check_array_indexer needs bool array result = pd_array(result, dtype=bool) return check_array_indexer(index, result)
Check if key is a valid boolean indexer for an object with such index and perform reindexing or conversion if needed. This function assumes that is_bool_indexer(key) == True. Parameters ---------- index : Index Index of the object on which the indexing is done. key : list-like Boolean indexer to check. Returns ------- np.array Resulting key. Raises ------ IndexError If the key does not have the same length as index. IndexingError If the index of the key is unalignable to index.
python
pandas/core/indexing.py
2,647
[ "index", "key" ]
np.ndarray
true
7
6.88
pandas-dev/pandas
47,362
numpy
false
mode
def mode( values: ArrayLike, dropna: bool = True, mask: npt.NDArray[np.bool_] | None = None ) -> tuple[np.ndarray, npt.NDArray[np.bool_]] | ExtensionArray: """ Returns the mode(s) of an array. Parameters ---------- values : array-like Array over which to check for duplicate values. dropna : bool, default True Don't consider counts of NaN/NaT. Returns ------- Union[Tuple[np.ndarray, npt.NDArray[np.bool_]], ExtensionArray] """ values = _ensure_arraylike(values, func_name="mode") original = values if needs_i8_conversion(values.dtype): # Got here with ndarray; dispatch to DatetimeArray/TimedeltaArray. values = ensure_wrapped_if_datetimelike(values) values = cast("ExtensionArray", values) return values._mode(dropna=dropna) values = _ensure_data(values) npresult, res_mask = htable.mode(values, dropna=dropna, mask=mask) if res_mask is None: res_mask = np.zeros(npresult.shape, dtype=np.bool_) else: return npresult, res_mask try: npresult = safe_sort(npresult) except TypeError as err: warnings.warn( f"Unable to sort modes: {err}", stacklevel=find_stack_level(), ) result = _reconstruct_data(npresult, original.dtype, original) return result, res_mask
Returns the mode(s) of an array. Parameters ---------- values : array-like Array over which to check for duplicate values. dropna : bool, default True Don't consider counts of NaN/NaT. Returns ------- Union[Tuple[np.ndarray, npt.NDArray[np.bool_]], ExtensionArray]
python
pandas/core/algorithms.py
1,015
[ "values", "dropna", "mask" ]
tuple[np.ndarray, npt.NDArray[np.bool_]] | ExtensionArray
true
4
6.4
pandas-dev/pandas
47,362
numpy
false
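The `mode` entry above can be sketched in pure Python. This is an illustrative reimplementation of the documented semantics (most frequent values, NaNs optionally dropped, ties returned in sorted order), not pandas' hashtable-based implementation:

```python
from collections import Counter
import math

def mode(values, dropna=True):
    """Return the sorted list of most frequent values, optionally skipping NaNs."""
    if dropna:
        values = [v for v in values if not (isinstance(v, float) and math.isnan(v))]
    counts = Counter(values)
    if not counts:
        return []
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

print(mode([1, 2, 2, 3, float("nan")]))  # [2]
```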
countNonNull
private int countNonNull(Object... instances) { int result = 0; for (Object instance : instances) { if (instance != null) { result += 1; } } return result; }
Count the number of non-{@code null} instances in the given array. @param instances the instances to check @return the number of non-null instances
java
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/AnnotationJCacheOperationSource.java
230
[ "instances" ]
true
2
8.08
spring-projects/spring-framework
59,386
javadoc
false
bean
<T> T bean(ParameterizedTypeReference<T> beanType) throws BeansException;
Return the bean instance that uniquely matches the given generics-containing type, if any. @param beanType the generics-containing type the bean must match; can be an interface or superclass @return an instance of the single bean matching the bean type @see BeanFactory#getBean(String)
java
spring-beans/src/main/java/org/springframework/beans/factory/BeanRegistry.java
238
[ "beanType" ]
T
true
1
6.32
spring-projects/spring-framework
59,386
javadoc
false
times
function times(n: number) { return () => --n > -1 }
Create a counter predicate for {@link repeat}: returns `true` for the first `n` invocations and `false` afterwards. @param n number of repetitions to allow @returns predicate suitable as the `again` argument of {@link repeat} @example ```ts // concats `[2]` 10 times on `[1]` repeat(concat, times(10))([1], [2]) ```
typescript
helpers/blaze/repeat.ts
35
[ "n" ]
false
1
7.76
prisma/prisma
44,834
jsdoc
false
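The `times` helper above only makes sense together with `repeat`, which its JSDoc example uses. Below is a pure-Python sketch of the pair — the names mirror the TypeScript helpers, and `repeat` is reconstructed from the example rather than taken from this record:

```python
def times(n):
    """Predicate that is true for the first n calls and false afterwards."""
    def again():
        nonlocal n
        n -= 1
        return n > -1
    return again

def repeat(f, again):
    """Return a function that folds f over (acc, item) while again() holds."""
    def run(acc, item):
        while again():
            acc = f(acc, item)
        return acc
    return run

# concatenates [2] onto [1] ten times
print(repeat(lambda a, b: a + b, times(10))([1], [2]))
```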
infer_dtype_from
def infer_dtype_from(val) -> tuple[DtypeObj, Any]: """ Interpret the dtype from a scalar or array. Parameters ---------- val : object """ if not is_list_like(val): return infer_dtype_from_scalar(val) return infer_dtype_from_array(val)
Interpret the dtype from a scalar or array. Parameters ---------- val : object
python
pandas/core/dtypes/cast.py
666
[ "val" ]
tuple[DtypeObj, Any]
true
2
6.88
pandas-dev/pandas
47,362
numpy
false
indexesOf
public static BitSet indexesOf(final int[] array, final int valueToFind) { return indexesOf(array, valueToFind, 0); }
Finds the indices of the given value in the array. <p>This method returns an empty BitSet for a {@code null} input array.</p> @param array the array to search for the object, may be {@code null}. @param valueToFind the value to find. @return a BitSet of all the indices of the value within the array, an empty BitSet if not found or {@code null} array input. @since 3.10
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
2,185
[ "array", "valueToFind" ]
BitSet
true
1
6.8
apache/commons-lang
2,896
javadoc
false
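The `indexesOf` record above returns a BitSet of matching indices. A pure-Python sketch of the same contract, using a set in place of a BitSet and returning an empty set for `None` input:

```python
def indexes_of(array, value_to_find, start_index=0):
    """Return the set of indices at which value_to_find occurs, from start_index on."""
    if array is None:
        return set()
    return {i for i in range(max(start_index, 0), len(array))
            if array[i] == value_to_find}

print(indexes_of([3, 1, 3, 2, 3], 3))  # {0, 2, 4}
```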
newHint
public static ItemHint newHint(String name, ValueHint... values) { return new ItemHint(name, Arrays.asList(values), Collections.emptyList()); }
Factory method to create a new {@link ItemHint} for the given property name and values. @param name the name of the property @param values the value hints to associate with the property @return a new {@link ItemHint} instance
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/metadata/ItemHint.java
90
[ "name", "values" ]
ItemHint
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
score_samples
def score_samples(self, X): """Compute the log-likelihood of each sample under the model. Parameters ---------- X : array-like of shape (n_samples, n_features) An array of points to query. Last dimension should match dimension of training data (n_features). Returns ------- density : ndarray of shape (n_samples,) Log-likelihood of each sample in `X`. These are normalized to be probability densities, so values will be low for high-dimensional data. """ check_is_fitted(self) # The returned density is normalized to the number of points. # For it to be a probability, we must scale it. For this reason # we'll also scale atol. X = validate_data(self, X, order="C", dtype=np.float64, reset=False) if self.tree_.sample_weight is None: N = self.tree_.data.shape[0] else: N = self.tree_.sum_weight atol_N = self.atol * N log_density = self.tree_.kernel_density( X, h=self.bandwidth_, kernel=self.kernel, atol=atol_N, rtol=self.rtol, breadth_first=self.breadth_first, return_log=True, ) log_density -= np.log(N) return log_density
Compute the log-likelihood of each sample under the model. Parameters ---------- X : array-like of shape (n_samples, n_features) An array of points to query. Last dimension should match dimension of training data (n_features). Returns ------- density : ndarray of shape (n_samples,) Log-likelihood of each sample in `X`. These are normalized to be probability densities, so values will be low for high-dimensional data.
python
sklearn/neighbors/_kde.py
252
[ "self", "X" ]
false
3
6.08
scikit-learn/scikit-learn
64,340
numpy
false
check_if_buildx_plugin_installed
def check_if_buildx_plugin_installed() -> bool: """ Checks if buildx plugin is locally available. :return True if the buildx plugin is installed. """ check_buildx = ["docker", "buildx", "version"] docker_buildx_version_result = run_command( check_buildx, no_output_dump_on_exception=True, capture_output=True, text=True, check=False, ) if docker_buildx_version_result.returncode == 0: return True return False
Checks if buildx plugin is locally available. :return True if the buildx plugin is installed.
python
dev/breeze/src/airflow_breeze/utils/run_utils.py
393
[]
bool
true
2
7.04
apache/airflow
43,597
unknown
false
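`check_if_buildx_plugin_installed` above is an instance of a general pattern: probe a CLI by running it and checking the exit code. A self-contained sketch of that pattern (probing the current Python interpreter rather than docker, which may be absent on this machine):

```python
import subprocess
import sys

def command_available(cmd):
    """Return True if running cmd exits with status 0 (command exists and works)."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    except OSError:  # executable not found
        return False
    return result.returncode == 0

print(command_available([sys.executable, "--version"]))
```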
of
static SslStoreBundle of(@Nullable KeyStore keyStore, @Nullable String keyStorePassword, @Nullable KeyStore trustStore) { return new SslStoreBundle() { @Override public @Nullable KeyStore getKeyStore() { return keyStore; } @Override public @Nullable KeyStore getTrustStore() { return trustStore; } @Override public @Nullable String getKeyStorePassword() { return keyStorePassword; } @Override public String toString() { ToStringCreator creator = new ToStringCreator(this); creator.append("keyStore.type", (keyStore != null) ? keyStore.getType() : "none"); creator.append("keyStorePassword", (keyStorePassword != null) ? "******" : null); creator.append("trustStore.type", (trustStore != null) ? trustStore.getType() : "none"); return creator.toString(); } }; }
Factory method to create a new {@link SslStoreBundle} instance. @param keyStore the key store or {@code null} @param keyStorePassword the key store password or {@code null} @param trustStore the trust store or {@code null} @return a new {@link SslStoreBundle} instance
java
core/spring-boot/src/main/java/org/springframework/boot/ssl/SslStoreBundle.java
64
[ "keyStore", "keyStorePassword", "trustStore" ]
SslStoreBundle
true
4
7.92
spring-projects/spring-boot
79,428
javadoc
false
create
static CuVSIvfPqParams create(int numVectors, int dims, CagraIndexParams.CuvsDistanceType distanceType, int efConstruction) { long nRows = numVectors; long nFeatures = dims; if (nRows <= 0 || nFeatures <= 0) { throw new IllegalArgumentException("Dataset dimensions must be positive: rows=" + nRows + ", features=" + nFeatures); } return createFromDimensions(nRows, nFeatures, distanceType, efConstruction); }
Creates {@link CuVSIvfPqParams} with automatically calculated parameters based on the dataset dimensions, distance metric, and efConstruction parameter. <p>This method replicates the parameter calculation logic from the C++ function: {@code cuvs::neighbors::graph_build_params::ivf_pq_params(dataset_extents, metric)} @param numVectors the number of vectors in the dataset @param dims the dimensionality of the vectors @param distanceType the distance metric to use (e.g., L2Expanded, Cosine) @param efConstruction the efConstruction parameter in an HNSW graph @return a {@link CuVSIvfPqParams} instance with calculated parameters @throws IllegalArgumentException if dimensions are invalid
java
libs/gpu-codec/src/main/java/org/elasticsearch/gpu/codec/CuVSIvfPqParamsFactory.java
45
[ "numVectors", "dims", "distanceType", "efConstruction" ]
CuVSIvfPqParams
true
3
7.28
elastic/elasticsearch
75,680
javadoc
false
of
static SslBundleKey of(String password) { return of(password, null); }
Factory method to create a new {@link SslBundleKey} instance. @param password the password used to access the key @return a new {@link SslBundleKey} instance
java
core/spring-boot/src/main/java/org/springframework/boot/ssl/SslBundleKey.java
77
[ "password" ]
SslBundleKey
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
initializeAsyncLoaderHooksOnLoaderHookWorker
async function initializeAsyncLoaderHooksOnLoaderHookWorker() { const customLoaderURLs = getOptionValue('--experimental-loader'); // The worker thread spawned for handling asynchronous loader hooks should not // further spawn other hook threads or there will be an infinite recursion. const shouldSpawnLoaderHookWorker = false; // The worker thread for async loader hooks will preload user modules itself in // initializeAsyncLoaderHooksOnLoaderHookWorker(). const shouldPreloadModules = false; initializeModuleLoaders({ shouldSpawnLoaderHookWorker, shouldPreloadModules }); assert(!isCascadedLoaderInitialized(), 'ModuleLoader should be initialized in initializeAsyncLoaderHooksOnLoaderHookWorker()'); const asyncLoaderHooks = new AsyncLoaderHooksOnLoaderHookWorker(); getOrInitializeCascadedLoader(asyncLoaderHooks); // We need the async loader hooks to be set _before_ we start invoking // `--require`, otherwise loops can happen because a `--require` script // might call `register(...)` before we've installed ourselves. These // global values are magically set in `initializeModuleLoaders` just for us and // we call them in the correct order. // N.B. This block appears here specifically in order to ensure that // `--require` calls occur before `--loader` ones do. loadPreloadModules(); initializeFrozenIntrinsics(); const parentURL = getCWDURL().href; for (let i = 0; i < customLoaderURLs.length; i++) { await asyncLoaderHooks.register(customLoaderURLs[i], parentURL); } return asyncLoaderHooks; }
Register asynchronous module loader customization hooks. This should only be run in the loader hooks worker. In a non-loader-hooks thread, if any asynchronous loader hook is registered, the ModuleLoader#asyncLoaderHooks are initialized to be AsyncLoaderHooksProxiedToLoaderHookWorker, which posts the messages to the async loader hook worker thread. When no asynchronous loader hook is registered, the loader hook worker is not spawned and module loading is entirely done in-thread. @returns {Promise<AsyncLoaderHooksOnLoaderHookWorker>}
javascript
lib/internal/modules/esm/worker.js
48
[]
false
2
6.8
nodejs/node
114,839
jsdoc
true
process
private void process(final SeekUnvalidatedEvent event) { try { event.offsetEpoch().ifPresent(epoch -> metadata.updateLastSeenEpochIfNewer(event.partition(), epoch)); SubscriptionState.FetchPosition newPosition = new SubscriptionState.FetchPosition( event.offset(), event.offsetEpoch(), metadata.currentLeader(event.partition()) ); subscriptions.seekUnvalidated(event.partition(), newPosition); event.future().complete(null); } catch (Exception e) { event.future().completeExceptionally(e); } }
Process event to seek to a given offset without validation. Updates the last seen epoch if a newer one is provided, moves the subscription's fetch position to the requested offset, and completes the event's future with the outcome. @param event Event containing the partition, offset and optional offset epoch to seek to.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/events/ApplicationEventProcessor.java
606
[ "event" ]
void
true
2
6.08
apache/kafka
31,560
javadoc
false
get_auth_manager
def get_auth_manager() -> BaseAuthManager: """Return the auth manager, provided it's been initialized before.""" if auth_manager is None: raise RuntimeError( "Auth Manager has not been initialized yet. " "The `init_auth_manager` method needs to be called first." ) return auth_manager
Return the auth manager, provided it's been initialized before.
python
airflow-core/src/airflow/api_fastapi/app.py
162
[]
BaseAuthManager
true
2
6.4
apache/airflow
43,597
unknown
false
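`get_auth_manager` above guards a module-level singleton that must be initialized before use. A minimal class-based sketch of the same guard — the names here are illustrative, not Airflow's API:

```python
class ManagerRegistry:
    """Minimal sketch of the init-before-get guard used by get_auth_manager."""

    def __init__(self):
        self._manager = None

    def init_manager(self, manager):
        self._manager = manager

    def get_manager(self):
        if self._manager is None:
            raise RuntimeError(
                "Manager has not been initialized yet; call init_manager first."
            )
        return self._manager

registry = ManagerRegistry()
registry.init_manager("auth-manager")
print(registry.get_manager())
```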
_update_memory_tracking_after_swap_reorder
def _update_memory_tracking_after_swap_reorder( candidate: BaseSchedulerNode, gns: list[BaseSchedulerNode], group_tail: BaseSchedulerNode, candidate_delta_mem: int, candidate_allocfree: SNodeMemory, group_n_to_bufs_after_swap_dealloc_by_candidate: dict, post_alloc_update: dict[BaseSchedulerNode, int], curr_memory: dict, buf_to_snode_last_use: dict, snodes_allocfree: dict, ) -> None: """ Update memory tracking structures after swap (reorder version). Updates curr_memory, buf_to_snode_last_use, and snodes_allocfree dictionaries to reflect the new memory state after swapping candidate with group. Args: candidate: Node that was moved gns: Group nodes group_tail: Last node of group candidate_delta_mem: Net memory change from candidate (alloc - free) candidate_allocfree: Candidate's allocation/free info group_n_to_bufs_after_swap_dealloc_by_candidate: Buffers whose deallocation moves to candidate post_alloc_update: Cached post-allocation memory values curr_memory: Current memory state dict (mutated) buf_to_snode_last_use: Buffer to last-use node mapping (mutated) snodes_allocfree: Node allocation/free info dict (mutated) """ if not group_n_to_bufs_after_swap_dealloc_by_candidate: for gn in gns: cm = curr_memory[gn] curr_memory[gn] = ( cm[0] - candidate_delta_mem, cm[1] - candidate_delta_mem, ) _candidate_post_alloc_mem = ( curr_memory[group_tail][1] + candidate_allocfree.size_alloc ) _candidate_post_free_mem = ( _candidate_post_alloc_mem - candidate_allocfree.size_free ) curr_memory[candidate] = ( _candidate_post_alloc_mem, _candidate_post_free_mem, ) return # Candidate becomes last use of some bufs for bufs in group_n_to_bufs_after_swap_dealloc_by_candidate.values(): for buf in bufs: buf_to_snode_last_use[buf] = candidate size_free_to_move_to_candidate_sum: int = 0 for n in gns: _gn_post_alloc_mem: int = post_alloc_update[n] size_free_to_move_to_candidate: int = sum( buf.mpi_buffer.size_free for buf in group_n_to_bufs_after_swap_dealloc_by_candidate[n] ) size_free_to_move_to_candidate_sum += size_free_to_move_to_candidate # group node does not deallocate this after swap snodes_allocfree[n].size_free -= size_free_to_move_to_candidate gn_post_free_mem: int = _gn_post_alloc_mem - snodes_allocfree[n].size_free curr_memory[n] = (_gn_post_alloc_mem, gn_post_free_mem) _candidate_post_alloc_mem = post_alloc_update[candidate] snodes_allocfree[candidate].size_free += size_free_to_move_to_candidate_sum candidate_post_free_mem = ( _candidate_post_alloc_mem - snodes_allocfree[candidate].size_free ) curr_memory[candidate] = ( _candidate_post_alloc_mem, candidate_post_free_mem, )
Update memory tracking structures after swap (reorder version). Updates curr_memory, buf_to_snode_last_use, and snodes_allocfree dictionaries to reflect the new memory state after swapping candidate with group. Args: candidate: Node that was moved gns: Group nodes group_tail: Last node of group candidate_delta_mem: Net memory change from candidate (alloc - free) candidate_allocfree: Candidate's allocation/free info group_n_to_bufs_after_swap_dealloc_by_candidate: Buffers whose deallocation moves to candidate post_alloc_update: Cached post-allocation memory values curr_memory: Current memory state dict (mutated) buf_to_snode_last_use: Buffer to last-use node mapping (mutated) snodes_allocfree: Node allocation/free info dict (mutated)
python
torch/_inductor/comms.py
610
[ "candidate", "gns", "group_tail", "candidate_delta_mem", "candidate_allocfree", "group_n_to_bufs_after_swap_dealloc_by_candidate", "post_alloc_update", "curr_memory", "buf_to_snode_last_use", "snodes_allocfree" ]
None
true
6
6.16
pytorch/pytorch
96,034
google
false
parent
function parent(object, path) { return path.length < 2 ? object : baseGet(object, baseSlice(path, 0, -1)); }
Gets the parent value at `path` of `object`. @private @param {Object} object The object to query. @param {Array} path The path to get the parent value of. @returns {*} Returns the parent value.
javascript
lodash.js
6,678
[ "object", "path" ]
false
2
6.16
lodash/lodash
61,490
jsdoc
false
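The lodash `parent` helper above drops the last path segment and delegates to `baseGet`. A pure-Python sketch of both helpers over nested dicts — `base_get` is reconstructed from the documented behaviour, not lodash's implementation:

```python
def base_get(obj, path):
    """Walk obj along path, returning None if a key is missing."""
    for key in path:
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

def parent(obj, path):
    """Return the value one level above path, or obj itself for short paths."""
    return obj if len(path) < 2 else base_get(obj, path[:-1])

data = {"a": {"b": {"c": 1}}}
print(parent(data, ["a", "b", "c"]))  # {'c': 1}
```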
equals
@Override public boolean equals(Object o) { return o instanceof JSONArray && ((JSONArray) o).values.equals(this.values); }
Compares this array with another object, returning true if the other object is a {@link JSONArray} containing the same values. @param o the object to compare with this array @return true if the two arrays are equal
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONArray.java
658
[ "o" ]
true
2
7.68
spring-projects/spring-boot
79,428
javadoc
false
equals
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } if (!(obj instanceof CharRange)) { return false; } final CharRange other = (CharRange) obj; return start == other.start && end == other.end && negated == other.negated; }
Compares two CharRange objects, returning true if they represent exactly the same range of characters defined in the same way. @param obj the object to compare to. @return true if equal.
java
src/main/java/org/apache/commons/lang3/CharRange.java
281
[ "obj" ]
true
5
8.08
apache/commons-lang
2,896
javadoc
false
getExitingScheduledExecutorService
@J2ktIncompatible @GwtIncompatible // java.time.Duration @IgnoreJRERequirement // Users will use this only if they're already using Duration. public static ScheduledExecutorService getExitingScheduledExecutorService( ScheduledThreadPoolExecutor executor, Duration terminationTimeout) { return getExitingScheduledExecutorService( executor, toNanosSaturated(terminationTimeout), TimeUnit.NANOSECONDS); }
Converts the given ScheduledThreadPoolExecutor into a ScheduledExecutorService that exits when the application is complete. It does so by using daemon threads and adding a shutdown hook to wait for their completion. <p>This is mainly for fixed thread pools. See {@link Executors#newScheduledThreadPool(int)}. @param executor the executor to modify to make sure it exits when the application is finished @param terminationTimeout how long to wait for the executor to finish before terminating the JVM @return an unmodifiable version of the input which will not hang the JVM @since 33.4.0 (but since 28.0 in the JRE flavor)
java
android/guava/src/com/google/common/util/concurrent/MoreExecutors.java
148
[ "executor", "terminationTimeout" ]
ScheduledExecutorService
true
1
6.72
google/guava
51,352
javadoc
false
supportsEvent
private boolean supportsEvent( ConfigurableBeanFactory beanFactory, String listenerBeanName, ResolvableType eventType) { Class<?> listenerType = beanFactory.getType(listenerBeanName); if (listenerType == null || GenericApplicationListener.class.isAssignableFrom(listenerType) || SmartApplicationListener.class.isAssignableFrom(listenerType)) { return true; } if (!supportsEvent(listenerType, eventType)) { return false; } try { BeanDefinition bd = beanFactory.getMergedBeanDefinition(listenerBeanName); ResolvableType genericEventType = bd.getResolvableType().as(ApplicationListener.class).getGeneric(); return (genericEventType == ResolvableType.NONE || genericEventType.isAssignableFrom(eventType)); } catch (NoSuchBeanDefinitionException ex) { // Ignore - no need to check resolvable type for manually registered singleton return true; } }
Filter a bean-defined listener early through checking its generically declared event type before trying to instantiate it. <p>If this method returns {@code true} for a given listener as a first pass, the listener instance will get retrieved and fully evaluated through a {@link #supportsEvent(ApplicationListener, ResolvableType, Class)} call afterwards. @param beanFactory the BeanFactory that contains the listener beans @param listenerBeanName the name of the bean in the BeanFactory @param eventType the event type to check @return whether the given listener should be included in the candidates for the given event type @see #supportsEvent(Class, ResolvableType) @see #supportsEvent(ApplicationListener, ResolvableType, Class)
java
spring-context/src/main/java/org/springframework/context/event/AbstractApplicationEventMulticaster.java
343
[ "beanFactory", "listenerBeanName", "eventType" ]
true
7
7.44
spring-projects/spring-framework
59,386
javadoc
false
add
@CanIgnoreReturnValue int add(@ParametricNullness E element, int occurrences);
Adds a number of occurrences of an element to this multiset. Note that if {@code occurrences == 1}, this method has the identical effect to {@link #add(Object)}. This method is functionally equivalent (except in the case of overflow) to the call {@code addAll(Collections.nCopies(element, occurrences))}, which would presumably perform much more poorly. @param element the element to add occurrences of; may be null only if explicitly allowed by the implementation @param occurrences the number of occurrences of the element to add. May be zero, in which case no change will be made. @return the count of the element before the operation; possibly zero @throws IllegalArgumentException if {@code occurrences} is negative, or if this operation would result in more than {@link Integer#MAX_VALUE} occurrences of the element @throws NullPointerException if {@code element} is null and this implementation does not permit null elements. Note that if {@code occurrences} is zero, the implementation may opt to return normally.
java
android/guava/src/com/google/common/collect/Multiset.java
137
[ "element", "occurrences" ]
true
1
6.32
google/guava
51,352
javadoc
false
maybeClearPreviousInflightPoll
private void maybeClearPreviousInflightPoll() { if (inflightPoll.isComplete()) { Optional<KafkaException> errorOpt = inflightPoll.error(); if (errorOpt.isPresent()) { // If the previous inflight event is complete, check if it resulted in an error. If there was // an error, throw it without delay. KafkaException error = errorOpt.get(); log.trace("Previous inflight event {} completed with an error ({}), clearing", inflightPoll, error); inflightPoll = null; throw error; } else { // Successful case... if (fetchBuffer.isEmpty()) { // If it completed without error, but without populating the fetch buffer, clear the event // so that a new event will be enqueued below. log.trace("Previous inflight event {} completed without filling the buffer, clearing", inflightPoll); inflightPoll = null; } else { // However, if the event completed, and it populated the buffer, *don't* create a new event. // This is to prevent an edge case of starvation when poll() is called with a timeout of 0. // If a new event was created on *every* poll, each time the event would have to complete the // validate positions stage before the data in the fetch buffer is used. Because there is // no blocking, and effectively a 0 wait, the data in the fetch buffer is continuously ignored // leading to no data ever being returned from poll(). log.trace("Previous inflight event {} completed and filled the buffer, not clearing", inflightPoll); } } } else if (inflightPoll.isExpired(time) && inflightPoll.isValidatePositionsComplete()) { // The inflight event validated positions, but it has expired. log.trace("Previous inflight event {} expired without completing, clearing", inflightPoll); inflightPoll = null; } }
Manages the lifetime of the previous inflight {@link AsyncPollEvent}. If the event completed with an error, the error is thrown immediately. If it completed without populating the fetch buffer, it is cleared so that a new event can be enqueued; if it did populate the buffer, it is kept to avoid starving {@code poll()} calls made with a timeout of 0. An event that validated positions but has since expired is also cleared.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java
930
[]
void
true
6
6.88
apache/kafka
31,560
javadoc
false
connectionState
public ConnectionState connectionState(String id) { return nodeState(id).state; }
Get the state of a given connection. @param id the id of the connection @return the state of our connection
java
clients/src/main/java/org/apache/kafka/clients/ClusterConnectionStates.java
399
[ "id" ]
ConnectionState
true
1
6.96
apache/kafka
31,560
javadoc
false
polydiv
def polydiv(u, v): """ Returns the quotient and remainder of polynomial division. .. note:: This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in `numpy.polynomial` is preferred. A summary of the differences can be found in the :doc:`transition guide </reference/routines.polynomials>`. The input arrays are the coefficients (including any coefficients equal to zero) of the "numerator" (dividend) and "denominator" (divisor) polynomials, respectively. Parameters ---------- u : array_like or poly1d Dividend polynomial's coefficients. v : array_like or poly1d Divisor polynomial's coefficients. Returns ------- q : ndarray Coefficients, including those equal to zero, of the quotient. r : ndarray Coefficients, including those equal to zero, of the remainder. See Also -------- poly, polyadd, polyder, polydiv, polyfit, polyint, polymul, polysub polyval Notes ----- Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need not equal `v.ndim`. In other words, all four possible combinations - ``u.ndim = v.ndim = 0``, ``u.ndim = v.ndim = 1``, ``u.ndim = 1, v.ndim = 0``, and ``u.ndim = 0, v.ndim = 1`` - work. Examples -------- .. math:: \\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25 >>> import numpy as np >>> x = np.array([3.0, 5.0, 2.0]) >>> y = np.array([2.0, 1.0]) >>> np.polydiv(x, y) (array([1.5 , 1.75]), array([0.25])) """ truepoly = (isinstance(u, poly1d) or isinstance(v, poly1d)) u = atleast_1d(u) + 0.0 v = atleast_1d(v) + 0.0 # w has the common type w = u[0] + v[0] m = len(u) - 1 n = len(v) - 1 scale = 1. / v[0] q = NX.zeros((max(m - n + 1, 1),), w.dtype) r = u.astype(w.dtype) for k in range(m - n + 1): d = scale * r[k] q[k] = d r[k:k + n + 1] -= d * v while NX.allclose(r[0], 0, rtol=1e-14) and (r.shape[-1] > 1): r = r[1:] if truepoly: return poly1d(q), poly1d(r) return q, r
Returns the quotient and remainder of polynomial division. .. note:: This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in `numpy.polynomial` is preferred. A summary of the differences can be found in the :doc:`transition guide </reference/routines.polynomials>`. The input arrays are the coefficients (including any coefficients equal to zero) of the "numerator" (dividend) and "denominator" (divisor) polynomials, respectively. Parameters ---------- u : array_like or poly1d Dividend polynomial's coefficients. v : array_like or poly1d Divisor polynomial's coefficients. Returns ------- q : ndarray Coefficients, including those equal to zero, of the quotient. r : ndarray Coefficients, including those equal to zero, of the remainder. See Also -------- poly, polyadd, polyder, polydiv, polyfit, polyint, polymul, polysub polyval Notes ----- Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need not equal `v.ndim`. In other words, all four possible combinations - ``u.ndim = v.ndim = 0``, ``u.ndim = v.ndim = 1``, ``u.ndim = 1, v.ndim = 0``, and ``u.ndim = 0, v.ndim = 1`` - work. Examples -------- .. math:: \\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25 >>> import numpy as np >>> x = np.array([3.0, 5.0, 2.0]) >>> y = np.array([2.0, 1.0]) >>> np.polydiv(x, y) (array([1.5 , 1.75]), array([0.25]))
python
numpy/lib/_polynomial_impl.py
990
[ "u", "v" ]
false
6
7.6
numpy/numpy
31,054
numpy
false
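The `polydiv` record above is the classic polynomial long-division loop. A dependency-free Python sketch of the same algorithm, with coefficients ordered highest power first as in the NumPy docstring:

```python
def polydiv(u, v):
    """Divide polynomial u by v; return (quotient, remainder) coefficient lists."""
    u, v = [float(c) for c in u], [float(c) for c in v]
    m, n = len(u) - 1, len(v) - 1
    scale = 1.0 / v[0]
    q = [0.0] * max(m - n + 1, 1)
    r = u[:]
    for k in range(m - n + 1):
        d = scale * r[k]
        q[k] = d
        for j in range(n + 1):
            r[k + j] -= d * v[j]
    # strip leading (near-)zero remainder coefficients
    while len(r) > 1 and abs(r[0]) < 1e-12:
        r = r[1:]
    return q, r

# (3x^2 + 5x + 2) / (2x + 1) = 1.5x + 1.75, remainder 0.25
print(polydiv([3.0, 5.0, 2.0], [2.0, 1.0]))  # ([1.5, 1.75], [0.25])
```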
lineLengths
function lineLengths(content) { const contentLength = content.length; const output = []; let lineLength = 0; for (let i = 0; i < contentLength; i++, lineLength++) { const codePoint = StringPrototypeCodePointAt(content, i); // We purposefully keep \r as part of the line-length calculation, in // cases where there is a \r\n separator, so that this can be taken into // account in coverage calculations. // codepoints for \n (new line), \u2028 (line separator) and \u2029 (paragraph separator) if (codePoint === 10 || codePoint === 0x2028 || codePoint === 0x2029) { ArrayPrototypePush(output, lineLength); lineLength = -1; // To not count the matched codePoint such as \n character } } ArrayPrototypePush(output, lineLength); return output; }
Computes the length of each line in `content`, treating `\n`, `\u2028` and `\u2029` as line terminators. A trailing `\r` is deliberately counted as part of the line so that `\r\n` separators are reflected in coverage calculations. @param {string} content - source text to measure @returns {number[]} the length of each line in `content`
javascript
lib/internal/source_map/source_map_cache.js
257
[ "content" ]
false
5
6.08
nodejs/node
114,839
jsdoc
false
printComposite
private static String printComposite(Duration duration) { if (duration.isZero()) { return DurationFormat.Unit.SECONDS.print(duration); } StringBuilder result = new StringBuilder(); if (duration.isNegative()) { result.append('-'); duration = duration.negated(); } long days = duration.toDaysPart(); if (days != 0) { result.append(days).append(DurationFormat.Unit.DAYS.asSuffix()); } int hours = duration.toHoursPart(); if (hours != 0) { result.append(hours).append(DurationFormat.Unit.HOURS.asSuffix()); } int minutes = duration.toMinutesPart(); if (minutes != 0) { result.append(minutes).append(DurationFormat.Unit.MINUTES.asSuffix()); } int seconds = duration.toSecondsPart(); if (seconds != 0) { result.append(seconds).append(DurationFormat.Unit.SECONDS.asSuffix()); } int millis = duration.toMillisPart(); if (millis != 0) { result.append(millis).append(DurationFormat.Unit.MILLIS.asSuffix()); } //special handling of nanos: remove the millis part and then divide into microseconds and nanoseconds long nanos = duration.toNanosPart() - Duration.ofMillis(millis).toNanos(); if (nanos != 0) { long micros = nanos / 1000; long remainder = nanos - (micros * 1000); if (micros > 0) { result.append(micros).append(DurationFormat.Unit.MICROS.asSuffix()); } if (remainder > 0) { result.append(remainder).append(DurationFormat.Unit.NANOS.asSuffix()); } } return result.toString(); }
Prints the duration in the composite style (for example {@code 1d2h34m57s}). Zero-valued units are omitted, a leading {@code -} marks negative durations, and the sub-millisecond part is split into microseconds and nanoseconds. @param duration the duration to print @return the composite-style representation of the duration
java
spring-context/src/main/java/org/springframework/format/datetime/standard/DurationFormatterUtils.java
181
[ "duration" ]
String
true
11
7.92
spring-projects/spring-framework
59,386
javadoc
false
_make_key
def _make_key( custom_params_encoder: Callable[..., object] | None, *args: object, **kwargs: object, ) -> str: """Generate a cache key from function parameters. Args: custom_params_encoder: Optional encoder to apply to function parameters. If None, params are pickled directly. *args: Positional arguments to encode. **kwargs: Keyword arguments to encode. Returns: A 32-character hex string suitable for use as a cache key. """ if custom_params_encoder is None: # Pickle the parameters directly pickled_params: bytes = pickle.dumps((args, kwargs)) else: # Encode the parameters using the custom encoder encoded_params = custom_params_encoder(*args, **kwargs) # Pickle the encoded output pickled_params = pickle.dumps(encoded_params) # Hash the pickled bytes with SHA256 hash_obj = sha256(pickled_params) # Get hex digest and truncate to 32 characters return hash_obj.hexdigest()[:32]
Generate a cache key from function parameters. Args: custom_params_encoder: Optional encoder to apply to function parameters. If None, params are pickled directly. *args: Positional arguments to encode. **kwargs: Keyword arguments to encode. Returns: A 32-character hex string suitable for use as a cache key.
python
torch/_inductor/runtime/caching/interfaces.py
95
[ "custom_params_encoder" ]
str
true
3
7.76
pytorch/pytorch
96,034
google
false
toString
@Override public String toString() { return this.descriptions.toString(); }
Return the parameter descriptions. @return the descriptions
java
loader/spring-boot-jarmode-tools/src/main/java/org/springframework/boot/jarmode/tools/Command.java
175
[]
String
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
resolveAutowiredArgument
public @Nullable Object resolveAutowiredArgument( DependencyDescriptor descriptor, TypeConverter typeConverter, Set<String> autowiredBeanNames) { return new ConstructorResolver((AbstractAutowireCapableBeanFactory) getBeanFactory()) .resolveAutowiredArgument(descriptor, descriptor.getDependencyType(), getBeanName(), autowiredBeanNames, typeConverter, true); }
Resolve an autowired argument. @param descriptor the descriptor for the dependency (field/method/constructor) @param typeConverter the TypeConverter to use for populating arrays and collections @param autowiredBeanNames a Set that all names of autowired beans (used for resolving the given dependency) are supposed to be added to @return the resolved object, or {@code null} if none found @since 6.0.9
java
spring-beans/src/main/java/org/springframework/beans/factory/support/RegisteredBean.java
247
[ "descriptor", "typeConverter", "autowiredBeanNames" ]
Object
true
1
6.24
spring-projects/spring-framework
59,386
javadoc
false
capitalize
public static String capitalize(final String str) { return capitalize(str, null); }
Capitalizes all the whitespace separated words in a String. Only the first character of each word is changed. To convert the rest of each word to lowercase at the same time, use {@link #capitalizeFully(String)}. <p>Whitespace is defined by {@link Character#isWhitespace(char)}. A {@code null} input String returns {@code null}. Capitalization uses the Unicode title case, normally equivalent to upper case.</p> <pre> WordUtils.capitalize(null) = null WordUtils.capitalize("") = "" WordUtils.capitalize("i am FINE") = "I Am FINE" </pre> @param str the String to capitalize, may be null. @return capitalized String, {@code null} if null String input. @see #uncapitalize(String) @see #capitalizeFully(String)
java
src/main/java/org/apache/commons/lang3/text/WordUtils.java
62
[ "str" ]
String
true
1
6.32
apache/commons-lang
2,896
javadoc
false
removeElement
public static double[] removeElement(final double[] array, final double element) { final int index = indexOf(array, element); return index == INDEX_NOT_FOUND ? clone(array) : remove(array, index); }
Removes the first occurrence of the specified element from the specified array. All subsequent elements are shifted to the left (subtracts one from their indices). If the array doesn't contain such an element, no elements are removed from the array. <p> This method returns a new array with the same elements of the input array except the first occurrence of the specified element. The component type of the returned array is always the same as that of the input array. </p> <pre> ArrayUtils.removeElement(null, 1.1) = null ArrayUtils.removeElement([], 1.1) = [] ArrayUtils.removeElement([1.1], 1.2) = [1.1] ArrayUtils.removeElement([1.1, 2.3], 1.1) = [2.3] ArrayUtils.removeElement([1.1, 2.3, 1.1], 1.1) = [2.3, 1.1] </pre> @param array the input array, may be {@code null}. @param element the element to be removed. @return A new array containing the existing elements except the first occurrence of the specified element. @since 2.1
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
5,732
[ "array", "element" ]
true
2
7.84
apache/commons-lang
2,896
javadoc
false
getExitingExecutorService
@J2ktIncompatible @GwtIncompatible // TODO @IgnoreJRERequirement // Users will use this only if they're already using Duration. public static ExecutorService getExitingExecutorService( ThreadPoolExecutor executor, Duration terminationTimeout) { return getExitingExecutorService( executor, toNanosSaturated(terminationTimeout), TimeUnit.NANOSECONDS); }
Converts the given ThreadPoolExecutor into an ExecutorService that exits when the application is complete. It does so by using daemon threads and adding a shutdown hook to wait for their completion. <p>This is mainly for fixed thread pools. See {@link Executors#newFixedThreadPool(int)}. @param executor the executor to modify to make sure it exits when the application is finished @param terminationTimeout how long to wait for the executor to finish before terminating the JVM @return an unmodifiable version of the input which will not hang the JVM @since 33.4.0 (but since 28.0 in the JRE flavor)
java
android/guava/src/com/google/common/util/concurrent/MoreExecutors.java
86
[ "executor", "terminationTimeout" ]
ExecutorService
true
1
6.72
google/guava
51,352
javadoc
false
centerValue
public double centerValue() { return this.centerValue; }
Get the value of this metrics center point. @return the center point value
java
clients/src/main/java/org/apache/kafka/common/metrics/stats/Frequency.java
55
[]
true
1
6.96
apache/kafka
31,560
javadoc
false
readField
public static Object readField(final Field field, final Object target) throws IllegalAccessException { return readField(field, target, false); }
Reads an accessible {@link Field}. @param field the field to use. @param target the object to call on, may be {@code null} for {@code static} fields. @return the field value @throws NullPointerException if the field is {@code null}. @throws IllegalAccessException if the field is not accessible. @throws SecurityException if an underlying accessible object's method denies the request. @see SecurityManager#checkPermission
java
src/main/java/org/apache/commons/lang3/reflect/FieldUtils.java
376
[ "field", "target" ]
Object
true
1
6.32
apache/commons-lang
2,896
javadoc
false
flush
@Override public void flush() throws IOException { checkNotClosed(); if (target instanceof Flushable) { ((Flushable) target).flush(); } }
Flushes the target if it implements {@link Flushable}; otherwise does nothing. @throws IOException if this writer has already been closed or the target fails to flush
java
android/guava/src/com/google/common/io/AppendableWriter.java
87
[]
void
true
2
6.72
google/guava
51,352
javadoc
false
bindToSpringApplication
protected void bindToSpringApplication(ConfigurableEnvironment environment) { try { Binder.get(environment).bind("spring.main", Bindable.ofInstance(this.properties)); } catch (Exception ex) { throw new IllegalStateException("Cannot bind to SpringApplication", ex); } }
Bind the environment to the {@link ApplicationProperties}. @param environment the environment to bind
java
core/spring-boot/src/main/java/org/springframework/boot/SpringApplication.java
550
[ "environment" ]
void
true
2
6.08
spring-projects/spring-boot
79,428
javadoc
false
intValue
@Override public int intValue() { return numerator / denominator; }
Gets the fraction as an {@code int}. This returns the whole number part of the fraction. @return the whole number fraction part
java
src/main/java/org/apache/commons/lang3/math/Fraction.java
733
[]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
pathHasQueryOrFragment
function pathHasQueryOrFragment (url) { return ( url.includes('?') || url.includes('#') ) }
@param {string} url The path to check for query strings or fragments. @returns {boolean} Returns true if the path contains a query string or fragment.
javascript
deps/undici/src/lib/core/util.js
109
[ "url" ]
false
2
6.24
nodejs/node
114,839
jsdoc
false
ready
@Override public boolean ready() throws IOException { return (current != null) && current.ready(); }
Returns whether the current reader is ready to be read without blocking, or {@code false} if there is no current reader.
java
android/guava/src/com/google/common/io/MultiReader.java
81
[]
true
2
6.96
google/guava
51,352
javadoc
false
check_dag_prefix
def check_dag_prefix(env_name: str, env_value: str) -> None: """ Validate dag prefix value. Checks if value of dag prefix env variable is a prefix for one of the forbidden dag ids (which would cause runs of corresponding DAGs to be collected alongside the real test Dag Runs). :param env_name: name of the environment variable which is being checked. :param env_value: value of the variable. """ # TODO: allow every environment type to specify its own "forbidden" matching dag ids safe_dag_prefix = safe_dag_id(env_value) matching_dag_ids = [ dag_id for dag_id in DAG_IDS_NOT_ALLOWED_TO_MATCH_PREFIX if dag_id.startswith(safe_dag_prefix) ] if matching_dag_ids: raise ValueError( f"Value '{env_value}' of {env_name} is not allowed as {safe_dag_prefix} is a prefix " f"for the following forbidden dag ids: {matching_dag_ids}" )
Validate dag prefix value. Checks if value of dag prefix env variable is a prefix for one of the forbidden dag ids (which would cause runs of corresponding DAGs to be collected alongside the real test Dag Runs). :param env_name: name of the environment variable which is being checked. :param env_value: value of the variable.
python
performance/src/performance_dags/performance_dag/performance_dag_utils.py
196
[ "env_name", "env_value" ]
None
true
2
7.2
apache/airflow
43,597
sphinx
false
stringSize
function stringSize(string) { return hasUnicode(string) ? unicodeSize(string) : asciiSize(string); }
Gets the number of symbols in `string`. @private @param {string} string The string to inspect. @returns {number} Returns the string size.
javascript
lodash.js
1,337
[ "string" ]
false
2
6.24
lodash/lodash
61,490
jsdoc
false
handle_exception
def handle_exception(self, ctx: AppContext, e: Exception) -> Response: """Handle an exception that did not have an error handler associated with it, or that was raised from an error handler. This always causes a 500 ``InternalServerError``. Always sends the :data:`got_request_exception` signal. If :data:`PROPAGATE_EXCEPTIONS` is ``True``, such as in debug mode, the error will be re-raised so that the debugger can display it. Otherwise, the original exception is logged, and an :exc:`~werkzeug.exceptions.InternalServerError` is returned. If an error handler is registered for ``InternalServerError`` or ``500``, it will be used. For consistency, the handler will always receive the ``InternalServerError``. The original unhandled exception is available as ``e.original_exception``. .. versionchanged:: 1.1.0 Always passes the ``InternalServerError`` instance to the handler, setting ``original_exception`` to the unhandled error. .. versionchanged:: 1.1.0 ``after_request`` functions and other finalization is done even for the default 500 response when there is no handler. .. versionadded:: 0.3 """ exc_info = sys.exc_info() got_request_exception.send(self, _async_wrapper=self.ensure_sync, exception=e) propagate = self.config["PROPAGATE_EXCEPTIONS"] if propagate is None: propagate = self.testing or self.debug if propagate: # Re-raise if called with an active exception, otherwise # raise the passed in exception. if exc_info[1] is e: raise raise e self.log_exception(ctx, exc_info) server_error: InternalServerError | ft.ResponseReturnValue server_error = InternalServerError(original_exception=e) handler = self._find_error_handler(server_error, ctx.request.blueprints) if handler is not None: server_error = self.ensure_sync(handler)(server_error) return self.finalize_request(ctx, server_error, from_error_handler=True)
Handle an exception that did not have an error handler associated with it, or that was raised from an error handler. This always causes a 500 ``InternalServerError``. Always sends the :data:`got_request_exception` signal. If :data:`PROPAGATE_EXCEPTIONS` is ``True``, such as in debug mode, the error will be re-raised so that the debugger can display it. Otherwise, the original exception is logged, and an :exc:`~werkzeug.exceptions.InternalServerError` is returned. If an error handler is registered for ``InternalServerError`` or ``500``, it will be used. For consistency, the handler will always receive the ``InternalServerError``. The original unhandled exception is available as ``e.original_exception``. .. versionchanged:: 1.1.0 Always passes the ``InternalServerError`` instance to the handler, setting ``original_exception`` to the unhandled error. .. versionchanged:: 1.1.0 ``after_request`` functions and other finalization is done even for the default 500 response when there is no handler. .. versionadded:: 0.3
python
src/flask/app.py
896
[ "self", "ctx", "e" ]
Response
true
6
6.4
pallets/flask
70,946
unknown
false
gen_even_slices
def gen_even_slices(n, n_packs, *, n_samples=None): """Generator to create `n_packs` evenly spaced slices going up to `n`. If `n_packs` does not divide `n`, except for the first `n % n_packs` slices, remaining slices may contain fewer elements. Parameters ---------- n : int Size of the sequence. n_packs : int Number of slices to generate. n_samples : int, default=None Number of samples. Pass `n_samples` when the slices are to be used for sparse matrix indexing; slicing off-the-end raises an exception, while it works for NumPy arrays. Yields ------ `slice` representing a set of indices from 0 to n. See Also -------- gen_batches: Generator to create slices containing batch_size elements from 0 to n. Examples -------- >>> from sklearn.utils import gen_even_slices >>> list(gen_even_slices(10, 1)) [slice(0, 10, None)] >>> list(gen_even_slices(10, 10)) [slice(0, 1, None), slice(1, 2, None), ..., slice(9, 10, None)] >>> list(gen_even_slices(10, 5)) [slice(0, 2, None), slice(2, 4, None), ..., slice(8, 10, None)] >>> list(gen_even_slices(10, 3)) [slice(0, 4, None), slice(4, 7, None), slice(7, 10, None)] """ start = 0 for pack_num in range(n_packs): this_n = n // n_packs if pack_num < n % n_packs: this_n += 1 if this_n > 0: end = start + this_n if n_samples is not None: end = min(n_samples, end) yield slice(start, end, None) start = end
Generator to create `n_packs` evenly spaced slices going up to `n`. If `n_packs` does not divide `n`, except for the first `n % n_packs` slices, remaining slices may contain fewer elements. Parameters ---------- n : int Size of the sequence. n_packs : int Number of slices to generate. n_samples : int, default=None Number of samples. Pass `n_samples` when the slices are to be used for sparse matrix indexing; slicing off-the-end raises an exception, while it works for NumPy arrays. Yields ------ `slice` representing a set of indices from 0 to n. See Also -------- gen_batches: Generator to create slices containing batch_size elements from 0 to n. Examples -------- >>> from sklearn.utils import gen_even_slices >>> list(gen_even_slices(10, 1)) [slice(0, 10, None)] >>> list(gen_even_slices(10, 10)) [slice(0, 1, None), slice(1, 2, None), ..., slice(9, 10, None)] >>> list(gen_even_slices(10, 5)) [slice(0, 2, None), slice(2, 4, None), ..., slice(8, 10, None)] >>> list(gen_even_slices(10, 3)) [slice(0, 4, None), slice(4, 7, None), slice(7, 10, None)]
python
sklearn/utils/_chunking.py
89
[ "n", "n_packs", "n_samples" ]
false
5
7.68
scikit-learn/scikit-learn
64,340
numpy
false
tryAcquire
@IgnoreJRERequirement // Users will use this only if they're already using Duration. public boolean tryAcquire(int permits, Duration timeout) { return tryAcquire(permits, toNanosSaturated(timeout), TimeUnit.NANOSECONDS); }
Acquires the given number of permits from this {@code RateLimiter} if it can be obtained without exceeding the specified {@code timeout}, or returns {@code false} immediately (without waiting) if the permits would not have been granted before the timeout expired. @param permits the number of permits to acquire @param timeout the maximum time to wait for the permits. Negative values are treated as zero. @return {@code true} if the permits were acquired, {@code false} otherwise @throws IllegalArgumentException if the requested number of permits is negative or zero @since 33.4.0 (but since 28.0 in the JRE flavor)
java
android/guava/src/com/google/common/util/concurrent/RateLimiter.java
395
[ "permits", "timeout" ]
true
1
6.64
google/guava
51,352
javadoc
false
timeFormat
public DateTimeFormatters timeFormat(@Nullable String pattern) { this.timeFormatter = isIso(pattern) ? DateTimeFormatter.ISO_LOCAL_TIME : (isIsoOffset(pattern) ? DateTimeFormatter.ISO_OFFSET_TIME : formatter(pattern)); return this; }
Configures the time format using the given {@code pattern}. @param pattern the pattern for formatting times @return {@code this} for chained method invocation
java
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/format/DateTimeFormatters.java
65
[ "pattern" ]
DateTimeFormatters
true
3
8.08
spring-projects/spring-boot
79,428
javadoc
false
secondary_training_status_changed
def secondary_training_status_changed(current_job_description: dict, prev_job_description: dict) -> bool: """ Check if training job's secondary status message has changed. :param current_job_description: Current job description, returned from DescribeTrainingJob call. :param prev_job_description: Previous job description, returned from DescribeTrainingJob call. :return: Whether the secondary status message of a training job changed or not. """ current_secondary_status_transitions = current_job_description.get("SecondaryStatusTransitions") if not current_secondary_status_transitions: return False prev_job_secondary_status_transitions = ( prev_job_description.get("SecondaryStatusTransitions") if prev_job_description is not None else None ) last_message = ( prev_job_secondary_status_transitions[-1]["StatusMessage"] if prev_job_secondary_status_transitions else "" ) message = current_job_description["SecondaryStatusTransitions"][-1]["StatusMessage"] return message != last_message
Check if training job's secondary status message has changed. :param current_job_description: Current job description, returned from DescribeTrainingJob call. :param prev_job_description: Previous job description, returned from DescribeTrainingJob call. :return: Whether the secondary status message of a training job changed or not.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/sagemaker.py
81
[ "current_job_description", "prev_job_description" ]
bool
true
4
7.44
apache/airflow
43,597
sphinx
false
clear
private void clear() { this.completedSends.clear(); this.completedReceives.clear(); this.connected.clear(); this.disconnected.clear(); // Remove closed channels after all their buffered receives have been processed or if a send was requested for (Iterator<Map.Entry<String, KafkaChannel>> it = closingChannels.entrySet().iterator(); it.hasNext(); ) { KafkaChannel channel = it.next().getValue(); boolean sendFailed = failedSends.remove(channel.id()); boolean hasPending = false; if (!sendFailed) hasPending = maybeReadFromClosingChannel(channel); if (!hasPending) { doClose(channel, true); it.remove(); } } for (String channel : this.failedSends) this.disconnected.put(channel, ChannelState.FAILED_SEND); this.failedSends.clear(); this.madeReadProgressLastPoll = false; }
Clears all the results from the previous poll. This is invoked by Selector at the start of a poll() when all the results from the previous poll are expected to have been handled. <p> SocketServer uses {@link #clearCompletedSends()} and {@link #clearCompletedReceives()} to clear `completedSends` and `completedReceives` as soon as they are processed to avoid holding onto large request/response buffers from multiple connections longer than necessary. Clients rely on Selector invoking {@link #clear()} at the start of each poll() since memory usage is less critical and clearing once-per-poll provides the flexibility to process these results in any order before the next poll.
java
clients/src/main/java/org/apache/kafka/common/network/Selector.java
842
[]
void
true
4
6.88
apache/kafka
31,560
javadoc
false
resume
public void resume() { if (runningState != State.SUSPENDED) { throw new IllegalStateException("Stopwatch must be suspended to resume."); } startTimeNanos += System.nanoTime() - stopTimeNanos; runningState = State.RUNNING; }
Resumes this StopWatch after a suspend. <p> This method resumes the watch after it was suspended. The watch will not include time between the suspend and resume calls in the total time. </p> @throws IllegalStateException if this StopWatch has not been suspended.
java
src/main/java/org/apache/commons/lang3/time/StopWatch.java
637
[]
void
true
2
6.88
apache/commons-lang
2,896
javadoc
false
create
public static AdminClient create(Map<String, Object> conf) { return (AdminClient) Admin.create(conf); }
Create a new Admin with the given configuration. @param conf The configuration. @return The new KafkaAdminClient.
java
clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java
48
[ "conf" ]
AdminClient
true
1
6.32
apache/kafka
31,560
javadoc
false
createEntries
@Override List<Entry<K, V>> createEntries() { @WeakOuter final class EntriesImpl extends AbstractSequentialList<Entry<K, V>> { @Override public int size() { return size; } @Override public ListIterator<Entry<K, V>> listIterator(int index) { return new NodeIterator(index); } @Override public void forEach(Consumer<? super Entry<K, V>> action) { checkNotNull(action); for (Node<K, V> node = head; node != null; node = node.next) { action.accept(node); } } } return new EntriesImpl(); }
{@inheritDoc} <p>The iterator generated by the returned collection traverses the entries in the order they were added to the multimap. Because the entries may have duplicates and follow the insertion ordering, this method returns a {@link List}, instead of the {@link Collection} specified in the {@link ListMultimap} interface. <p>An entry's {@link Entry#getKey} method always returns the same key, regardless of what happens subsequently. As long as the corresponding key-value mapping is not removed from the multimap, {@link Entry#getValue} returns the value from the multimap, which may change over time, and {@link Entry#setValue} modifies that value. Removing the mapping from the multimap does not alter the value returned by {@code getValue()}, though a subsequent {@code setValue()} call won't update the multimap but will lead to a revised value being returned by {@code getValue()}.
java
guava/src/com/google/common/collect/LinkedListMultimap.java
803
[]
true
2
6.56
google/guava
51,352
javadoc
false
bytes
@Override public UTF8Bytes bytes() { if (bytes == null) { byte[] byteArray = string.getBytes(StandardCharsets.UTF_8); bytes = new UTF8Bytes(byteArray, 0, byteArray.length); } return bytes; }
Returns a {@link UTF8Bytes} view of the data, encoding the string as UTF-8 and caching the result on first access.
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/Text.java
67
[]
UTF8Bytes
true
2
6.88
elastic/elasticsearch
75,680
javadoc
false
determineRequiredStatus
protected boolean determineRequiredStatus(MergedAnnotation<?> ann) { return (ann.getValue(this.requiredParameterName).isEmpty() || this.requiredParameterValue == ann.getBoolean(this.requiredParameterName)); }
Determine if the annotated field or method requires its dependency. <p>A 'required' dependency means that autowiring should fail when no beans are found. Otherwise, the autowiring process will simply bypass the field or method when no beans are found. @param ann the Autowired annotation @return whether the annotation indicates that a dependency is required
java
spring-beans/src/main/java/org/springframework/beans/factory/annotation/AutowiredAnnotationBeanPostProcessor.java
627
[ "ann" ]
true
2
8.16
spring-projects/spring-framework
59,386
javadoc
false
get_join_indexers_non_unique
def get_join_indexers_non_unique( left: ArrayLike, right: ArrayLike, sort: bool = False, how: JoinHow = "inner", ) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp]]: """ Get join indexers for left and right. Parameters ---------- left : ArrayLike right : ArrayLike sort : bool, default False how : {'inner', 'outer', 'left', 'right'}, default 'inner' Returns ------- np.ndarray[np.intp] Indexer into left. np.ndarray[np.intp] Indexer into right. """ lkey, rkey, count = _factorize_keys(left, right, sort=sort, how=how) if count == -1: # hash join return lkey, rkey if how == "left": lidx, ridx = libjoin.left_outer_join(lkey, rkey, count, sort=sort) elif how == "right": ridx, lidx = libjoin.left_outer_join(rkey, lkey, count, sort=sort) elif how == "inner": lidx, ridx = libjoin.inner_join(lkey, rkey, count, sort=sort) elif how == "outer": lidx, ridx = libjoin.full_outer_join(lkey, rkey, count) return lidx, ridx
Get join indexers for left and right. Parameters ---------- left : ArrayLike right : ArrayLike sort : bool, default False how : {'inner', 'outer', 'left', 'right'}, default 'inner' Returns ------- np.ndarray[np.intp] Indexer into left. np.ndarray[np.intp] Indexer into right.
python
pandas/core/reshape/merge.py
2,121
[ "left", "right", "sort", "how" ]
tuple[npt.NDArray[np.intp], npt.NDArray[np.intp]]
true
6
6.56
pandas-dev/pandas
47,362
numpy
false
openStream
InputStream openStream() throws IOException { Assert.state(this.file != null, "'file' must not be null"); return new FileInputStream(this.file); }
Open a stream that provides the content of the source file. @return the file content @throws IOException on error
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/Library.java
111
[]
InputStream
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
copy
def copy( self, file, mode: str = "w", propindexes: bool = True, keys=None, complib=None, complevel: int | None = None, fletcher32: bool = False, overwrite: bool = True, ) -> HDFStore: """ Copy the existing store to a new file, updating in place. Parameters ---------- propindexes : bool, default True Restore indexes in copied file. keys : list, optional List of keys to include in the copy (defaults to all). overwrite : bool, default True Whether to overwrite (remove and replace) existing nodes in the new store. mode, complib, complevel, fletcher32 same as in HDFStore.__init__ Returns ------- open file handle of the new store """ new_store = HDFStore( file, mode=mode, complib=complib, complevel=complevel, fletcher32=fletcher32 ) if keys is None: keys = list(self.keys()) if not isinstance(keys, (tuple, list)): keys = [keys] for k in keys: s = self.get_storer(k) if s is not None: if k in new_store: if overwrite: new_store.remove(k) data = self.select(k) if isinstance(s, Table): index: bool | list[str] = False if propindexes: index = [a.name for a in s.axes if a.is_indexed] new_store.append( k, data, index=index, data_columns=getattr(s, "data_columns", None), encoding=s.encoding, ) else: new_store.put(k, data, encoding=s.encoding) return new_store
Copy the existing store to a new file, updating in place. Parameters ---------- propindexes : bool, default True Restore indexes in copied file. keys : list, optional List of keys to include in the copy (defaults to all). overwrite : bool, default True Whether to overwrite (remove and replace) existing nodes in the new store. mode, complib, complevel, fletcher32 same as in HDFStore.__init__ Returns ------- open file handle of the new store
python
pandas/io/pytables.py
1,688
[ "self", "file", "mode", "propindexes", "keys", "complib", "complevel", "fletcher32", "overwrite" ]
HDFStore
true
10
6.8
pandas-dev/pandas
47,362
numpy
false
_check_mode
def _check_mode(mode, encoding, newline): """Check mode and that encoding and newline are compatible. Parameters ---------- mode : str File open mode. encoding : str File encoding. newline : str Newline for text files. """ if "t" in mode: if "b" in mode: raise ValueError(f"Invalid mode: {mode!r}") else: if encoding is not None: raise ValueError("Argument 'encoding' not supported in binary mode") if newline is not None: raise ValueError("Argument 'newline' not supported in binary mode")
Check mode and that encoding and newline are compatible. Parameters ---------- mode : str File open mode. encoding : str File encoding. newline : str Newline for text files.
python
numpy/lib/_datasource.py
44
[ "mode", "encoding", "newline" ]
false
6
6.08
numpy/numpy
31,054
numpy
false
sem
def sem(self, ddof: int = 1, numeric_only: bool = False): """ Calculate the expanding standard error of mean. Parameters ---------- ddof : int, default 1 Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. numeric_only : bool, default False Include only float, int, boolean columns. Returns ------- Series or DataFrame Return type is the same as the original object with ``np.float64`` dtype. See Also -------- Series.expanding : Calling expanding with Series data. DataFrame.expanding : Calling expanding with DataFrames. Series.sem : Aggregating sem for Series. DataFrame.sem : Aggregating sem for DataFrame. Notes ----- A minimum of one period is required for the calculation. Examples -------- >>> s = pd.Series([0, 1, 2, 3]) >>> s.expanding().sem() 0 NaN 1 0.707107 2 0.707107 3 0.745356 dtype: float64 """ return super().sem(ddof=ddof, numeric_only=numeric_only)
Calculate the expanding standard error of mean. Parameters ---------- ddof : int, default 1 Delta Degrees of Freedom. The divisor used in calculations is ``N - ddof``, where ``N`` represents the number of elements. numeric_only : bool, default False Include only float, int, boolean columns. Returns ------- Series or DataFrame Return type is the same as the original object with ``np.float64`` dtype. See Also -------- Series.expanding : Calling expanding with Series data. DataFrame.expanding : Calling expanding with DataFrames. Series.sem : Aggregating sem for Series. DataFrame.sem : Aggregating sem for DataFrame. Notes ----- A minimum of one period is required for the calculation. Examples -------- >>> s = pd.Series([0, 1, 2, 3]) >>> s.expanding().sem() 0 NaN 1 0.707107 2 0.707107 3 0.745356 dtype: float64
python
pandas/core/window/expanding.py
868
[ "self", "ddof", "numeric_only" ]
true
1
7.12
pandas-dev/pandas
47,362
numpy
false
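The expanding-SEM values in the docstring above can be reproduced with the standard library alone. This sketch assumes the formula std(ddof=1) / sqrt(n - ddof), which is inferred from the documented output rather than from pandas internals:

```python
import math
import statistics

def expanding_sem(values, ddof=1):
    """Standard error of the mean over growing prefixes of values."""
    out = []
    for n in range(1, len(values) + 1):
        if n - ddof <= 0:
            out.append(float("nan"))  # not enough observations yet
        else:
            out.append(statistics.stdev(values[:n]) / math.sqrt(n - ddof))
    return out

print(expanding_sem([0, 1, 2, 3]))
# approximately: nan, 0.707107, 0.707107, 0.745356 — matching the docstring
```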
hexToLong
public static long hexToLong(final String src, final int srcPos, final long dstInit, final int dstPos, final int nHex) { if (0 == nHex) { return dstInit; } if ((nHex - 1) * 4 + dstPos >= Long.SIZE) { throw new IllegalArgumentException("(nHexs - 1) * 4 + dstPos >= 64"); } long out = dstInit; for (int i = 0; i < nHex; i++) { final int shift = i * 4 + dstPos; final long bits = (0xfL & hexDigitToInt(src.charAt(i + srcPos))) << shift; final long mask = 0xfL << shift; out = out & ~mask | bits; } return out; }
Converts a hexadecimal String into a long using the default (little-endian, LSB0) byte and bit ordering. @param src the hexadecimal string to convert. @param srcPos the position in {@code src}, in char unit, from where to start the conversion. @param dstInit initial value of the destination long. @param dstPos the position of the LSB, in bits, in the result long. @param nHex the number of chars to convert. @return a long containing the selected bits. @throws IllegalArgumentException if {@code (nHex - 1) * 4 + dstPos >= 64}.
java
src/main/java/org/apache/commons/lang3/Conversion.java
800
[ "src", "srcPos", "dstInit", "dstPos", "nHex" ]
true
4
8.08
apache/commons-lang
2,896
javadoc
false
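The little-endian (LSB0) packing that `hexToLong` performs translates directly to Python. The names below are illustrative, but the shift/mask logic mirrors the Java loop above:

```python
def hex_to_long(src, src_pos, dst_init, dst_pos, n_hex):
    """Pack n_hex hex digits of src into dst_init, LSB-first from bit dst_pos."""
    if n_hex == 0:
        return dst_init
    if (n_hex - 1) * 4 + dst_pos >= 64:
        raise ValueError("(nHex - 1) * 4 + dstPos >= 64")
    out = dst_init
    for i in range(n_hex):
        shift = i * 4 + dst_pos
        bits = (0xF & int(src[i + src_pos], 16)) << shift
        mask = 0xF << shift
        # Clear the 4 destination bits, then OR in the new digit.
        out = (out & ~mask | bits) & 0xFFFFFFFFFFFFFFFF  # emulate 64-bit long
    return out

# '1' lands in bits 0-3, '2' in bits 4-7, 'e' in 8-11, 'f' in 12-15:
assert hex_to_long("12ef", 0, 0, 0, 4) == 0xFE21
```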
parameterize
public static final ParameterizedType parameterize(final Class<?> rawClass, final Type... typeArguments) { return parameterizeWithOwner(null, rawClass, typeArguments); }
Creates a parameterized type instance. @param rawClass the raw class to create a parameterized type instance for. @param typeArguments the types used for parameterization. @return {@link ParameterizedType}. @throws NullPointerException if {@code rawClass} is {@code null}. @since 3.2
java
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
1,403
[ "rawClass", "typeArguments" ]
ParameterizedType
true
1
6.16
apache/commons-lang
2,896
javadoc
false
save
def save(file, arr, allow_pickle=True): """ Save an array to a binary file in NumPy ``.npy`` format. Parameters ---------- file : file, str, or pathlib.Path File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a ``.npy`` extension will be appended to the filename if it does not already have one. arr : array_like Array data to be saved. allow_pickle : bool, optional Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True See Also -------- savez : Save several arrays into a ``.npz`` archive savetxt, load Notes ----- For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`. Any data saved to the file is appended to the end of the file. Examples -------- >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> np.save(outfile, x) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> np.load(outfile) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> with open('test.npy', 'wb') as f: ... np.save(f, np.array([1, 2])) ... np.save(f, np.array([1, 3])) >>> with open('test.npy', 'rb') as f: ... a = np.load(f) ... b = np.load(f) >>> print(a, b) # [1 2] [1 3] """ if hasattr(file, 'write'): file_ctx = contextlib.nullcontext(file) else: file = os.fspath(file) if not file.endswith('.npy'): file = file + '.npy' file_ctx = open(file, "wb") with file_ctx as fid: arr = np.asanyarray(arr) format.write_array(fid, arr, allow_pickle=allow_pickle)
Save an array to a binary file in NumPy ``.npy`` format. Parameters ---------- file : file, str, or pathlib.Path File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a ``.npy`` extension will be appended to the filename if it does not already have one. arr : array_like Array data to be saved. allow_pickle : bool, optional Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between different versions of Python). Default: True See Also -------- savez : Save several arrays into a ``.npz`` archive savetxt, load Notes ----- For a description of the ``.npy`` format, see :py:mod:`numpy.lib.format`. Any data saved to the file is appended to the end of the file. Examples -------- >>> import numpy as np >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> np.save(outfile, x) >>> _ = outfile.seek(0) # Only needed to simulate closing & reopening file >>> np.load(outfile) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> with open('test.npy', 'wb') as f: ... np.save(f, np.array([1, 2])) ... np.save(f, np.array([1, 3])) >>> with open('test.npy', 'rb') as f: ... a = np.load(f) ... b = np.load(f) >>> print(a, b) # [1 2] [1 3]
python
numpy/lib/_npyio_impl.py
505
[ "file", "arr", "allow_pickle" ]
false
4
7.76
numpy/numpy
31,054
numpy
false
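Under the hood, a version-1.0 ``.npy`` file begins with a magic string, a two-byte version, a little-endian header length, and an ASCII dict padded (with spaces and a trailing newline) to a 64-byte boundary. A stdlib-only sketch of that header layout — not NumPy's own implementation; see :py:mod:`numpy.lib.format` for the authoritative specification:

```python
import ast
import struct

MAGIC = b"\x93NUMPY"

def build_npy_header(descr, shape, fortran_order=False):
    """Build a v1.0 .npy header for the given dtype string and shape."""
    header = ("{'descr': %r, 'fortran_order': %r, 'shape': %r, }"
              % (descr, fortran_order, shape)).encode("ascii")
    prefix_len = len(MAGIC) + 2 + 2  # magic + version + 2-byte length field
    # Pad so the whole preamble is a multiple of 64 bytes, newline-terminated.
    pad = 64 - (prefix_len + len(header) + 1) % 64
    header = header + b" " * pad + b"\n"
    return MAGIC + bytes([1, 0]) + struct.pack("<H", len(header)) + header

def parse_npy_header(buf):
    """Parse the header back into (descr, fortran_order, shape)."""
    assert buf[:6] == MAGIC and buf[6:8] == bytes([1, 0])
    (hlen,) = struct.unpack("<H", buf[8:10])
    meta = ast.literal_eval(buf[10:10 + hlen].decode("ascii"))
    return meta["descr"], meta["fortran_order"], meta["shape"]

raw = build_npy_header("<i8", (3,))
assert parse_npy_header(raw) == ("<i8", False, (3,))
assert len(raw) % 64 == 0  # whole preamble is 64-byte aligned
```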
visitClassDeclaration
function visitClassDeclaration(node: ClassDeclaration): VisitResult<Statement | undefined> { let statements: Statement[] | undefined; if (hasSyntacticModifier(node, ModifierFlags.Export)) { statements = append( statements, setOriginalNode( setTextRange( factory.createClassDeclaration( visitNodes(node.modifiers, modifierVisitor, isModifierLike), factory.getDeclarationName(node, /*allowComments*/ true, /*allowSourceMaps*/ true), /*typeParameters*/ undefined, visitNodes(node.heritageClauses, visitor, isHeritageClause), visitNodes(node.members, visitor, isClassElement), ), node, ), node, ), ); } else { statements = append(statements, visitEachChild(node, visitor, context)); } statements = appendExportsOfHoistedDeclaration(statements, node); return singleOrMany(statements); }
Visits a ClassDeclaration node. @param node The node to visit.
typescript
src/compiler/transformers/module/module.ts
1,784
[ "node" ]
true
3
6.88
microsoft/TypeScript
107,154
jsdoc
false
createDefaultTrustManagerFactory
private static TrustManagerFactory createDefaultTrustManagerFactory() { String defaultAlgorithm = TrustManagerFactory.getDefaultAlgorithm(); TrustManagerFactory trustManagerFactory; try { trustManagerFactory = TrustManagerFactory.getInstance(defaultAlgorithm); trustManagerFactory.init((KeyStore) null); } catch (NoSuchAlgorithmException | KeyStoreException ex) { throw new IllegalStateException( "Unable to create TrustManagerFactory for default '%s' algorithm".formatted(defaultAlgorithm), ex); } return trustManagerFactory; }
Creates a {@link TrustManagerFactory} for the platform default algorithm, initialized with the default trust store. @return a new {@link TrustManagerFactory} @throws IllegalStateException if the factory cannot be created or initialized
java
core/spring-boot/src/main/java/org/springframework/boot/ssl/SslManagerBundle.java
156
[]
TrustManagerFactory
true
2
7.44
spring-projects/spring-boot
79,428
javadoc
false
create
public static <O> ReadWriteLockVisitor<O> create(final O object, final ReadWriteLock readWriteLock) { return new LockingVisitors.ReadWriteLockVisitor<>(object, readWriteLock); }
Creates a new instance of {@link ReadWriteLockVisitor} with the given object and lock. @param <O> The type of the object to protect. @param object The object to protect. @param readWriteLock The lock to use. @return A new {@link ReadWriteLockVisitor}. @see LockingVisitors @since 3.13.0
java
src/main/java/org/apache/commons/lang3/concurrent/locks/LockingVisitors.java
695
[ "object", "readWriteLock" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
max
public static byte max(byte... array) { checkArgument(array.length > 0); int max = toUnsignedInt(array[0]); for (int i = 1; i < array.length; i++) { int next = toUnsignedInt(array[i]); if (next > max) { max = next; } } return (byte) max; }
Returns the greatest value present in {@code array}, treating values as unsigned. @param array a <i>nonempty</i> array of {@code byte} values @return the value present in {@code array} that is greater than or equal to every other value in the array according to {@link #compare} @throws IllegalArgumentException if {@code array} is empty
java
android/guava/src/com/google/common/primitives/UnsignedBytes.java
162
[ "array" ]
true
3
7.76
google/guava
51,352
javadoc
false
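Python has no unsigned byte type, but the "treat values as unsigned" comparison that `UnsignedBytes.max` relies on can be shown by masking signed byte values with 0xFF before comparing (names here are illustrative):

```python
def unsigned_byte_max(values):
    """Greatest value in a nonempty list of signed bytes, compared as unsigned."""
    if not values:
        raise ValueError("array is empty")
    # -1 as a signed byte is 0xFF == 255 unsigned, so it beats every
    # non-negative byte under unsigned comparison.
    return max(values, key=lambda b: b & 0xFF)

assert unsigned_byte_max([1, 127, -1]) == -1  # -1 is 255 unsigned
assert unsigned_byte_max([0, 5, 3]) == 5
```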
asName
function asName<T extends DeclarationName | Identifier | BindingName | PropertyName | NoSubstitutionTemplateLiteral | EntityName | ThisTypeNode | undefined>(name: string | T): T | Identifier { return typeof name === "string" ? createIdentifier(name) : name; }
Coerces a string into an Identifier; a name that is already a node is returned unchanged. @param name The name, as a string or an existing name node.
typescript
src/compiler/factory/nodeFactory.ts
7,141
[ "name" ]
true
2
6.64
microsoft/TypeScript
107,154
jsdoc
false
dashIgnoringElementEquals
private boolean dashIgnoringElementEquals(Elements e1, Elements e2, int i) { int l1 = e1.getLength(i); int l2 = e2.getLength(i); int i1 = 0; int i2 = 0; while (i1 < l1) { if (i2 >= l2) { return remainderIsDashes(e1, i, i1); } char ch1 = e1.charAt(i, i1); char ch2 = e2.charAt(i, i2); if (ch1 == '-') { i1++; } else if (ch2 == '-') { i2++; } else if (ch1 != ch2) { return false; } else { i1++; i2++; } } if (i2 < l2) { if (e2.getType(i).isIndexed()) { return false; } do { char ch2 = e2.charAt(i, i2++); if (ch2 != '-') { return false; } } while (i2 < l2); } return true; }
Compares the element at index {@code i} of {@code e1} against the corresponding element of {@code e2}, ignoring any {@code '-'} characters. @param e1 the first elements @param e2 the second elements @param i the element index @return {@code true} if the elements are equal when dashes are ignored
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/ConfigurationPropertyName.java
424
[ "e1", "e2", "i" ]
true
9
8
spring-projects/spring-boot
79,428
javadoc
false
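The dash-ignoring comparison above underpins Spring's relaxed property-name matching, where "my-property" matches "myproperty". A much-simplified Python sketch of the core idea, operating on plain strings and omitting the indexed-element special case:

```python
def dash_ignoring_equals(e1, e2):
    """Compare two element strings, skipping '-' characters on either side."""
    i1 = i2 = 0
    while i1 < len(e1) and i2 < len(e2):
        if e1[i1] == "-":
            i1 += 1
        elif e2[i2] == "-":
            i2 += 1
        elif e1[i1] != e2[i2]:
            return False
        else:
            i1 += 1
            i2 += 1
    # Any leftover characters on either side must all be dashes.
    return (all(c == "-" for c in e1[i1:])
            and all(c == "-" for c in e2[i2:]))

assert dash_ignoring_equals("my-property", "myproperty")
assert not dash_ignoring_equals("my-property", "my-other")
```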
open
public static FileRecords open(File file) throws IOException { return open(file, true); }
Open the given file as a {@link FileRecords} instance. @param file The file to open @return A new {@link FileRecords} backed by the file @throws IOException if an I/O error occurs while opening the file
java
clients/src/main/java/org/apache/kafka/common/record/FileRecords.java
465
[ "file" ]
FileRecords
true
1
6.96
apache/kafka
31,560
javadoc
false
run_and_validate
def run_and_validate(self, program_path): """ Run a program and return detailed results for validation. Args: program_path: Path to the Python program to run Returns: dict: Dictionary with 'success', 'stdout', 'stderr', 'returncode' """ abs_path = os.path.abspath(program_path) # Select a random CUDA device if available cuda_visible_devices = os.environ.get("CUDA_VISIBLE_DEVICES") if cuda_visible_devices: devices = [d.strip() for d in cuda_visible_devices.split(",") if d.strip()] else: try: import torch num_gpus = torch.cuda.device_count() if num_gpus > 1: devices = [str(i) for i in range(1, num_gpus)] else: devices = [str(i) for i in range(num_gpus)] except ImportError: devices = [] if devices: selected_device = random.choice(devices) env = os.environ.copy() env["CUDA_VISIBLE_DEVICES"] = selected_device print(f"Selected CUDA_VISIBLE_DEVICES={selected_device}") else: env = None try: result = subprocess.run( [sys.executable, abs_path], capture_output=True, text=True, check=True, env=env, ) return { "success": True, "stdout": result.stdout, "stderr": result.stderr, "returncode": result.returncode, } except subprocess.CalledProcessError as e: return { "success": False, "stdout": e.stdout, "stderr": e.stderr, "returncode": e.returncode, }
Run a program and return detailed results for validation. Args: program_path: Path to the Python program to run Returns: dict: Dictionary with 'success', 'stdout', 'stderr', 'returncode'
python
tools/experimental/torchfuzz/runner.py
77
[ "self", "program_path" ]
false
7
6.72
pytorch/pytorch
96,034
google
false
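Stripped of the GPU-selection logic, the capture-and-report pattern in `run_and_validate` reduces to a few lines. A minimal sketch (the function name is hypothetical):

```python
import subprocess
import sys

def run_and_report(args):
    """Run a command, capturing output; never raises on a nonzero exit."""
    result = subprocess.run(args, capture_output=True, text=True)
    return {
        "success": result.returncode == 0,
        "stdout": result.stdout,
        "stderr": result.stderr,
        "returncode": result.returncode,
    }

ok = run_and_report([sys.executable, "-c", "print('hello')"])
bad = run_and_report([sys.executable, "-c", "import sys; sys.exit(3)"])
assert ok["success"] and ok["stdout"].strip() == "hello"
assert not bad["success"] and bad["returncode"] == 3
```

Unlike the original, this version omits `check=True` and inspects the return code itself, so success and failure flow through the same return path.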
synchronizedQueue
@J2ktIncompatible // Synchronized public static <E extends @Nullable Object> Queue<E> synchronizedQueue(Queue<E> queue) { return Synchronized.queue(queue, null); }
Returns a synchronized (thread-safe) queue backed by the specified queue. In order to guarantee serial access, it is critical that <b>all</b> access to the backing queue is accomplished through the returned queue. <p>It is imperative that the user manually synchronize on the returned queue when accessing the queue's iterator: {@snippet : Queue<E> queue = Queues.synchronizedQueue(MinMaxPriorityQueue.<E>create()); ... queue.add(element); // Needn't be in synchronized block ... synchronized (queue) { // Must synchronize on queue! Iterator<E> i = queue.iterator(); // Must be in synchronized block while (i.hasNext()) { foo(i.next()); } } } <p>Failure to follow this advice may result in non-deterministic behavior. <p>The returned queue will be serializable if the specified queue is serializable. @param queue the queue to be wrapped in a synchronized view @return a synchronized view of the specified queue @since 14.0
java
android/guava/src/com/google/common/collect/Queues.java
457
[ "queue" ]
true
1
6.32
google/guava
51,352
javadoc
false
checkStrictModeEvalOrArguments
function checkStrictModeEvalOrArguments(contextNode: Node, name: Node | undefined) { if (name && name.kind === SyntaxKind.Identifier) { const identifier = name as Identifier; if (isEvalOrArgumentsIdentifier(identifier)) { // We check first if the name is inside class declaration or class expression; if so give explicit message // otherwise report generic error message. const span = getErrorSpanForNode(file, name); file.bindDiagnostics.push(createFileDiagnostic(file, span.start, span.length, getStrictModeEvalOrArgumentsMessage(contextNode), idText(identifier))); } } }
Reports a strict mode error when the given name is the identifier {@code eval} or {@code arguments}. @param contextNode - The node used to select the error message to report. @param name - The name to check.
typescript
src/compiler/binder.ts
2,647
[ "contextNode", "name" ]
false
4
6.08
microsoft/TypeScript
107,154
jsdoc
false