Dataset schema (column — type — range):

function_name — stringlengths, 1 to 57
function_code — stringlengths, 20 to 4.99k
documentation — stringlengths, 50 to 2k
language — stringclasses, 5 values
file_path — stringlengths, 8 to 166
line_number — int32, 4 to 16.7k
parameters — listlengths, 0 to 20
return_type — stringlengths, 0 to 131
has_type_hints — bool, 2 classes
complexity — int32, 1 to 51
quality_score — float32, 6 to 9.68
repo_name — stringclasses, 34 values
repo_stars — int32, 2.9k to 242k
docstring_style — stringclasses, 7 values
is_async — bool, 2 classes
equals
@Override
public boolean equals(@Nullable Object other) {
    if (this == other) {
        return true;
    }
    if (other == null || getClass() != other.getClass()) {
        return false;
    }
    AbstractBeanFactoryBasedTargetSource otherTargetSource = (AbstractBeanFactoryBasedTargetSource) other;
    return (ObjectUtils.nullSafeEquals(this.beanFactory, otherTargetSource.beanFactory) &&
            ObjectUtils.nullSafeEquals(this.targetBeanName, otherTargetSource.targetBeanName));
}
Copy configuration from the other AbstractBeanFactoryBasedTargetSource object. Subclasses should override this if they wish to expose it. @param other object to copy configuration from
java
spring-aop/src/main/java/org/springframework/aop/target/AbstractBeanFactoryBasedTargetSource.java
166
[ "other" ]
true
5
6.08
spring-projects/spring-framework
59,386
javadoc
false
resetStateAndGeneration
private synchronized void resetStateAndGeneration(final String reason, final boolean shouldResetMemberId) {
    log.info("Resetting generation {}due to: {}", shouldResetMemberId ? "and member id " : "", reason);

    state = MemberState.UNJOINED;
    if (shouldResetMemberId) {
        generation = Generation.NO_GENERATION;
    } else {
        // Keep the member id since it might still be valid, to avoid waiting for the old
        // member id to leave the group until the rebalance timeout in the next rebalance.
        generation = new Generation(Generation.NO_GENERATION.generationId, generation.memberId, null);
    }
}
Get the current generation state if the group is stable, otherwise return null @return the current generation or null
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
1,061
[ "reason", "shouldResetMemberId" ]
void
true
3
6.56
apache/kafka
31,560
javadoc
false
freq
def freq(self) -> BaseOffset:
    """
    The frequency object of this PeriodDtype.

    The `freq` property returns the `BaseOffset` object that represents the
    frequency of the PeriodDtype. This frequency specifies the interval (e.g.,
    daily, monthly, yearly) associated with the Period type. It is essential
    for operations that depend on time-based calculations within a period
    index or series.

    See Also
    --------
    Period : Represents a period of time.
    PeriodIndex : Immutable ndarray holding ordinal values indicating regular
        periods.
    PeriodDtype : An ExtensionDtype for Period data.
    date_range : Return a fixed frequency range of dates.

    Examples
    --------
    >>> dtype = pd.PeriodDtype(freq="D")
    >>> dtype.freq
    <Day>
    """
    return self._freq
The frequency object of this PeriodDtype. The `freq` property returns the `BaseOffset` object that represents the frequency of the PeriodDtype. This frequency specifies the interval (e.g., daily, monthly, yearly) associated with the Period type. It is essential for operations that depend on time-based calculations within a period index or series. See Also -------- Period : Represents a period of time. PeriodIndex : Immutable ndarray holding ordinal values indicating regular periods. PeriodDtype : An ExtensionDtype for Period data. date_range : Return a fixed frequency range of dates. Examples -------- >>> dtype = pd.PeriodDtype(freq="D") >>> dtype.freq <Day>
python
pandas/core/dtypes/dtypes.py
1,076
[ "self" ]
BaseOffset
true
1
6.64
pandas-dev/pandas
47,362
unknown
false
invokeListener
protected void invokeListener(ApplicationListener<?> listener, ApplicationEvent event) {
    ErrorHandler errorHandler = getErrorHandler();
    if (errorHandler != null) {
        try {
            doInvokeListener(listener, event);
        }
        catch (Throwable err) {
            errorHandler.handleError(err);
        }
    }
    else {
        doInvokeListener(listener, event);
    }
}
Invoke the given listener with the given event. @param listener the ApplicationListener to invoke @param event the current event to propagate @since 4.1
java
spring-context/src/main/java/org/springframework/context/event/SimpleApplicationEventMulticaster.java
162
[ "listener", "event" ]
void
true
3
6.4
spring-projects/spring-framework
59,386
javadoc
false
newHasher
@Override
public Hasher newHasher() {
    Hasher[] hashers = new Hasher[functions.length];
    for (int i = 0; i < hashers.length; i++) {
        hashers[i] = functions[i].newHasher();
    }
    return fromHashers(hashers);
}
Constructs a {@code HashCode} from the {@code Hasher} objects of the functions. Each of them has consumed the entire input and they are ready to output a {@code HashCode}. The order of the hashers are the same order as the functions given to the constructor.
java
android/guava/src/com/google/common/hash/AbstractCompositeHashFunction.java
53
[]
Hasher
true
2
6.88
google/guava
51,352
javadoc
false
_permute_strides
def _permute_strides(out: torch.Tensor, query_strides: tuple[int, ...]) -> torch.Tensor:
    """
    Create a new tensor with the same data and shape as the input, but with
    strides permuted based on the input tensor's stride order.

    Args:
        out (torch.Tensor): The output tensor of attention.
        query_strides (List[int]): The stride order of the input query tensor

    Returns:
        torch.Tensor: A new tensor with same shape and data as the input,
        but with strides permuted based on the query tensor's stride order.
    """
    from torch._inductor.ir import get_fill_order

    fill_order = get_fill_order(query_strides)
    assert out.storage_offset() == 0, "Only support storage_offset == 0"
    out_strides = _construct_strides(out.shape, fill_order)
    new_out = out.new_empty(out.shape).as_strided(out.shape, out_strides)
    new_out.copy_(out)
    return new_out
Create a new tensor with the same data and shape as the input, but with strides permuted based on the input tensor's stride order. Args: out (torch.Tensor): The output tensor of attention. query_strides (List[int]): The stride order of the input query tensor Returns: torch.Tensor: A new tensor with same shape and data as the input, but with strides permuted based on the query tensor's stride order.
python
torch/_higher_order_ops/flex_attention.py
57
[ "out", "query_strides" ]
torch.Tensor
true
1
6.88
pytorch/pytorch
96,034
google
false
isAssignable
private static boolean isAssignable(final Type type, final Type toType,
        final Map<TypeVariable<?>, Type> typeVarAssigns) {
    if (toType == null || toType instanceof Class<?>) {
        return isAssignable(type, (Class<?>) toType);
    }
    if (toType instanceof ParameterizedType) {
        return isAssignable(type, (ParameterizedType) toType, typeVarAssigns);
    }
    if (toType instanceof GenericArrayType) {
        return isAssignable(type, (GenericArrayType) toType, typeVarAssigns);
    }
    if (toType instanceof WildcardType) {
        return isAssignable(type, (WildcardType) toType, typeVarAssigns);
    }
    if (toType instanceof TypeVariable<?>) {
        return isAssignable(type, (TypeVariable<?>) toType, typeVarAssigns);
    }
    throw new IllegalStateException("found an unhandled type: " + toType);
}
Tests if the subject type may be implicitly cast to the target type following the Java generics rules. @param type the subject type to be assigned to the target type. @param toType the target type. @param typeVarAssigns optional map of type variable assignments. @return {@code true} if {@code type} is assignable to {@code toType}.
java
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
1,121
[ "type", "toType", "typeVarAssigns" ]
true
7
8.08
apache/commons-lang
2,896
javadoc
false
createCaffeineCache
protected Cache createCaffeineCache(String name) {
    return (this.asyncCacheMode ?
            adaptCaffeineCache(name, createAsyncCaffeineCache(name)) :
            adaptCaffeineCache(name, createNativeCaffeineCache(name)));
}
Build a common {@link CaffeineCache} instance for the specified cache name, using the common Caffeine configuration specified on this cache manager. <p>Delegates to {@link #adaptCaffeineCache} as the adaptation method to Spring's cache abstraction (allowing for centralized decoration etc.), passing in a freshly built native Caffeine Cache instance. @param name the name of the cache @return the Spring CaffeineCache adapter (or a decorator thereof) @see #adaptCaffeineCache @see #createNativeCaffeineCache
java
spring-context-support/src/main/java/org/springframework/cache/caffeine/CaffeineCacheManager.java
372
[ "name" ]
Cache
true
2
7.36
spring-projects/spring-framework
59,386
javadoc
false
toArray
public static <T> T[] toArray(@SuppressWarnings("unchecked") final T... items) {
    return items;
}
Create a type-safe generic array. <p> The Java language does not allow an array to be created from a generic type: </p> <pre> public static &lt;T&gt; T[] createAnArray(int size) { return new T[size]; // compiler error here } public static &lt;T&gt; T[] createAnArray(int size) { return (T[]) new Object[size]; // ClassCastException at runtime } </pre> <p> Therefore new arrays of generic types can be created with this method. For example, an array of Strings can be created: </p> <pre>{@code String[] array = ArrayUtils.toArray("1", "2"); String[] emptyArray = ArrayUtils.<String>toArray(); }</pre> <p> The method is typically used in scenarios, where the caller itself uses generic types that have to be combined into an array. </p> <p> Note, this method makes only sense to provide arguments of the same type so that the compiler can deduce the type of the array itself. While it is possible to select the type explicitly like in {@code Number[] array = ArrayUtils.<Number>toArray(Integer.valueOf(42), Double.valueOf(Math.PI))}, there is no real advantage when compared to {@code new Number[] {Integer.valueOf(42), Double.valueOf(Math.PI)}}. </p> @param <T> the array's element type. @param items the varargs array items, null allowed. @return the array, not null unless a null array is passed in. @since 3.0
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
8,612
[]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
get_block_type
def get_block_type(dtype: DtypeObj) -> type[Block]:
    """
    Find the appropriate Block subclass to use for the given values and dtype.

    Parameters
    ----------
    dtype : numpy or pandas dtype

    Returns
    -------
    cls : class, subclass of Block
    """
    if isinstance(dtype, DatetimeTZDtype):
        return DatetimeLikeBlock
    elif isinstance(dtype, PeriodDtype):
        return NDArrayBackedExtensionBlock
    elif isinstance(dtype, ExtensionDtype):
        # Note: need to be sure NumpyExtensionArray is unwrapped before we get here
        return ExtensionBlock

    # We use kind checks because it is much more performant
    # than is_foo_dtype
    kind = dtype.kind
    if kind in "Mm":
        return DatetimeLikeBlock

    return NumpyBlock
Find the appropriate Block subclass to use for the given values and dtype. Parameters ---------- dtype : numpy or pandas dtype Returns ------- cls : class, subclass of Block
python
pandas/core/internals/blocks.py
2,216
[ "dtype" ]
type[Block]
true
5
7.2
pandas-dev/pandas
47,362
numpy
false
of
public static MemberPath of(String value) {
    MemberPath path = MemberPath.ROOT;
    StringBuilder buffer = new StringBuilder();
    boolean escape = false;
    for (char ch : value.toCharArray()) {
        if (!escape && ch == '\\') {
            escape = true;
        }
        else if (!escape && (ch == '.' || ch == '[')) {
            path = path.child(buffer.toString());
            buffer.setLength(0);
        }
        else if (!escape && ch == ']') {
            path = path.child(Integer.parseUnsignedInt(buffer.toString()));
            buffer.setLength(0);
        }
        else {
            buffer.append(ch);
            escape = false;
        }
    }
    path = path.child(buffer.toString());
    return path;
}
Create a new {@link MemberPath} instance from the given string. @param value the path value @return a new {@link MemberPath} instance
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
857
[ "value" ]
MemberPath
true
8
7.92
spring-projects/spring-boot
79,428
javadoc
false
add
Headers add(Header header) throws IllegalStateException;
Adds a header (key inside), to the end, returning if the operation succeeded. @param header the Header to be added. @return this instance of the Headers, once the header is added. @throws IllegalStateException is thrown if headers are in a read-only state.
java
clients/src/main/java/org/apache/kafka/common/header/Headers.java
34
[ "header" ]
Headers
true
1
6.48
apache/kafka
31,560
javadoc
false
doWith
@Nullable T doWith(MainClass mainClass);
Handle the specified main class. @param mainClass the main class @return a non-null value if processing should end or {@code null} to continue
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/MainClassFinder.java
370
[ "mainClass" ]
T
true
1
6.48
spring-projects/spring-boot
79,428
javadoc
false
getClassPathUrls
@Override
public Set<URL> getClassPathUrls(Predicate<Entry> includeFilter, Predicate<Entry> directorySearchFilter)
        throws IOException {
    Set<URL> urls = new LinkedHashSet<>();
    LinkedList<File> files = new LinkedList<>(listFiles(this.rootDirectory));
    while (!files.isEmpty()) {
        File file = files.poll();
        if (SKIPPED_NAMES.contains(file.getName())) {
            continue;
        }
        String entryName = file.toURI().getPath().substring(this.rootUriPath.length());
        Entry entry = new FileArchiveEntry(entryName, file);
        if (entry.isDirectory() && directorySearchFilter.test(entry)) {
            files.addAll(0, listFiles(file));
        }
        if (includeFilter.test(entry)) {
            urls.add(file.toURI().toURL());
        }
    }
    return urls;
}
Create a new {@link ExplodedArchive} instance. @param rootDirectory the root directory
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/launch/ExplodedArchive.java
86
[ "includeFilter", "directorySearchFilter" ]
true
6
6.08
spring-projects/spring-boot
79,428
javadoc
false
searchsorted
def searchsorted(
    self,
    value: NumpyValueArrayLike | ExtensionArray,
    side: Literal["left", "right"] = "left",
    sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
    """
    Find indices where elements should be inserted to maintain order.

    Find the indices into a sorted array `self` (a) such that, if the
    corresponding elements in `value` were inserted before the indices,
    the order of `self` would be preserved.

    Assuming that `self` is sorted:

    ======  ================================
    `side`  returned index `i` satisfies
    ======  ================================
    left    ``self[i-1] < value <= self[i]``
    right   ``self[i-1] <= value < self[i]``
    ======  ================================

    Parameters
    ----------
    value : array-like, list or scalar
        Value(s) to insert into `self`.
    side : {'left', 'right'}, optional
        If 'left', the index of the first suitable location found is given.
        If 'right', return the last such index.  If there is no suitable
        index, return either 0 or N (where N is the length of `self`).
    sorter : 1-D array-like, optional
        Optional array of integer indices that sort array a into ascending
        order. They are typically the result of argsort.

    Returns
    -------
    array of ints or int
        If value is array-like, array of insertion points.
        If value is scalar, a single integer.

    See Also
    --------
    numpy.searchsorted : Similar method from NumPy.

    Examples
    --------
    >>> arr = pd.array([1, 2, 3, 5])
    >>> arr.searchsorted([4])
    array([3])
    """
    # Note: the base tests provided by pandas only test the basics.
    # We do not test
    # 1. Values outside the range of the `data_for_sorting` fixture
    # 2. Values between the values in the `data_for_sorting` fixture
    # 3. Missing values.
    arr = self.astype(object)
    if isinstance(value, ExtensionArray):
        value = value.astype(object)
    return arr.searchsorted(value, side=side, sorter=sorter)
Find indices where elements should be inserted to maintain order. Find the indices into a sorted array `self` (a) such that, if the corresponding elements in `value` were inserted before the indices, the order of `self` would be preserved. Assuming that `self` is sorted: ====== ================================ `side` returned index `i` satisfies ====== ================================ left ``self[i-1] < value <= self[i]`` right ``self[i-1] <= value < self[i]`` ====== ================================ Parameters ---------- value : array-like, list or scalar Value(s) to insert into `self`. side : {'left', 'right'}, optional If 'left', the index of the first suitable location found is given. If 'right', return the last such index. If there is no suitable index, return either 0 or N (where N is the length of `self`). sorter : 1-D array-like, optional Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort. Returns ------- array of ints or int If value is array-like, array of insertion points. If value is scalar, a single integer. See Also -------- numpy.searchsorted : Similar method from NumPy. Examples -------- >>> arr = pd.array([1, 2, 3, 5]) >>> arr.searchsorted([4]) array([3])
python
pandas/core/arrays/base.py
1,461
[ "self", "value", "side", "sorter" ]
npt.NDArray[np.intp] | np.intp
true
2
8.4
pandas-dev/pandas
47,362
numpy
false
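The `searchsorted` contract documented above (left: ``self[i-1] < value <= self[i]``; right: ``self[i-1] <= value < self[i]``) is the same one Python's standard-library `bisect` module implements for plain lists. A minimal sketch of the two sides, using a sorted list rather than a pandas ExtensionArray:

```python
import bisect

arr = [1, 2, 3, 5]

# Inserting 4 falls between 3 and 5; both sides agree when the value is absent.
left = bisect.bisect_left(arr, 4)    # first suitable insertion point
right = bisect.bisect_right(arr, 4)  # last suitable insertion point

# With duplicates present, the two sides differ:
dup = [1, 2, 2, 3]
dup_left = bisect.bisect_left(dup, 2)    # before the run of 2s
dup_right = bisect.bisect_right(dup, 2)  # after the run of 2s
```

Here `left == right == 3`, matching the `arr.searchsorted([4]) -> array([3])` example in the docstring, while on the duplicated list `dup_left` is 1 and `dup_right` is 3.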
dedupAndCoalesceAndDeleteEmpty
private void dedupAndCoalesceAndDeleteEmpty() {
    dedupAndCoalesce(false);

    // If there was a setCount(elem, 0), those elements are still present. Eliminate them.
    int size = 0;
    for (int i = 0; i < length; i++) {
        if (counts[i] > 0) {
            elements[size] = elements[i];
            counts[size] = counts[i];
            size++;
        }
    }
    Arrays.fill(elements, size, length, null);
    Arrays.fill(counts, size, length, 0);
    length = size;
}
Adds each element of {@code elements} to the {@code ImmutableSortedMultiset}. @param elements the elements to add to the {@code ImmutableSortedMultiset} @return this {@code Builder} object @throws NullPointerException if {@code elements} is null or contains a null element
java
android/guava/src/com/google/common/collect/ImmutableSortedMultiset.java
668
[]
void
true
3
7.44
google/guava
51,352
javadoc
false
apply
def apply(self, parent, **kwargs):
    """Apply the steps in this blueprint to an object.

    This will apply the ``__init__`` and ``include`` methods
    of each step, with the object as argument::

        step = Step(obj)
        ...
        step.include(obj)

    For :class:`StartStopStep` the services created
    will also be added to the objects ``steps`` attribute.
    """
    self._debug('Preparing bootsteps.')
    order = self.order = []
    steps = self.steps = self.claim_steps()

    self._debug('Building graph...')
    for S in self._finalize_steps(steps):
        step = S(parent, **kwargs)
        steps[step.name] = step
        order.append(step)

    self._debug('New boot order: {%s}', ', '.join(s.alias for s in self.order))
    for step in order:
        step.include(parent)
    return self
Apply the steps in this blueprint to an object. This will apply the ``__init__`` and ``include`` methods of each step, with the object as argument:: step = Step(obj) ... step.include(obj) For :class:`StartStopStep` the services created will also be added to the objects ``steps`` attribute.
python
celery/bootsteps.py
186
[ "self", "parent" ]
false
3
6.08
celery/celery
27,741
unknown
false
set_caption
def set_caption(self, caption: str | tuple | list) -> Styler:
    """
    Set the text added to a ``<caption>`` HTML element.

    Parameters
    ----------
    caption : str, tuple, list
        For HTML output either the string input is used or the first element
        of the tuple. For LaTeX the string input provides a caption and the
        additional tuple input allows for full captions and short captions,
        in that order.

    Returns
    -------
    Styler
        Instance of class with text set for ``<caption>`` HTML element.

    See Also
    --------
    Styler.set_td_classes : Set the ``class`` attribute of ``<td>`` HTML elements.
    Styler.set_tooltips : Set the DataFrame of strings on ``Styler`` generating
        ``:hover`` tooltips.
    Styler.set_uuid : Set the uuid applied to ``id`` attributes of HTML elements.

    Examples
    --------
    >>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
    >>> df.style.set_caption("test")  # doctest: +SKIP

    Please see:
    `Table Visualization <../../user_guide/style.ipynb>`_ for more examples.
    """
    msg = "`caption` must be either a string or 2-tuple of strings."
    if isinstance(caption, (list, tuple)):
        if (
            len(caption) != 2
            or not isinstance(caption[0], str)
            or not isinstance(caption[1], str)
        ):
            raise ValueError(msg)
    elif not isinstance(caption, str):
        raise ValueError(msg)
    self.caption = caption
    return self
Set the text added to a ``<caption>`` HTML element. Parameters ---------- caption : str, tuple, list For HTML output either the string input is used or the first element of the tuple. For LaTeX the string input provides a caption and the additional tuple input allows for full captions and short captions, in that order. Returns ------- Styler Instance of class with text set for ``<caption>`` HTML element. See Also -------- Styler.set_td_classes : Set the ``class`` attribute of ``<td>`` HTML elements. Styler.set_tooltips : Set the DataFrame of strings on ``Styler`` generating ``:hover`` tooltips. Styler.set_uuid : Set the uuid applied to ``id`` attributes of HTML elements. Examples -------- >>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]}) >>> df.style.set_caption("test") # doctest: +SKIP Please see: `Table Visualization <../../user_guide/style.ipynb>`_ for more examples.
python
pandas/io/formats/style.py
2,402
[ "self", "caption" ]
Styler
true
6
8.48
pandas-dev/pandas
47,362
numpy
false
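The input check in `set_caption` above (accept a plain string, or a 2-tuple/list of strings for LaTeX full and short captions) can be exercised in isolation. This is a sketch extracted from the function body into a hypothetical standalone helper, `check_caption`, which is not part of the pandas API:

```python
def check_caption(caption):
    """Validate a caption the way set_caption does: a str, or a 2-item
    tuple/list whose elements are both str. Returns the caption unchanged
    if valid, raises ValueError otherwise."""
    msg = "`caption` must be either a string or 2-tuple of strings."
    if isinstance(caption, (list, tuple)):
        if (
            len(caption) != 2
            or not isinstance(caption[0], str)
            or not isinstance(caption[1], str)
        ):
            raise ValueError(msg)
    elif not isinstance(caption, str):
        raise ValueError(msg)
    return caption
```

Under these rules `check_caption("test")` and `check_caption(("full", "short"))` pass, while a one-element tuple or a non-string raises `ValueError`.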
fit
public int fit(final int element) { return super.fit(element).intValue(); }
Fits the given value into this range by returning the given value or, if out of bounds, the range minimum if below, or the range maximum if above. <pre>{@code IntegerRange range = IntegerRange.of(16, 64); range.fit(-9) --> 16 range.fit(0) --> 16 range.fit(15) --> 16 range.fit(16) --> 16 range.fit(17) --> 17 ... range.fit(63) --> 63 range.fit(64) --> 64 range.fit(99) --> 64 }</pre> @param element the element to test. @return the minimum, the element, or the maximum depending on the element's location relative to the range. @since 3.19.0
java
src/main/java/org/apache/commons/lang3/IntegerRange.java
107
[ "element" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
values
public static List<Boolean> values() { return BOOLEAN_LIST; }
Returns an unmodifiable list of Booleans {@code [false, true]}. @return an unmodifiable list of Booleans {@code [false, true]}. @since 3.13.0
java
src/main/java/org/apache/commons/lang3/BooleanUtils.java
1,149
[]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
_determine_resource
def _determine_resource() -> tuple[str, str]:
    """Determine the type of resource based on which values are present."""
    if self.dagrun_id:
        # The deadline is for a Dag run:
        return "DagRun", f"Dag: {self.dagrun.dag_id} Run: {self.dagrun_id}"

    return "Unknown", ""
Determine the type of resource based on which values are present.
python
airflow-core/src/airflow/models/deadline.py
119
[]
tuple[str, str]
true
2
7.2
apache/airflow
43,597
unknown
false
findCacheOperation
protected abstract @Nullable JCacheOperation<?> findCacheOperation(Method method, @Nullable Class<?> targetType);
Subclasses need to implement this to return the caching operation for the given method, if any. @param method the method to retrieve the operation for @param targetType the target class @return the cache operation associated with this method (or {@code null} if none)
java
spring-context-support/src/main/java/org/springframework/cache/jcache/interceptor/AbstractFallbackJCacheOperationSource.java
131
[ "method", "targetType" ]
true
1
6.16
spring-projects/spring-framework
59,386
javadoc
false
close
JSONStringer close(Scope empty, Scope nonempty, String closeBracket) throws JSONException {
    Scope context = peek();
    if (context != nonempty && context != empty) {
        throw new JSONException("Nesting problem");
    }
    this.stack.remove(this.stack.size() - 1);
    if (context == nonempty) {
        newline();
    }
    this.out.append(closeBracket);
    return this;
}
Closes the current scope by appending any necessary whitespace and the given bracket. @param empty any necessary whitespace @param nonempty the current scope @param closeBracket the close bracket @return the JSON stringer @throws JSONException if processing of json failed
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONStringer.java
194
[ "empty", "nonempty", "closeBracket" ]
JSONStringer
true
4
7.44
spring-projects/spring-boot
79,428
javadoc
false
equals
def equals(self, other: object) -> bool:
    """
    Determine if two CategoricalIndex objects contain the same elements.

    The order and orderedness of elements matters. The categories matter,
    but the order of the categories matters only when ``ordered=True``.

    Parameters
    ----------
    other : object
        The CategoricalIndex object to compare with.

    Returns
    -------
    bool
        ``True`` if two :class:`pandas.CategoricalIndex` objects have equal
        elements, ``False`` otherwise.

    See Also
    --------
    Categorical.equals : Returns True if categorical arrays are equal.

    Examples
    --------
    >>> ci = pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"])
    >>> ci2 = pd.CategoricalIndex(pd.Categorical(["a", "b", "c", "a", "b", "c"]))
    >>> ci.equals(ci2)
    True

    The order of elements matters.

    >>> ci3 = pd.CategoricalIndex(["c", "b", "a", "a", "b", "c"])
    >>> ci.equals(ci3)
    False

    The orderedness also matters.

    >>> ci4 = ci.as_ordered()
    >>> ci.equals(ci4)
    False

    The categories matter, but the order of the categories matters only when
    ``ordered=True``.

    >>> ci5 = ci.set_categories(["a", "b", "c", "d"])
    >>> ci.equals(ci5)
    False

    >>> ci6 = ci.set_categories(["b", "c", "a"])
    >>> ci.equals(ci6)
    True

    >>> ci_ordered = pd.CategoricalIndex(
    ...     ["a", "b", "c", "a", "b", "c"], ordered=True
    ... )
    >>> ci2_ordered = ci_ordered.set_categories(["b", "c", "a"])
    >>> ci_ordered.equals(ci2_ordered)
    False
    """
    if self.is_(other):
        return True
    if not isinstance(other, Index):
        return False
    try:
        other = self._is_dtype_compat(other)
    except (TypeError, ValueError):
        return False

    return self._data.equals(other)
Determine if two CategoricalIndex objects contain the same elements. The order and orderedness of elements matters. The categories matter, but the order of the categories matters only when ``ordered=True``. Parameters ---------- other : object The CategoricalIndex object to compare with. Returns ------- bool ``True`` if two :class:`pandas.CategoricalIndex` objects have equal elements, ``False`` otherwise. See Also -------- Categorical.equals : Returns True if categorical arrays are equal. Examples -------- >>> ci = pd.CategoricalIndex(["a", "b", "c", "a", "b", "c"]) >>> ci2 = pd.CategoricalIndex(pd.Categorical(["a", "b", "c", "a", "b", "c"])) >>> ci.equals(ci2) True The order of elements matters. >>> ci3 = pd.CategoricalIndex(["c", "b", "a", "a", "b", "c"]) >>> ci.equals(ci3) False The orderedness also matters. >>> ci4 = ci.as_ordered() >>> ci.equals(ci4) False The categories matter, but the order of the categories matters only when ``ordered=True``. >>> ci5 = ci.set_categories(["a", "b", "c", "d"]) >>> ci.equals(ci5) False >>> ci6 = ci.set_categories(["b", "c", "a"]) >>> ci.equals(ci6) True >>> ci_ordered = pd.CategoricalIndex( ... ["a", "b", "c", "a", "b", "c"], ordered=True ... ) >>> ci2_ordered = ci_ordered.set_categories(["b", "c", "a"]) >>> ci_ordered.equals(ci2_ordered) False
python
pandas/core/indexes/category.py
275
[ "self", "other" ]
bool
true
3
7.76
pandas-dev/pandas
47,362
numpy
false
hasStdinWithoutTty
function hasStdinWithoutTty(): boolean {
    try {
        return !process.stdin.isTTY; // Via https://twitter.com/MylesBorins/status/782009479382626304
    } catch (error) {
        // Windows workaround for https://github.com/nodejs/node/issues/11656
    }
    return false;
}
Starting at the `start` port, look for a free port incrementing by 1 until `end` inclusive. If no free port is found, undefined is returned.
typescript
src/server-main.ts
257
[]
true
2
7.2
microsoft/vscode
179,840
jsdoc
false
determineBasicProperties
public static Collection<? extends PropertyDescriptor> determineBasicProperties(Class<?> beanClass)
        throws IntrospectionException {
    Map<String, BasicPropertyDescriptor> pdMap = new TreeMap<>();

    for (Method method : beanClass.getMethods()) {
        String methodName = method.getName();

        boolean setter;
        int nameIndex;
        if (methodName.startsWith("set") && method.getParameterCount() == 1) {
            setter = true;
            nameIndex = 3;
        }
        else if (methodName.startsWith("get") && method.getParameterCount() == 0 &&
                method.getReturnType() != void.class) {
            setter = false;
            nameIndex = 3;
        }
        else if (methodName.startsWith("is") && method.getParameterCount() == 0 &&
                method.getReturnType() == boolean.class) {
            setter = false;
            nameIndex = 2;
        }
        else {
            continue;
        }

        String propertyName = StringUtils.uncapitalizeAsProperty(methodName.substring(nameIndex));
        if (propertyName.isEmpty()) {
            continue;
        }

        BasicPropertyDescriptor pd = pdMap.get(propertyName);
        if (pd != null) {
            if (setter) {
                Method writeMethod = pd.getWriteMethod();
                if (writeMethod == null ||
                        writeMethod.getParameterTypes()[0].isAssignableFrom(method.getParameterTypes()[0])) {
                    pd.setWriteMethod(method);
                }
                else {
                    pd.addWriteMethod(method);
                }
            }
            else {
                Method readMethod = pd.getReadMethod();
                if (readMethod == null ||
                        (readMethod.getReturnType() == method.getReturnType() && method.getName().startsWith("is"))) {
                    pd.setReadMethod(method);
                }
            }
        }
        else {
            pd = new BasicPropertyDescriptor(propertyName, (!setter ? method : null), (setter ? method : null));
            pdMap.put(propertyName, pd);
        }
    }

    return pdMap.values();
}
Simple introspection algorithm for basic set/get/is accessor methods, building corresponding JavaBeans property descriptors for them. <p>This just supports the basic JavaBeans conventions, without indexed properties or any customizers, and without other BeanInfo metadata. For standard JavaBeans introspection, use the JavaBeans Introspector. @param beanClass the target class to introspect @return a collection of property descriptors @throws IntrospectionException from introspecting the given bean class @since 5.3.24 @see SimpleBeanInfoFactory @see java.beans.Introspector#getBeanInfo(Class)
java
spring-beans/src/main/java/org/springframework/beans/PropertyDescriptorUtils.java
58
[ "beanClass" ]
true
19
6.16
spring-projects/spring-framework
59,386
javadoc
false
shouldUseTypeOnly
function shouldUseTypeOnly(info: { addAsTypeOnly: AddAsTypeOnly; }, preferences: UserPreferences): boolean {
    return needsTypeOnly(info) ||
        !!preferences.preferTypeOnlyAutoImports && info.addAsTypeOnly !== AddAsTypeOnly.NotAllowed;
}
@param forceImportKeyword Indicates that the user has already typed `import`, so the result must start with `import`. (In other words, do not allow `const x = require("...")` for JS files.) @internal
typescript
src/services/codefixes/importFixes.ts
2,033
[ "info", "preferences" ]
true
3
6.48
microsoft/TypeScript
107,154
jsdoc
false
timestamp
public Optional<Long> timestamp() {
    if (type == StrategyType.EARLIEST)
        return Optional.of(ListOffsetsRequest.EARLIEST_TIMESTAMP);
    else if (type == StrategyType.LATEST)
        return Optional.of(ListOffsetsRequest.LATEST_TIMESTAMP);
    else if (type == StrategyType.BY_DURATION && duration.isPresent()) {
        Instant now = Instant.now();
        return Optional.of(now.minus(duration.get()).toEpochMilli());
    } else
        return Optional.empty();
}
Return the timestamp to be used for the ListOffsetsRequest. @return the timestamp for the OffsetResetStrategy, if the strategy is EARLIEST or LATEST or duration is provided else return Optional.empty()
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AutoOffsetResetStrategy.java
121
[]
true
5
7.6
apache/kafka
31,560
javadoc
false
ndindex
def ndindex(*x: int) -> Generator[tuple[int, ...]]:
    """
    Generate all N-dimensional indices for a given array shape.

    Given the shape of an array, an ndindex instance iterates over the
    N-dimensional index of the array. At each iteration a tuple of indices is
    returned, the last dimension is iterated over first.

    This has an identical API to numpy.ndindex.

    Parameters
    ----------
    *x : int
        The shape of the array.
    """
    if not x:
        yield ()
        return
    for i in ndindex(*x[:-1]):
        for j in range(x[-1]):
            yield *i, j
Generate all N-dimensional indices for a given array shape. Given the shape of an array, an ndindex instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned, the last dimension is iterated over first. This has an identical API to numpy.ndindex. Parameters ---------- *x : int The shape of the array.
python
sklearn/externals/array_api_extra/_lib/_utils/_helpers.py
230
[]
Generator[tuple[int, ...]]
true
4
6.72
scikit-learn/scikit-learn
64,340
numpy
false
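The `ndindex` row above is self-contained pure Python, so its recursive iteration order can be checked directly. This is a restatement of that function for verification, not scikit-learn's vendored copy:

```python
from collections.abc import Generator

def ndindex(*shape: int) -> Generator[tuple[int, ...]]:
    # Base case: an empty shape yields the single empty index ().
    if not shape:
        yield ()
        return
    # Recurse over all but the last dimension, then iterate the last
    # dimension innermost so it varies fastest (C order, like numpy.ndindex).
    for head in ndindex(*shape[:-1]):
        for j in range(shape[-1]):
            yield *head, j
```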
to_period
def to_period(self, freq=None) -> PeriodArray: """ Cast to PeriodArray/PeriodIndex at a particular frequency. Converts DatetimeArray/Index to PeriodArray/PeriodIndex. Parameters ---------- freq : str or Period, optional One of pandas' :ref:`period aliases <timeseries.period_aliases>` or a Period object. Will be inferred by default. Returns ------- PeriodArray/PeriodIndex Immutable ndarray holding ordinal values at a particular frequency. Raises ------ ValueError When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See Also -------- PeriodIndex: Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime: Return DatetimeIndex as object. Examples -------- >>> df = pd.DataFrame( ... {"y": [1, 2, 3]}, ... index=pd.to_datetime( ... [ ... "2000-03-31 00:00:00", ... "2000-05-31 00:00:00", ... "2000-08-31 00:00:00", ... ] ... ), ... ) >>> df.index.to_period("M") PeriodIndex(['2000-03', '2000-05', '2000-08'], dtype='period[M]') Infer the daily frequency >>> idx = pd.date_range("2017-01-01", periods=2) >>> idx.to_period() PeriodIndex(['2017-01-01', '2017-01-02'], dtype='period[D]') """ from pandas.core.arrays import PeriodArray if self.tz is not None: warnings.warn( "Converting to PeriodArray/Index representation " "will drop timezone information.", UserWarning, stacklevel=find_stack_level(), ) if freq is None: freq = self.freqstr or self.inferred_freq if isinstance(self.freq, BaseOffset) and hasattr( self.freq, "_period_dtype_code" ): freq = PeriodDtype(self.freq)._freqstr if freq is None: raise ValueError( "You must pass a freq argument as current index has none." ) res = get_period_alias(freq) # https://github.com/pandas-dev/pandas/issues/33358 if res is None: res = freq freq = res return PeriodArray._from_datetime64(self._ndarray, freq, tz=self.tz)
Cast to PeriodArray/PeriodIndex at a particular frequency. Converts DatetimeArray/Index to PeriodArray/PeriodIndex. Parameters ---------- freq : str or Period, optional One of pandas' :ref:`period aliases <timeseries.period_aliases>` or a Period object. Will be inferred by default. Returns ------- PeriodArray/PeriodIndex Immutable ndarray holding ordinal values at a particular frequency. Raises ------ ValueError When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See Also -------- PeriodIndex: Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime: Return DatetimeIndex as object. Examples -------- >>> df = pd.DataFrame( ... {"y": [1, 2, 3]}, ... index=pd.to_datetime( ... [ ... "2000-03-31 00:00:00", ... "2000-05-31 00:00:00", ... "2000-08-31 00:00:00", ... ] ... ), ... ) >>> df.index.to_period("M") PeriodIndex(['2000-03', '2000-05', '2000-08'], dtype='period[M]') Infer the daily frequency >>> idx = pd.date_range("2017-01-01", periods=2) >>> idx.to_period() PeriodIndex(['2017-01-01', '2017-01-02'], dtype='period[D]')
python
pandas/core/arrays/datetimes.py
1,202
[ "self", "freq" ]
PeriodArray
true
8
7.76
pandas-dev/pandas
47,362
numpy
false
get_jit_arguments
def get_jit_arguments(engine_kwargs: dict[str, bool] | None = None) -> dict[str, bool]: """ Return arguments to pass to numba.JIT, falling back on pandas default JIT settings. Parameters ---------- engine_kwargs : dict, default None user passed keyword arguments for numba.JIT Returns ------- dict[str, bool] nopython, nogil, parallel Raises ------ NumbaUtilError """ if engine_kwargs is None: engine_kwargs = {} nopython = engine_kwargs.get("nopython", True) nogil = engine_kwargs.get("nogil", False) parallel = engine_kwargs.get("parallel", False) return {"nopython": nopython, "nogil": nogil, "parallel": parallel}
Return arguments to pass to numba.JIT, falling back on pandas default JIT settings. Parameters ---------- engine_kwargs : dict, default None user passed keyword arguments for numba.JIT Returns ------- dict[str, bool] nopython, nogil, parallel Raises ------ NumbaUtilError
python
pandas/core/util/numba_.py
32
[ "engine_kwargs" ]
dict[str, bool]
true
2
6.24
pandas-dev/pandas
47,362
numpy
false
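The `get_jit_arguments` row is a pure defaults-merging function, so its behavior can be mirrored and checked without numba or pandas installed:

```python
def get_jit_arguments(engine_kwargs=None):
    # Mirror the defaults shown in the row above:
    # nopython on by default, nogil/parallel off by default.
    if engine_kwargs is None:
        engine_kwargs = {}
    return {
        "nopython": engine_kwargs.get("nopython", True),
        "nogil": engine_kwargs.get("nogil", False),
        "parallel": engine_kwargs.get("parallel", False),
    }
```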
nankurt
def nankurt( values: np.ndarray, *, axis: AxisInt | None = None, skipna: bool = True, mask: npt.NDArray[np.bool_] | None = None, ) -> float: """ Compute the sample excess kurtosis The statistic computed here is the adjusted Fisher-Pearson standardized moment coefficient G2, computed directly from the second and fourth central moment. Parameters ---------- values : ndarray axis : int, optional skipna : bool, default True mask : ndarray[bool], optional nan-mask if known Returns ------- result : float64 Unless input is a float array, in which case use the same precision as the input array. Examples -------- >>> from pandas.core import nanops >>> s = pd.Series([1, np.nan, 1, 3, 2]) >>> nanops.nankurt(s.values) np.float64(-1.2892561983471076) """ mask = _maybe_get_mask(values, skipna, mask) if values.dtype.kind != "f": values = values.astype("f8") count = _get_counts(values.shape, mask, axis) else: count = _get_counts(values.shape, mask, axis, dtype=values.dtype) if skipna and mask is not None: values = values.copy() np.putmask(values, mask, 0) elif not skipna and mask is not None and mask.any(): return np.nan with np.errstate(invalid="ignore", divide="ignore"): mean = values.sum(axis, dtype=np.float64) / count if axis is not None: mean = np.expand_dims(mean, axis) adjusted = values - mean if skipna and mask is not None: np.putmask(adjusted, mask, 0) adjusted2 = adjusted**2 adjusted4 = adjusted2**2 m2 = adjusted2.sum(axis, dtype=np.float64) m4 = adjusted4.sum(axis, dtype=np.float64) # Several floating point errors may occur during the summation due to rounding. # This computation is similar to the one in Scipy # https://github.com/scipy/scipy/blob/04d6d9c460b1fed83f2919ecec3d743cfa2e8317/scipy/stats/_stats_py.py#L1429 # With a few modifications, like using the maximum value instead of the averages # and some adaptations because they use the average and we use the sum for `m2`. # We need to estimate an upper bound to the error to consider the data constant. # Let's call: # x: true value in data # y: floating point representation # e: relative approximation error # n: number of observations in array # # We have that: # |x - y|/|x| <= e (See https://en.wikipedia.org/wiki/Machine_epsilon) # (|x - y|/|x|)² <= e² # Σ (|x - y|/|x|)² <= ne² # # Let's say that the fperr upper bound for m2 is constrained by the summation. # |m2 - y|/|m2| <= ne² # |m2 - y| <= n|m2|e² # # We will use max (x²) to estimate |m2| max_abs = np.abs(values).max(axis, initial=0.0) eps = np.finfo(m2.dtype).eps constant_tolerance2 = ((eps * max_abs) ** 2) * count constant_tolerance4 = ((eps * max_abs) ** 4) * count m2 = _zero_out_fperr(m2, constant_tolerance2) m4 = _zero_out_fperr(m4, constant_tolerance4) with np.errstate(invalid="ignore", divide="ignore"): adj = 3 * (count - 1) ** 2 / ((count - 2) * (count - 3)) numerator = count * (count + 1) * (count - 1) * m4 denominator = (count - 2) * (count - 3) * m2**2 if not isinstance(denominator, np.ndarray): # if ``denom`` is a scalar, check these corner cases first before # doing division if count < 4: return np.nan if denominator == 0: return values.dtype.type(0) with np.errstate(invalid="ignore", divide="ignore"): result = numerator / denominator - adj dtype = values.dtype if dtype.kind == "f": result = result.astype(dtype, copy=False) if isinstance(result, np.ndarray): result = np.where(denominator == 0, 0, result) result[count < 4] = np.nan return result
Compute the sample excess kurtosis The statistic computed here is the adjusted Fisher-Pearson standardized moment coefficient G2, computed directly from the second and fourth central moment. Parameters ---------- values : ndarray axis : int, optional skipna : bool, default True mask : ndarray[bool], optional nan-mask if known Returns ------- result : float64 Unless input is a float array, in which case use the same precision as the input array. Examples -------- >>> from pandas.core import nanops >>> s = pd.Series([1, np.nan, 1, 3, 2]) >>> nanops.nankurt(s.values) np.float64(-1.2892561983471076)
python
pandas/core/nanops.py
1,303
[ "values", "axis", "skipna", "mask" ]
float
true
16
7.04
pandas-dev/pandas
47,362
numpy
false
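The docstring example in the `nankurt` row can be reproduced without NumPy. This is a minimal 1-D sketch of the adjusted Fisher-Pearson G2 formula with NaNs skipped; it omits the floating-point-tolerance guards and axis handling of the full implementation:

```python
import math

def nankurt_1d(values):
    # Sample excess kurtosis (adjusted Fisher-Pearson G2), NaNs skipped.
    xs = [v for v in values if not math.isnan(v)]
    n = len(xs)
    if n < 4:
        return float("nan")  # G2 is undefined below 4 observations
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs)  # second central moment (sum form)
    m4 = sum((x - mean) ** 4 for x in xs)  # fourth central moment (sum form)
    if m2 == 0:
        return 0.0  # constant data
    adj = 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return n * (n + 1) * (n - 1) * m4 / ((n - 2) * (n - 3) * m2**2) - adj
```

This reproduces the docstring's `-1.2892561983471076` for `[1, nan, 1, 3, 2]`.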
mean_poisson_deviance
def mean_poisson_deviance(y_true, y_pred, *, sample_weight=None): """Mean Poisson deviance regression loss. Poisson deviance is equivalent to the Tweedie deviance with the power parameter `power=1`. Read more in the :ref:`User Guide <mean_tweedie_deviance>`. Parameters ---------- y_true : array-like of shape (n_samples,) Ground truth (correct) target values. Requires y_true >= 0. y_pred : array-like of shape (n_samples,) Estimated target values. Requires y_pred > 0. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- loss : float A non-negative floating point value (the best value is 0.0). Examples -------- >>> from sklearn.metrics import mean_poisson_deviance >>> y_true = [2, 0, 1, 4] >>> y_pred = [0.5, 0.5, 2., 2.] >>> mean_poisson_deviance(y_true, y_pred) 1.4260... """ return mean_tweedie_deviance(y_true, y_pred, sample_weight=sample_weight, power=1)
Mean Poisson deviance regression loss. Poisson deviance is equivalent to the Tweedie deviance with the power parameter `power=1`. Read more in the :ref:`User Guide <mean_tweedie_deviance>`. Parameters ---------- y_true : array-like of shape (n_samples,) Ground truth (correct) target values. Requires y_true >= 0. y_pred : array-like of shape (n_samples,) Estimated target values. Requires y_pred > 0. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- loss : float A non-negative floating point value (the best value is 0.0). Examples -------- >>> from sklearn.metrics import mean_poisson_deviance >>> y_true = [2, 0, 1, 4] >>> y_pred = [0.5, 0.5, 2., 2.] >>> mean_poisson_deviance(y_true, y_pred) 1.4260...
python
sklearn/metrics/_regression.py
1,494
[ "y_true", "y_pred", "sample_weight" ]
false
1
6
scikit-learn/scikit-learn
64,340
numpy
false
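The `mean_poisson_deviance` row delegates to the Tweedie deviance with `power=1`, whose closed form is `2 * (y*log(y/mu) - y + mu)` per sample. A sketch without sample weights, checking the docstring's example value:

```python
import math

def mean_poisson_deviance(y_true, y_pred):
    # Unit Poisson deviance: 2 * (y*log(y/mu) - y + mu); the y*log(y/mu)
    # term is taken as 0 when y == 0 (its limit as y -> 0).
    def unit(y, mu):
        term = y * math.log(y / mu) if y > 0 else 0.0
        return 2.0 * (term - y + mu)
    return sum(unit(y, mu) for y, mu in zip(y_true, y_pred)) / len(y_true)
```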
_get_time_truncation_expression
def _get_time_truncation_expression( self, column: InstrumentedAttribute[datetime | None], granularity: Literal["hourly", "daily"], dialect: str | None, ) -> sa.sql.elements.ColumnElement: """ Get database-specific time truncation expression for SQLAlchemy. We want to return always timestamp for both hourly and daily truncation. Unfortunately different databases have different functions for truncating datetime, so we need to handle them separately. Args: column: The datetime column to truncate granularity: Either "hourly" or "daily" dialect: Database dialect ("postgresql", "mysql", "sqlite") Returns: SQLAlchemy expression for time truncation Raises: ValueError: If the dialect is not supported """ if granularity == "hourly": if dialect == "postgresql": expression = sa.func.date_trunc("hour", column) elif dialect == "mysql": expression = sa.func.date_format(column, "%Y-%m-%dT%H:00:00Z") elif dialect == "sqlite": expression = sa.func.strftime("%Y-%m-%dT%H:00:00Z", column) else: raise ValueError(f"Unsupported dialect: {dialect}") else: if dialect == "postgresql": expression = sa.func.timezone("UTC", sa.func.cast(sa.func.cast(column, sa.Date), sa.DateTime)) elif dialect == "mysql": expression = sa.func.date_format(column, "%Y-%m-%dT%00:00:00Z") elif dialect == "sqlite": expression = sa.func.strftime("%Y-%m-%dT00:00:00Z", column) else: raise ValueError(f"Unsupported dialect: {dialect}") return expression
Get database-specific time truncation expression for SQLAlchemy. We want to return always timestamp for both hourly and daily truncation. Unfortunately different databases have different functions for truncating datetime, so we need to handle them separately. Args: column: The datetime column to truncate granularity: Either "hourly" or "daily" dialect: Database dialect ("postgresql", "mysql", "sqlite") Returns: SQLAlchemy expression for time truncation Raises: ValueError: If the dialect is not supported
python
airflow-core/src/airflow/api_fastapi/core_api/services/ui/calendar.py
253
[ "self", "column", "granularity", "dialect" ]
sa.sql.elements.ColumnElement
true
11
7.44
apache/airflow
43,597
google
false
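The SQLite branch of `_get_time_truncation_expression` uses `strftime` format strings; the same hourly/daily truncation can be sketched with Python's `datetime.strftime` using the formats from the row above (this `truncate` helper is a hypothetical plain-Python analogue, not part of Airflow):

```python
from datetime import datetime

def truncate(dt: datetime, granularity: str) -> str:
    # Mirror the SQLite strftime formats from the row above:
    # hourly -> %Y-%m-%dT%H:00:00Z, daily -> %Y-%m-%dT00:00:00Z.
    if granularity == "hourly":
        return dt.strftime("%Y-%m-%dT%H:00:00Z")
    if granularity == "daily":
        return dt.strftime("%Y-%m-%dT00:00:00Z")
    raise ValueError(f"Unsupported granularity: {granularity}")
```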
bindOrCreate
public <T> T bindOrCreate(String name, Bindable<T> target, BindHandler handler) { return bindOrCreate(ConfigurationPropertyName.of(name), target, handler); }
Bind the specified target {@link Bindable} using this binder's {@link ConfigurationPropertySource property sources} or create a new instance using the type of the {@link Bindable} if the result of the binding is {@code null}. @param name the configuration property name to bind @param target the target bindable @param handler the bind handler @param <T> the bound type @return the bound or created object @since 2.2.0 @see #bindOrCreate(ConfigurationPropertyName, Bindable, BindHandler)
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/Binder.java
333
[ "name", "target", "handler" ]
T
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
packbits
def packbits(a, /, axis=None, bitorder="big"): """ packbits(a, /, axis=None, bitorder='big') Packs the elements of a binary-valued array into bits in a uint8 array. The result is padded to full bytes by inserting zero bits at the end. Parameters ---------- a : array_like An array of integers or booleans whose elements should be packed to bits. axis : int, optional The dimension over which bit-packing is done. ``None`` implies packing the flattened array. bitorder : {'big', 'little'}, optional The order of the input bits. 'big' will mimic bin(val), ``[0, 0, 0, 0, 0, 0, 1, 1] => 3 = 0b00000011``, 'little' will reverse the order so ``[1, 1, 0, 0, 0, 0, 0, 0] => 3``. Defaults to 'big'. Returns ------- packed : ndarray Array of type uint8 whose elements represent bits corresponding to the logical (0 or nonzero) value of the input elements. The shape of `packed` has the same number of dimensions as the input (unless `axis` is None, in which case the output is 1-D). See Also -------- unpackbits: Unpacks elements of a uint8 array into a binary-valued output array. Examples -------- >>> import numpy as np >>> a = np.array([[[1,0,1], ... [0,1,0]], ... [[1,1,0], ... [0,0,1]]]) >>> b = np.packbits(a, axis=-1) >>> b array([[[160], [ 64]], [[192], [ 32]]], dtype=uint8) Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, and 32 = 0010 0000. """ return (a,)
packbits(a, /, axis=None, bitorder='big') Packs the elements of a binary-valued array into bits in a uint8 array. The result is padded to full bytes by inserting zero bits at the end. Parameters ---------- a : array_like An array of integers or booleans whose elements should be packed to bits. axis : int, optional The dimension over which bit-packing is done. ``None`` implies packing the flattened array. bitorder : {'big', 'little'}, optional The order of the input bits. 'big' will mimic bin(val), ``[0, 0, 0, 0, 0, 0, 1, 1] => 3 = 0b00000011``, 'little' will reverse the order so ``[1, 1, 0, 0, 0, 0, 0, 0] => 3``. Defaults to 'big'. Returns ------- packed : ndarray Array of type uint8 whose elements represent bits corresponding to the logical (0 or nonzero) value of the input elements. The shape of `packed` has the same number of dimensions as the input (unless `axis` is None, in which case the output is 1-D). See Also -------- unpackbits: Unpacks elements of a uint8 array into a binary-valued output array. Examples -------- >>> import numpy as np >>> a = np.array([[[1,0,1], ... [0,1,0]], ... [[1,1,0], ... [0,0,1]]]) >>> b = np.packbits(a, axis=-1) >>> b array([[[160], [ 64]], [[192], [ 32]]], dtype=uint8) Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, and 32 = 0010 0000.
python
numpy/_core/multiarray.py
1,182
[ "a", "axis", "bitorder" ]
false
1
6.48
numpy/numpy
31,054
numpy
false
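The `packbits` body shown above is a dispatch stub that just returns its input; the packing the docstring describes can be sketched in pure Python for a flat bit list (no `axis` support — a simplification of the real N-dimensional behavior):

```python
def packbits_flat(bits, bitorder="big"):
    # Pack 0/1 (or truthy) values into bytes, zero-padded to a full final byte.
    out = []
    for i in range(0, len(bits), 8):
        chunk = [1 if b else 0 for b in bits[i:i + 8]]
        chunk += [0] * (8 - len(chunk))   # pad the last byte with zero bits
        if bitorder == "little":
            chunk = chunk[::-1]           # reverse bit order within the byte
        byte = 0
        for b in chunk:                   # accumulate MSB first
            byte = (byte << 1) | b
        out.append(byte)
    return out
```

This reproduces the docstring's identities: `[0,0,0,0,0,0,1,1]` packs to 3 big-endian, and `[1,0,1]` pads to `1010 0000` = 160.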
streamingIterator
@Override public CloseableIterator<Record> streamingIterator(BufferSupplier bufferSupplier) { // the older message format versions do not support streaming, so we return the normal iterator return iterator(bufferSupplier); }
Get an iterator for the nested entries contained within this batch. Note that if the batch is not compressed, then this method will return an iterator over the shallow record only (i.e. this object). @return An iterator over the records contained within this batch
java
clients/src/main/java/org/apache/kafka/common/record/AbstractLegacyRecordBatch.java
264
[ "bufferSupplier" ]
true
1
6
apache/kafka
31,560
javadoc
false
invokeDriver
private <K, V> void invokeDriver( AdminApiHandler<K, V> handler, AdminApiFuture<K, V> future, Integer timeoutMs ) { long currentTimeMs = time.milliseconds(); long deadlineMs = calcDeadlineMs(currentTimeMs, timeoutMs); AdminApiDriver<K, V> driver = new AdminApiDriver<>( handler, future, deadlineMs, retryBackoffMs, retryBackoffMaxMs, logContext ); maybeSendRequests(driver, currentTimeMs); }
Forcefully terminates an ongoing transaction for a given transactional ID. <p> This API is intended for well-formed but long-running transactions that are known to the transaction coordinator. It is primarily designed for supporting 2PC (two-phase commit) workflows, where a coordinator may need to unilaterally terminate a participant transaction that hasn't completed. </p> @param transactionalId The transactional ID whose active transaction should be forcefully terminated. @return a {@link TerminateTransactionResult} that can be used to await the operation result.
java
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
5,074
[ "handler", "future", "timeoutMs" ]
void
true
1
6.24
apache/kafka
31,560
javadoc
false
_get_period_range_edges
def _get_period_range_edges( first: Period, last: Period, freq: BaseOffset, closed: Literal["right", "left"] = "left", origin: TimeGrouperOrigin = "start_day", offset: Timedelta | None = None, ) -> tuple[Period, Period]: """ Adjust the provided `first` and `last` Periods to the respective Period of the given offset that encompasses them. Parameters ---------- first : pd.Period The beginning Period of the range to be adjusted. last : pd.Period The ending Period of the range to be adjusted. freq : pd.DateOffset The freq to which the Periods will be adjusted. closed : {'right', 'left'}, default "left" Which side of bin interval is closed. origin : {'epoch', 'start', 'start_day'}, Timestamp, default 'start_day' The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. If a timestamp is not used, these values are also supported: - 'epoch': `origin` is 1970-01-01 - 'start': `origin` is the first value of the timeseries - 'start_day': `origin` is the first day at midnight of the timeseries offset : pd.Timedelta, default is None An offset timedelta added to the origin. Returns ------- A tuple of length 2, containing the adjusted pd.Period objects. """ if not all(isinstance(obj, Period) for obj in [first, last]): raise TypeError("'first' and 'last' must be instances of type Period") # GH 23882 first_ts = first.to_timestamp() last_ts = last.to_timestamp() adjust_first = not freq.is_on_offset(first_ts) adjust_last = freq.is_on_offset(last_ts) first_ts, last_ts = _get_timestamp_range_edges( first_ts, last_ts, freq, unit="ns", closed=closed, origin=origin, offset=offset ) first = (first_ts + int(adjust_first) * freq).to_period(freq) last = (last_ts - int(adjust_last) * freq).to_period(freq) return first, last
Adjust the provided `first` and `last` Periods to the respective Period of the given offset that encompasses them. Parameters ---------- first : pd.Period The beginning Period of the range to be adjusted. last : pd.Period The ending Period of the range to be adjusted. freq : pd.DateOffset The freq to which the Periods will be adjusted. closed : {'right', 'left'}, default "left" Which side of bin interval is closed. origin : {'epoch', 'start', 'start_day'}, Timestamp, default 'start_day' The timestamp on which to adjust the grouping. The timezone of origin must match the timezone of the index. If a timestamp is not used, these values are also supported: - 'epoch': `origin` is 1970-01-01 - 'start': `origin` is the first value of the timeseries - 'start_day': `origin` is the first day at midnight of the timeseries offset : pd.Timedelta, default is None An offset timedelta added to the origin. Returns ------- A tuple of length 2, containing the adjusted pd.Period objects.
python
pandas/core/resample.py
2,914
[ "first", "last", "freq", "closed", "origin", "offset" ]
tuple[Period, Period]
true
2
6.88
pandas-dev/pandas
47,362
numpy
false
toString
@Override public String toString() { String className = getClass().getName(); if (this.wrappedObject == null) { return className + ": no wrapped object set"; } return className + ": wrapping object [" + ObjectUtils.identityToString(this.wrappedObject) + ']'; }
Parse the given property name into the corresponding property name tokens. @param propertyName the property name to parse @return representation of the parsed property tokens
java
spring-beans/src/main/java/org/springframework/beans/AbstractNestablePropertyAccessor.java
1,004
[]
String
true
2
7.28
spring-projects/spring-framework
59,386
javadoc
false
decrement
public static InetAddress decrement(InetAddress address) { byte[] addr = address.getAddress(); int i = addr.length - 1; while (i >= 0 && addr[i] == (byte) 0x00) { addr[i] = (byte) 0xff; i--; } checkArgument(i >= 0, "Decrementing %s would wrap.", address); addr[i]--; return bytesToInetAddress(addr, null); }
Returns a new InetAddress that is one less than the passed in address. This method works for both IPv4 and IPv6 addresses. @param address the InetAddress to decrement @return a new InetAddress that is one less than the passed in address @throws IllegalArgumentException if InetAddress is at the beginning of its range @since 18.0
java
android/guava/src/com/google/common/net/InetAddresses.java
1,179
[ "address" ]
InetAddress
true
3
8.08
google/guava
51,352
javadoc
false
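The borrow propagation in Guava's `decrement` works purely on raw address bytes, so it translates directly to Python over a `bytes` value (a sketch of the same loop, not Guava's API):

```python
def decrement(addr: bytes) -> bytes:
    # Walk from the least-significant byte, turning 0x00 into 0xFF
    # until a decrementable byte is found (borrow propagation).
    b = bytearray(addr)
    i = len(b) - 1
    while i >= 0 and b[i] == 0x00:
        b[i] = 0xFF
        i -= 1
    if i < 0:
        raise ValueError("Decrementing would wrap past the start of the range")
    b[i] -= 1
    return bytes(b)
```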
orElseThrow
public <X extends Throwable> T orElseThrow(Supplier<? extends X> exceptionSupplier) throws X { if (this.value == null) { throw exceptionSupplier.get(); } return this.value; }
Return the object that was bound, or throw an exception to be created by the provided supplier if no value has been bound. @param <X> the type of the exception to be thrown @param exceptionSupplier the supplier which will return the exception to be thrown @return the present value @throws X if there is no value present
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/BindResult.java
127
[ "exceptionSupplier" ]
T
true
2
7.76
spring-projects/spring-boot
79,428
javadoc
false
_slow16
private int _slow16() throws IOException { if (_inputPtr >= _inputEnd) { loadMoreGuaranteed(); } int v = (_inputBuffer[_inputPtr++] & 0xFF); if (_inputPtr >= _inputEnd) { loadMoreGuaranteed(); } return (v << 8) + (_inputBuffer[_inputPtr++] & 0xFF); }
Method used to decode explicit length of a variable-length value (or, for indefinite/chunked, indicate that one is not known). Note that long (64-bit) length is only allowed if it fits in 32-bit signed int, for now; expectation being that longer values are always encoded as chunks.
java
libs/x-content/impl/src/main/java/org/elasticsearch/xcontent/provider/cbor/ESCborParser.java
156
[]
true
3
6.88
elastic/elasticsearch
75,680
javadoc
false
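`_slow16` combines two buffered bytes big-endian as `(hi << 8) + lo`. A sketch of that combination over an in-memory buffer (the buffer-refill logic of the real parser is omitted):

```python
def read_uint16_be(buf: bytes, pos: int):
    # Combine two bytes big-endian, as _slow16 does: (hi << 8) + lo.
    hi = buf[pos] & 0xFF
    lo = buf[pos + 1] & 0xFF
    return (hi << 8) + lo, pos + 2  # value and advanced position
```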
execute
def execute(self, context: Context): """ Execute AWS Glue Job from Airflow. :return: the current Glue job ID. """ self.log.info( "Initializing AWS Glue Job: %s. Wait for completion: %s", self.job_name, self.wait_for_completion, ) glue_job_run = self.hook.initialize_job(self.script_args, self.run_job_kwargs) self._job_run_id = glue_job_run["JobRunId"] glue_job_run_url = GlueJobRunDetailsLink.format_str.format( aws_domain=GlueJobRunDetailsLink.get_aws_domain(self.hook.conn_partition), region_name=self.hook.conn_region_name, job_name=urllib.parse.quote(self.job_name, safe=""), job_run_id=self._job_run_id, ) GlueJobRunDetailsLink.persist( context=context, operator=self, region_name=self.hook.conn_region_name, aws_partition=self.hook.conn_partition, job_name=urllib.parse.quote(self.job_name, safe=""), job_run_id=self._job_run_id, ) self.log.info("You can monitor this Glue Job run at: %s", glue_job_run_url) if self.deferrable: self.defer( trigger=GlueJobCompleteTrigger( job_name=self.job_name, run_id=self._job_run_id, verbose=self.verbose, aws_conn_id=self.aws_conn_id, waiter_delay=self.waiter_delay, waiter_max_attempts=self.waiter_max_attempts, region_name=self.region_name, ), method_name="execute_complete", ) elif self.wait_for_completion: glue_job_run = self.hook.job_completion( self.job_name, self._job_run_id, self.verbose, self.sleep_before_return ) self.log.info( "AWS Glue Job: %s status: %s. Run Id: %s", self.job_name, glue_job_run["JobRunState"], self._job_run_id, ) else: self.log.info("AWS Glue Job: %s. Run Id: %s", self.job_name, self._job_run_id) return self._job_run_id
Execute AWS Glue Job from Airflow. :return: the current Glue job ID.
python
providers/amazon/src/airflow/providers/amazon/aws/operators/glue.py
214
[ "self", "context" ]
true
4
6.8
apache/airflow
43,597
unknown
false
addClassIfPresent
private static void addClassIfPresent(Collection<Class<?>> collection, String className) { try { collection.add(ClassUtils.forName(className, null)); } catch (Throwable ex) { // Ignore } }
Return the description for the given request. By default this method will return a description based on the request {@code servletPath} and {@code pathInfo}. @param request the source request @return the description
java
core/spring-boot/src/main/java/org/springframework/boot/web/servlet/support/ErrorPageFilter.java
302
[ "collection", "className" ]
void
true
2
8.08
spring-projects/spring-boot
79,428
javadoc
false
computeNext
@Override protected @Nullable String computeNext() { /* * The returned string will be from the end of the last match to the beginning of the next * one. nextStart is the start position of the returned substring, while offset is the place * to start looking for a separator. */ int nextStart = offset; while (offset != -1) { int start = nextStart; int end; int separatorPosition = separatorStart(offset); if (separatorPosition == -1) { end = toSplit.length(); offset = -1; } else { end = separatorPosition; offset = separatorEnd(separatorPosition); } if (offset == nextStart) { /* * This occurs when some pattern has an empty match, even if it doesn't match the empty * string -- for example, if it requires lookahead or the like. The offset must be * increased to look for separators beyond this point, without changing the start position * of the next returned substring -- so nextStart stays the same. */ offset++; if (offset > toSplit.length()) { offset = -1; } continue; } while (start < end && trimmer.matches(toSplit.charAt(start))) { start++; } while (end > start && trimmer.matches(toSplit.charAt(end - 1))) { end--; } if (omitEmptyStrings && start == end) { // Don't include the (unused) separator in next split string. nextStart = offset; continue; } if (limit == 1) { // The limit has been reached, return the rest of the string as the // final item. This is tested after empty string removal so that // empty strings do not count towards the limit. end = toSplit.length(); offset = -1; // Since we may have changed the end, we need to trim it again. while (end > start && trimmer.matches(toSplit.charAt(end - 1))) { end--; } } else { limit--; } return toSplit.subSequence(start, end).toString(); } return endOfData(); }
Returns the first index in {@code toSplit} after {@code separatorPosition} that does not contain a separator. This method is only invoked after a call to {@code separatorStart}.
java
android/guava/src/com/google/common/base/Splitter.java
550
[]
String
true
14
6.64
google/guava
51,352
javadoc
false
diff
def diff( self, periods: int = 1, ) -> NDFrameT: """ First discrete difference of element. Calculates the difference of each element compared with another element in the group (default is element in previous row). Parameters ---------- periods : int, default 1 Periods to shift for calculating difference, accepts negative values. Returns ------- Series or DataFrame First differences. %(see_also)s Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "a", "b", "b", "b"] >>> ser = pd.Series([7, 2, 8, 4, 3, 3], index=lst) >>> ser a 7 a 2 a 8 b 4 b 3 b 3 dtype: int64 >>> ser.groupby(level=0).diff() a NaN a -5.0 a 6.0 b NaN b -1.0 b 0.0 dtype: float64 For DataFrameGroupBy: >>> data = {"a": [1, 3, 5, 7, 7, 8, 3], "b": [1, 4, 8, 4, 4, 2, 1]} >>> df = pd.DataFrame( ... data, index=["dog", "dog", "dog", "mouse", "mouse", "mouse", "mouse"] ... ) >>> df a b dog 1 1 dog 3 4 dog 5 8 mouse 7 4 mouse 7 4 mouse 8 2 mouse 3 1 >>> df.groupby(level=0).diff() a b dog NaN NaN dog 2.0 3.0 dog 2.0 4.0 mouse NaN NaN mouse 0.0 0.0 mouse 1.0 -2.0 mouse -5.0 -1.0 """ obj = self._obj_with_exclusions shifted = self.shift(periods=periods) # GH45562 - to retain existing behavior and match behavior of Series.diff(), # int8 and int16 are coerced to float32 rather than float64. dtypes_to_f32 = ["int8", "int16"] if obj.ndim == 1: if obj.dtype in dtypes_to_f32: shifted = shifted.astype("float32") else: to_coerce = [c for c, dtype in obj.dtypes.items() if dtype in dtypes_to_f32] if to_coerce: shifted = shifted.astype(dict.fromkeys(to_coerce, "float32")) return obj - shifted
First discrete difference of element. Calculates the difference of each element compared with another element in the group (default is element in previous row). Parameters ---------- periods : int, default 1 Periods to shift for calculating difference, accepts negative values. Returns ------- Series or DataFrame First differences. %(see_also)s Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "a", "b", "b", "b"] >>> ser = pd.Series([7, 2, 8, 4, 3, 3], index=lst) >>> ser a 7 a 2 a 8 b 4 b 3 b 3 dtype: int64 >>> ser.groupby(level=0).diff() a NaN a -5.0 a 6.0 b NaN b -1.0 b 0.0 dtype: float64 For DataFrameGroupBy: >>> data = {"a": [1, 3, 5, 7, 7, 8, 3], "b": [1, 4, 8, 4, 4, 2, 1]} >>> df = pd.DataFrame( ... data, index=["dog", "dog", "dog", "mouse", "mouse", "mouse", "mouse"] ... ) >>> df a b dog 1 1 dog 3 4 dog 5 8 mouse 7 4 mouse 7 4 mouse 8 2 mouse 3 1 >>> df.groupby(level=0).diff() a b dog NaN NaN dog 2.0 3.0 dog 2.0 4.0 mouse NaN NaN mouse 0.0 0.0 mouse 1.0 -2.0 mouse -5.0 -1.0
python
pandas/core/groupby/groupby.py
5,263
[ "self", "periods" ]
NDFrameT
true
5
8.4
pandas-dev/pandas
47,362
numpy
false
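The SeriesGroupBy example in the `diff` docstring can be reproduced with a dict-based sketch, no pandas required (`None` stands in for NaN; negative `periods` are not handled here, unlike the real method):

```python
def group_diff(keys, values, periods=1):
    # First discrete difference within each group; positions with no
    # element `periods` back in the same group yield None.
    seen_by_key = {}  # group key -> values seen so far, in order
    out = []
    for k, v in zip(keys, values):
        seen = seen_by_key.setdefault(k, [])
        out.append(v - seen[-periods] if len(seen) >= periods else None)
        seen.append(v)
    return out
```

This matches the docstring: groups `a=[7,2,8]`, `b=[4,3,3]` give `[None, -5, 6, None, -1, 0]`.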
booleanValue
public boolean booleanValue() { return value; }
Returns the value of this MutableBoolean as a boolean. @return the boolean value represented by this object.
java
src/main/java/org/apache/commons/lang3/mutable/MutableBoolean.java
80
[]
true
1
6.32
apache/commons-lang
2,896
javadoc
false
truncateIfIntegral
static std::optional<llvm::APSInt> truncateIfIntegral(const FloatingLiteral &FloatLiteral) { const double Value = FloatLiteral.getValueAsApproximateDouble(); if (std::fmod(Value, 1) == 0) { if (Value >= static_cast<double>(1U << 31)) return std::nullopt; return llvm::APSInt::get(static_cast<int64_t>(Value)); } return std::nullopt; }
Returns an integer if the fractional part of a `FloatingLiteral` is `0`.
cpp
clang-tools-extra/clang-tidy/abseil/DurationRewriter.cpp
21
[]
true
3
6.56
llvm/llvm-project
36,021
doxygen
false
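The integral check in `truncateIfIntegral` (`fmod(value, 1) == 0`, plus the guard against values at or above 2^31) ports directly to Python, with `None` standing in for `std::nullopt`:

```python
import math

def truncate_if_integral(value: float):
    # Return int(value) when the fractional part is exactly 0 and the
    # value is below 2**31; otherwise None (std::nullopt in the original).
    if math.fmod(value, 1) == 0:
        if value >= float(1 << 31):
            return None
        return int(value)
    return None
```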
getOrigin
@SuppressWarnings("unchecked") static <K> @Nullable Origin getOrigin(@Nullable Object source, K key) { if (!(source instanceof OriginLookup)) { return null; } try { return ((OriginLookup<K>) source).getOrigin(key); } catch (Throwable ex) { return null; } }
Attempt to look up the origin from the given source. If the source is not a {@link OriginLookup} or if an exception occurs during lookup then {@code null} is returned. @param source the source object @param key the key to lookup @param <K> the key type @return an {@link Origin} or {@code null}
java
core/spring-boot/src/main/java/org/springframework/boot/origin/OriginLookup.java
49
[ "source", "key" ]
Origin
true
3
7.92
spring-projects/spring-boot
79,428
javadoc
false
_str_map
def _str_map( self, f, na_value=lib.no_default, dtype: NpDtype | None = None, convert: bool = True, ): """ Map a callable over valid elements of the array. Parameters ---------- f : Callable A function to call on each non-NA element. na_value : Scalar, optional The value to set for NA values. Might also be used for the fill value if the callable `f` raises an exception. This defaults to ``self.dtype.na_value`` which is ``np.nan`` for object-dtype and Categorical and ``pd.NA`` for StringArray. dtype : Dtype, optional The dtype of the result array. convert : bool, default True Whether to call `maybe_convert_objects` on the resulting ndarray """ if dtype is None: dtype = np.dtype("object") if na_value is lib.no_default: na_value = self.dtype.na_value # type: ignore[attr-defined] if not len(self): return np.array([], dtype=dtype) arr = np.asarray(self, dtype=object) mask = isna(arr) map_convert = convert and not np.all(mask) try: result = lib.map_infer_mask( arr, f, mask.view(np.uint8), convert=map_convert ) except (TypeError, AttributeError) as err: # Reraise the exception if callable `f` got wrong number of args. # The user may want to be warned by this, instead of getting NaN p_err = ( r"((takes)|(missing)) (?(2)from \d+ to )?\d+ " r"(?(3)required )positional arguments?" ) if len(err.args) >= 1 and re.search(p_err, err.args[0]): # FIXME: this should be totally avoidable raise err def g(x): # This type of fallback behavior can be removed once # we remove object-dtype .str accessor. try: return f(x) except (TypeError, AttributeError): return na_value return self._str_map(g, na_value=na_value, dtype=dtype) if not isinstance(result, np.ndarray): return result if na_value is not np.nan: np.putmask(result, mask, na_value) if convert and result.dtype == object: result = lib.maybe_convert_objects(result) return result
Map a callable over valid elements of the array. Parameters ---------- f : Callable A function to call on each non-NA element. na_value : Scalar, optional The value to set for NA values. Might also be used for the fill value if the callable `f` raises an exception. This defaults to ``self.dtype.na_value`` which is ``np.nan`` for object-dtype and Categorical and ``pd.NA`` for StringArray. dtype : Dtype, optional The dtype of the result array. convert : bool, default True Whether to call `maybe_convert_objects` on the resulting ndarray
python
pandas/core/strings/object_array.py
50
[ "self", "f", "na_value", "dtype", "convert" ]
true
11
6.8
pandas-dev/pandas
47,362
numpy
false
register_option
def register_option( key: str, defval: object, doc: str = "", validator: Callable[[object], Any] | None = None, cb: Callable[[str], Any] | None = None, ) -> None: """ Register an option in the package-wide pandas config object Parameters ---------- key : str Fully-qualified key, e.g. "x.y.option - z". defval : object Default value of the option. doc : str Description of the option. validator : Callable, optional Function of a single argument, should raise `ValueError` if called with a value which is not a legal value for the option. cb a function of a single argument "key", which is called immediately after an option value is set/reset. key is the full name of the option. Raises ------ ValueError if `validator` is specified and `defval` is not a valid value. """ import keyword import tokenize key = key.lower() if key in _registered_options: raise OptionError(f"Option '{key}' has already been registered") if key in _reserved_keys: raise OptionError(f"Option '{key}' is a reserved key") # the default value should be legal if validator: validator(defval) # walk the nested dict, creating dicts as needed along the path path = key.split(".") for k in path: if not re.match("^" + tokenize.Name + "$", k): raise ValueError(f"{k} is not a valid identifier") if keyword.iskeyword(k): raise ValueError(f"{k} is a python keyword") cursor = _global_config msg = "Path prefix to option '{option}' is already an option" for i, p in enumerate(path[:-1]): if not isinstance(cursor, dict): raise OptionError(msg.format(option=".".join(path[:i]))) if p not in cursor: cursor[p] = {} cursor = cursor[p] if not isinstance(cursor, dict): raise OptionError(msg.format(option=".".join(path[:-1]))) cursor[path[-1]] = defval # initialize # save the option metadata _registered_options[key] = RegisteredOption( key=key, defval=defval, doc=doc, validator=validator, cb=cb )
Register an option in the package-wide pandas config object Parameters ---------- key : str Fully-qualified key, e.g. "x.y.option - z". defval : object Default value of the option. doc : str Description of the option. validator : Callable, optional Function of a single argument, should raise `ValueError` if called with a value which is not a legal value for the option. cb a function of a single argument "key", which is called immediately after an option value is set/reset. key is the full name of the option. Raises ------ ValueError if `validator` is specified and `defval` is not a valid value.
python
pandas/_config/config.py
521
[ "key", "defval", "doc", "validator", "cb" ]
None
true
11
6.8
pandas-dev/pandas
47,362
numpy
false
complete
public void complete(T value) { try { if (value instanceof RuntimeException) throw new IllegalArgumentException("The argument to complete can not be an instance of RuntimeException"); if (!result.compareAndSet(INCOMPLETE_SENTINEL, value)) throw new IllegalStateException("Invalid attempt to complete a request future which is already complete"); fireSuccess(); } finally { completedLatch.countDown(); } }
Complete the request successfully. After this call, {@link #succeeded()} will return true and the value can be obtained through {@link #value()}. @param value corresponding value (or null if there is none) @throws IllegalStateException if the future has already been completed @throws IllegalArgumentException if the argument is an instance of {@link RuntimeException}
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
122
[ "value" ]
void
true
3
6.08
apache/kafka
31,560
javadoc
false
write_dag
def write_dag( cls, *, dag_id: str, bundle_name: str, bundle_version: str | None = None, version_number: int = 1, session: Session = NEW_SESSION, ) -> DagVersion: """ Write a new DagVersion into database. Checks if a version of the DAG exists and increments the version number if it does. :param dag_id: The DAG ID. :param version_number: The version number. :param session: The database session. :return: The DagVersion object. """ existing_dag_version = session.scalar( with_row_locks(cls._latest_version_select(dag_id), of=DagVersion, session=session, nowait=True) ) if existing_dag_version: version_number = existing_dag_version.version_number + 1 dag_version = DagVersion( dag_id=dag_id, version_number=version_number, bundle_name=bundle_name, bundle_version=bundle_version, ) log.debug("Writing DagVersion %s to the DB", dag_version) session.add(dag_version) log.debug("DagVersion %s written to the DB", dag_version) return dag_version
Write a new DagVersion into database. Checks if a version of the DAG exists and increments the version number if it does. :param dag_id: The DAG ID. :param version_number: The version number. :param session: The database session. :return: The DagVersion object.
python
airflow-core/src/airflow/models/dag_version.py
107
[ "cls", "dag_id", "bundle_name", "bundle_version", "version_number", "session" ]
DagVersion
true
2
8.08
apache/airflow
43,597
sphinx
false
build
public ThreadPoolTaskExecutor build() { return configure(new ThreadPoolTaskExecutor()); }
Build a new {@link ThreadPoolTaskExecutor} instance and configure it using this builder. @return a configured {@link ThreadPoolTaskExecutor} instance. @see #build(Class) @see #configure(ThreadPoolTaskExecutor)
java
core/spring-boot/src/main/java/org/springframework/boot/task/ThreadPoolTaskExecutorBuilder.java
301
[]
ThreadPoolTaskExecutor
true
1
6
spring-projects/spring-boot
79,428
javadoc
false
get_current_worker_task
def get_current_worker_task(): """Currently executing task, that was applied by the worker. This is used to differentiate between the actual task executed by the worker and any task that was called within a task (using ``task.__call__`` or ``task.apply``) """ for task in reversed(_task_stack.stack): if not task.request.called_directly: return task
Currently executing task, that was applied by the worker. This is used to differentiate between the actual task executed by the worker and any task that was called within a task (using ``task.__call__`` or ``task.apply``)
python
celery/_state.py
126
[]
false
3
6.24
celery/celery
27,741
unknown
false
_has_externally_shared_axis
def _has_externally_shared_axis(ax1: Axes, compare_axis: str) -> bool: """ Return whether an axis is externally shared. Parameters ---------- ax1 : matplotlib.axes.Axes Axis to query. compare_axis : str `"x"` or `"y"` according to whether the X-axis or Y-axis is being compared. Returns ------- bool `True` if the axis is externally shared. Otherwise `False`. Notes ----- If two axes with different positions are sharing an axis, they can be referred to as *externally* sharing the common axis. If two axes sharing an axis also have the same position, they can be referred to as *internally* sharing the common axis (a.k.a twinning). _handle_shared_axes() is only interested in axes externally sharing an axis, regardless of whether either of the axes is also internally sharing with a third axis. """ if compare_axis == "x": axes = ax1.get_shared_x_axes() elif compare_axis == "y": axes = ax1.get_shared_y_axes() else: raise ValueError( "_has_externally_shared_axis() needs 'x' or 'y' as a second parameter" ) axes_siblings = axes.get_siblings(ax1) # Retain ax1 and any of its siblings which aren't in the same position as it ax1_points = ax1.get_position().get_points() for ax2 in axes_siblings: if not np.array_equal(ax1_points, ax2.get_position().get_points()): return True return False
Return whether an axis is externally shared. Parameters ---------- ax1 : matplotlib.axes.Axes Axis to query. compare_axis : str `"x"` or `"y"` according to whether the X-axis or Y-axis is being compared. Returns ------- bool `True` if the axis is externally shared. Otherwise `False`. Notes ----- If two axes with different positions are sharing an axis, they can be referred to as *externally* sharing the common axis. If two axes sharing an axis also have the same position, they can be referred to as *internally* sharing the common axis (a.k.a twinning). _handle_shared_axes() is only interested in axes externally sharing an axis, regardless of whether either of the axes is also internally sharing with a third axis.
python
pandas/plotting/_matplotlib/tools.py
342
[ "ax1", "compare_axis" ]
bool
true
6
6.88
pandas-dev/pandas
47,362
numpy
false
split_and_operate
def split_and_operate(self, func, *args, **kwargs) -> list[Block]: """ Split the block and apply func column-by-column. Parameters ---------- func : Block method *args **kwargs Returns ------- List[Block] """ assert self.ndim == 2 and self.shape[0] != 1 res_blocks = [] for nb in self._split(): rbs = func(nb, *args, **kwargs) res_blocks.extend(rbs) return res_blocks
Split the block and apply func column-by-column. Parameters ---------- func : Block method *args **kwargs Returns ------- List[Block]
python
pandas/core/internals/blocks.py
404
[ "self", "func" ]
list[Block]
true
3
6.56
pandas-dev/pandas
47,362
numpy
false
readFrom
public int readFrom(final Readable readable) throws IOException { final int oldSize = size; if (readable instanceof Reader) { final Reader r = (Reader) readable; ensureCapacity(size + 1); int read; while ((read = r.read(buffer, size, buffer.length - size)) != -1) { size += read; ensureCapacity(size + 1); } } else if (readable instanceof CharBuffer) { final CharBuffer cb = (CharBuffer) readable; final int remaining = cb.remaining(); ensureCapacity(size + remaining); cb.get(buffer, size, remaining); size += remaining; } else { while (true) { ensureCapacity(size + 1); final CharBuffer buf = CharBuffer.wrap(buffer, size, buffer.length - size); final int read = readable.read(buf); if (read == -1) { break; } size += read; } } return size - oldSize; }
If possible, reads chars from the provided {@link Readable} directly into underlying character buffer without making extra copies. @param readable object to read from @return the number of characters read @throws IOException if an I/O error occurs. @since 3.4 @see #appendTo(Appendable)
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
2,492
[ "readable" ]
true
6
7.44
apache/commons-lang
2,896
javadoc
false
getInstant
public @Nullable Instant getInstant(String key) { String s = get(key); if (s != null) { try { return Instant.ofEpochMilli(Long.parseLong(s)); } catch (NumberFormatException ex) { // Not valid epoch time } } return null; }
Return the value of the specified property as an {@link Instant} or {@code null} if the value is not a valid {@link Long} representation of an epoch time. @param key the key of the property @return the property value
java
core/spring-boot/src/main/java/org/springframework/boot/info/InfoProperties.java
65
[ "key" ]
Instant
true
3
8.24
spring-projects/spring-boot
79,428
javadoc
false
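The lenient parse-or-null contract of `getInstant` above maps naturally onto Python's `datetime`. A sketch under the assumption that properties are stored as strings of epoch milliseconds; the dict-based `props` argument is an illustrative stand-in for the Java `InfoProperties` object:

```python
from datetime import datetime, timezone

def get_instant(props: dict, key: str):
    """Return the property as a UTC datetime parsed from epoch millis,
    or None when the key is missing or not a valid integer -- the same
    lenient contract as the Java accessor above."""
    s = props.get(key)
    if s is None:
        return None
    try:
        return datetime.fromtimestamp(int(s) / 1000, tz=timezone.utc)
    except ValueError:
        return None
```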
visitDestructuringAssignment
function visitDestructuringAssignment(node: DestructuringAssignment, valueIsDiscarded: boolean): VisitResult<Expression> { if (hasExportedReferenceInDestructuringTarget(node.left)) { return flattenDestructuringAssignment( node, visitor, context, FlattenLevel.All, !valueIsDiscarded, ); } return visitEachChild(node, visitor, context); }
Visits a DestructuringAssignment to flatten destructuring to exported symbols. @param node The node to visit.
typescript
src/compiler/transformers/module/system.ts
1,643
[ "node", "valueIsDiscarded" ]
true
2
6.24
microsoft/TypeScript
107,154
jsdoc
false
replaceIn
public boolean replaceIn(final StringBuffer source, final int offset, final int length) { if (source == null) { return false; } final StrBuilder buf = new StrBuilder(length).append(source, offset, length); if (!substitute(buf, 0, length)) { return false; } source.replace(offset, offset + length, buf.toString()); return true; }
Replaces all the occurrences of variables within the given source buffer with their matching values from the resolver. The buffer is updated with the result. <p> Only the specified portion of the buffer will be processed. The rest of the buffer is not processed, but it is not deleted. </p> @param source the buffer to replace in, updated, null returns zero. @param offset the start offset within the array, must be valid. @param length the length within the buffer to be processed, must be valid. @return true if altered.
java
src/main/java/org/apache/commons/lang3/text/StrSubstitutor.java
785
[ "source", "offset", "length" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
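`replaceIn` above substitutes variables only within a window of the buffer, leaving the rest untouched. A rough Python analogue using `string.Template` (which fixes the `${var}` syntax, unlike the Java class's configurable delimiters, and returns a new string rather than mutating a buffer):

```python
from string import Template

def replace_in(source: str, offset: int, length: int, mapping: dict) -> str:
    """Substitute ${var} placeholders only inside
    source[offset:offset + length]; the rest of the string is untouched,
    mirroring the windowed contract of the Java method above."""
    region = source[offset:offset + length]
    replaced = Template(region).safe_substitute(mapping)
    return source[:offset] + replaced + source[offset + length:]

print(replace_in("keep ${a} keep", 5, 4, {"a": "X"}))
```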
getAnalysis
private static FailureAnalysis getAnalysis(Throwable rootFailure, Throwable cause) { StringBuilder description = new StringBuilder(String.format("%s:%n", cause.getMessage())); if (rootFailure != cause) { description.append(String.format("%n Resulting Failure: %s", getExceptionTypeAndMessage(rootFailure))); } return new FailureAnalysis(description.toString(), ACTION, rootFailure); }
Analyze the given failure for missing parameter name exceptions. @param failure the failure to analyze @return a failure analysis or {@code null}
java
core/spring-boot/src/main/java/org/springframework/boot/diagnostics/analyzer/MissingParameterNamesFailureAnalyzer.java
98
[ "rootFailure", "cause" ]
FailureAnalysis
true
2
7.92
spring-projects/spring-boot
79,428
javadoc
false
nanskew
def nanskew( values: np.ndarray, *, axis: AxisInt | None = None, skipna: bool = True, mask: npt.NDArray[np.bool_] | None = None, ) -> float: """ Compute the sample skewness. The statistic computed here is the adjusted Fisher-Pearson standardized moment coefficient G1. The algorithm computes this coefficient directly from the second and third central moment. Parameters ---------- values : ndarray axis : int, optional skipna : bool, default True mask : ndarray[bool], optional nan-mask if known Returns ------- result : float64 Unless input is a float array, in which case use the same precision as the input array. Examples -------- >>> from pandas.core import nanops >>> s = pd.Series([1, np.nan, 1, 2]) >>> nanops.nanskew(s.values) np.float64(1.7320508075688787) """ mask = _maybe_get_mask(values, skipna, mask) if values.dtype.kind != "f": values = values.astype("f8") count = _get_counts(values.shape, mask, axis) else: count = _get_counts(values.shape, mask, axis, dtype=values.dtype) if skipna and mask is not None: values = values.copy() np.putmask(values, mask, 0) elif not skipna and mask is not None and mask.any(): return np.nan with np.errstate(invalid="ignore", divide="ignore"): mean = values.sum(axis, dtype=np.float64) / count if axis is not None: mean = np.expand_dims(mean, axis) adjusted = values - mean if skipna and mask is not None: np.putmask(adjusted, mask, 0) adjusted2 = adjusted**2 adjusted3 = adjusted2 * adjusted m2 = adjusted2.sum(axis, dtype=np.float64) m3 = adjusted3.sum(axis, dtype=np.float64) # floating point error. See comment in nankurt max_abs = np.abs(values).max(axis, initial=0.0) eps = np.finfo(m2.dtype).eps constant_tolerance2 = ((eps * max_abs) ** 2) * count constant_tolerance3 = ((eps * max_abs) ** 3) * count m2 = _zero_out_fperr(m2, constant_tolerance2) m3 = _zero_out_fperr(m3, constant_tolerance3) with np.errstate(invalid="ignore", divide="ignore"): result = (count * (count - 1) ** 0.5 / (count - 2)) * (m3 / m2**1.5) dtype = values.dtype if dtype.kind == "f": result = result.astype(dtype, copy=False) if isinstance(result, np.ndarray): result = np.where(m2 == 0, 0, result) result[count < 3] = np.nan else: result = dtype.type(0) if m2 == 0 else result if count < 3: return np.nan return result
Compute the sample skewness. The statistic computed here is the adjusted Fisher-Pearson standardized moment coefficient G1. The algorithm computes this coefficient directly from the second and third central moment. Parameters ---------- values : ndarray axis : int, optional skipna : bool, default True mask : ndarray[bool], optional nan-mask if known Returns ------- result : float64 Unless input is a float array, in which case use the same precision as the input array. Examples -------- >>> from pandas.core import nanops >>> s = pd.Series([1, np.nan, 1, 2]) >>> nanops.nanskew(s.values) np.float64(1.7320508075688787)
python
pandas/core/nanops.py
1,214
[ "values", "axis", "skipna", "mask" ]
float
true
16
6.88
pandas-dev/pandas
47,362
numpy
false
subarray
public static char[] subarray(final char[] array, int startIndexInclusive, int endIndexExclusive) { if (array == null) { return null; } startIndexInclusive = max0(startIndexInclusive); endIndexExclusive = Math.min(endIndexExclusive, array.length); final int newSize = endIndexExclusive - startIndexInclusive; if (newSize <= 0) { return EMPTY_CHAR_ARRAY; } return arraycopy(array, startIndexInclusive, 0, newSize, char[]::new); }
Produces a new {@code char} array containing the elements between the start and end indices. <p> The start index is inclusive, the end index exclusive. Null array input produces null output. </p> @param array the input array. @param startIndexInclusive the starting index. Undervalue (&lt;0) is promoted to 0, overvalue (&gt;array.length) results in an empty array. @param endIndexExclusive elements up to endIndex-1 are present in the returned subarray. Undervalue (&lt; startIndex) produces empty array, overvalue (&gt;array.length) is demoted to array length. @return a new array containing the elements between the start and end indices. @since 2.1 @see Arrays#copyOfRange(char[], int, int)
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
7,816
[ "array", "startIndexInclusive", "endIndexExclusive" ]
true
3
7.6
apache/commons-lang
2,896
javadoc
false
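The clamping rules in `subarray` above (undervalued start promoted to 0, overvalued end demoted to the array length, empty result for inverted ranges, null in gives null out) can be written out explicitly in Python. Plain slicing already clamps, so the explicit version below exists only to mirror the Java contract step by step:

```python
def subarray(array, start: int, end: int):
    """Clamp indices like ArrayUtils.subarray: negative start -> 0,
    end capped at len(array), empty list for inverted ranges, and
    None passed straight through."""
    if array is None:
        return None
    start = max(start, 0)
    end = min(end, len(array))
    if end - start <= 0:
        return []
    return array[start:end]
```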
setPlainTextToMimePart
private void setPlainTextToMimePart(MimePart mimePart, String text) throws MessagingException { if (getEncoding() != null) { mimePart.setText(text, getEncoding()); } else { mimePart.setText(text); } }
Set the given plain text and HTML text as alternatives, offering both options to the email client. Requires multipart mode. <p><b>NOTE:</b> Invoke {@link #addInline} <i>after</i> {@code setText}; else, mail readers might not be able to resolve inline references correctly. @param plainText the plain text for the message @param htmlText the HTML text for the message @throws MessagingException in case of errors
java
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
867
[ "mimePart", "text" ]
void
true
2
6.72
spring-projects/spring-framework
59,386
javadoc
false
visitFunctionDeclaration
function visitFunctionDeclaration(node: FunctionDeclaration): VisitResult<Statement | undefined> { if (hasSyntacticModifier(node, ModifierFlags.Export)) { hoistedStatements = append( hoistedStatements, factory.updateFunctionDeclaration( node, visitNodes(node.modifiers, modifierVisitor, isModifierLike), node.asteriskToken, factory.getDeclarationName(node, /*allowComments*/ true, /*allowSourceMaps*/ true), /*typeParameters*/ undefined, visitNodes(node.parameters, visitor, isParameter), /*type*/ undefined, visitNode(node.body, visitor, isBlock), ), ); } else { hoistedStatements = append(hoistedStatements, visitEachChild(node, visitor, context)); } hoistedStatements = appendExportsOfHoistedDeclaration(hoistedStatements, node); return undefined; }
Visits a FunctionDeclaration, hoisting it to the outer module body function. @param node The node to visit.
typescript
src/compiler/transformers/module/system.ts
799
[ "node" ]
true
3
6.72
microsoft/TypeScript
107,154
jsdoc
false
aws_template_fields
def aws_template_fields(*template_fields: str) -> tuple[str, ...]: """Merge provided template_fields with generic one and return in alphabetical order.""" if not all(isinstance(tf, str) for tf in template_fields): msg = ( "Expected that all provided arguments are strings, but got " f"{', '.join(map(repr, template_fields))}." ) raise TypeError(msg) return tuple(sorted({"aws_conn_id", "region_name", "verify"} | set(template_fields)))
Merge provided template_fields with generic one and return in alphabetical order.
python
providers/amazon/src/airflow/providers/amazon/aws/utils/mixins.py
153
[]
tuple[str, ...]
true
2
6.4
apache/airflow
43,597
unknown
false
shift
public static void shift(final char[] array, final int offset) { if (array != null) { shift(array, 0, array.length, offset); } }
Shifts the order of the given char array. <p>There is no special handling for multi-dimensional arrays. This method does nothing for {@code null} or empty input arrays.</p> @param array the array to shift, may be {@code null}. @param offset The number of positions to rotate the elements. If the offset is larger than the number of elements to rotate, than the effective offset is modulo the number of elements to rotate. @since 3.5
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
6,929
[ "array", "offset" ]
void
true
2
6.88
apache/commons-lang
2,896
javadoc
false
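The `shift` contract above (rotate, with offsets larger than the length reduced modulo the element count, and null/empty inputs as no-ops) can be sketched on a Python list. The in-place slice assignment stands in for the Java array mutation:

```python
def shift(array, offset: int) -> None:
    """Rotate array in place by offset positions; offsets larger than
    the length wrap around, and None/empty inputs are no-ops, matching
    the ArrayUtils contract above."""
    if not array:
        return
    offset %= len(array)
    array[:] = array[-offset:] + array[:-offset]

nums = [1, 2, 3, 4]
shift(nums, 1)
```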
combine
@CanIgnoreReturnValue @Override Builder<E> combine(ImmutableSet.Builder<E> builder) { copyIfNecessary(); Builder<E> other = (Builder<E>) builder; for (int i = 0; i < other.n; i++) { add(other.elements[i]); } return this; }
Adds each element of {@code elements} to the {@code ImmutableSortedSet}, ignoring duplicate elements (only the first duplicate element is added). @param elements the elements to add to the {@code ImmutableSortedSet} @return this {@code Builder} object @throws NullPointerException if {@code elements} contains a null element
java
guava/src/com/google/common/collect/ImmutableSortedSet.java
571
[ "builder" ]
true
2
7.28
google/guava
51,352
javadoc
false
attrs
def attrs(self) -> dict[Hashable, Any]: """ Dictionary of global attributes of this dataset. .. warning:: attrs is experimental and may change without warning. See Also -------- DataFrame.flags : Global flags applying to this object. Notes ----- Many operations that create new datasets will copy ``attrs``. Copies are always deep so that changing ``attrs`` will only affect the present dataset. :func:`pandas.concat` and :func:`pandas.merge` will only copy ``attrs`` if all input datasets have the same ``attrs``. Examples -------- For Series: >>> ser = pd.Series([1, 2, 3]) >>> ser.attrs = {"A": [10, 20, 30]} >>> ser.attrs {'A': [10, 20, 30]} For DataFrame: >>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]}) >>> df.attrs = {"A": [10, 20, 30]} >>> df.attrs {'A': [10, 20, 30]} """ return self._attrs
Dictionary of global attributes of this dataset. .. warning:: attrs is experimental and may change without warning. See Also -------- DataFrame.flags : Global flags applying to this object. Notes ----- Many operations that create new datasets will copy ``attrs``. Copies are always deep so that changing ``attrs`` will only affect the present dataset. :func:`pandas.concat` and :func:`pandas.merge` will only copy ``attrs`` if all input datasets have the same ``attrs``. Examples -------- For Series: >>> ser = pd.Series([1, 2, 3]) >>> ser.attrs = {"A": [10, 20, 30]} >>> ser.attrs {'A': [10, 20, 30]} For DataFrame: >>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]}) >>> df.attrs = {"A": [10, 20, 30]} >>> df.attrs {'A': [10, 20, 30]}
python
pandas/core/generic.py
320
[ "self" ]
dict[Hashable, Any]
true
1
6.08
pandas-dev/pandas
47,362
unknown
false
getComment
public String getComment() { try { return ZipString.readString(this.data, this.commentPos, this.commentLength); } catch (UncheckedIOException ex) { if (ex.getCause() instanceof ClosedChannelException) { throw new IllegalStateException("Zip content closed", ex); } throw ex; } }
Return the zip comment, if any. @return the comment or {@code null}
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/ZipContent.java
183
[]
String
true
3
8.08
spring-projects/spring-boot
79,428
javadoc
false
randomLong
private long randomLong(final long n) { // Extracted from o.a.c.rng.core.BaseProvider.nextLong(long) long bits; long val; do { bits = random().nextLong() >>> 1; val = bits % n; } while (bits - val + n - 1 < 0); return val; }
Generates a {@code long} value between 0 (inclusive) and the specified value (exclusive). @param n Bound on the random number to be returned. Must be positive. @return a random {@code long} value between 0 (inclusive) and {@code n} (exclusive).
java
src/main/java/org/apache/commons/lang3/RandomUtils.java
425
[ "n" ]
true
1
7.2
apache/commons-lang
2,896
javadoc
false
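The rejection loop in `randomLong` above discards draws that land in the biased tail of the modulo, so every residue in `[0, n)` is equally likely. In the Java code the tail test is the signed-overflow check `bits - val + n - 1 < 0`; Python integers never overflow, so the sketch below spells the same bound out explicitly against `2**63`:

```python
import random

def random_long(n: int) -> int:
    """Uniform value in [0, n) via rejection sampling, as in the Java
    snippet above: draw 63 random bits and retry when the draw falls
    in the biased tail of the modulo."""
    if n <= 0:
        raise ValueError("n must be positive")
    while True:
        bits = random.getrandbits(63)
        val = bits % n
        # accept unless bits sits in the incomplete final block of size n
        if bits - val + (n - 1) < (1 << 63):
            return val
```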
setMaxPollRecords
public synchronized void setMaxPollRecords(long maxPollRecords) { if (maxPollRecords < 1) { throw new IllegalArgumentException("MaxPollRecords must be strictly superior to 0"); } this.maxPollRecords = maxPollRecords; }
Sets the maximum number of records returned in a single call to {@link #poll(Duration)}. @param maxPollRecords the max.poll.records.
java
clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
337
[ "maxPollRecords" ]
void
true
2
6.4
apache/kafka
31,560
javadoc
false
_get_connection
def _get_connection(self, write=False): """Prepare the connection for action. Arguments: write (bool): are we a writer? """ if self._session is not None: return self._lock.acquire() try: if self._session is not None: return # using either 'servers' or 'bundle_path' here: if self.servers: self._cluster = cassandra.cluster.Cluster( self.servers, port=self.port, auth_provider=self.auth_provider, **self.cassandra_options) else: # 'bundle_path' is guaranteed to be set self._cluster = cassandra.cluster.Cluster( cloud={ 'secure_connect_bundle': self.bundle_path, }, auth_provider=self.auth_provider, **self.cassandra_options) self._session = self._cluster.connect(self.keyspace) # We're forced to do concatenation below, as formatting would # blow up on superficial %s that'll be processed by Cassandra self._write_stmt = cassandra.query.SimpleStatement( Q_INSERT_RESULT.format( table=self.table, expires=self.cqlexpires), ) self._write_stmt.consistency_level = self.write_consistency self._read_stmt = cassandra.query.SimpleStatement( Q_SELECT_RESULT.format(table=self.table), ) self._read_stmt.consistency_level = self.read_consistency if write: # Only possible writers "workers" are allowed to issue # CREATE TABLE. This is to prevent conflicting situations # where both task-creator and task-executor would issue it # at the same time. # Anyway; if you're doing anything critical, you should # have created this table in advance, in which case # this query will be a no-op (AlreadyExists) make_stmt = cassandra.query.SimpleStatement( Q_CREATE_RESULT_TABLE.format(table=self.table), ) make_stmt.consistency_level = self.write_consistency try: self._session.execute(make_stmt) except cassandra.AlreadyExists: pass except cassandra.OperationTimedOut: # a heavily loaded or gone Cassandra cluster failed to respond. # leave this class in a consistent state if self._cluster is not None: self._cluster.shutdown() # also shuts down _session self._cluster = None self._session = None raise # we did fail after all - reraise finally: self._lock.release()
Prepare the connection for action. Arguments: write (bool): are we a writer?
python
celery/backends/cassandra.py
142
[ "self", "write" ]
false
7
6
celery/celery
27,741
google
false
hasnans
def hasnans(self) -> bool: """ Return True if there are any NaNs. Enables various performance speedups. Returns ------- bool See Also -------- Series.isna : Detect missing values. Series.notna : Detect existing (non-missing) values. Examples -------- >>> s = pd.Series([1, 2, 3, None]) >>> s 0 1.0 1 2.0 2 3.0 3 NaN dtype: float64 >>> s.hasnans True """ # error: Item "bool" of "Union[bool, ndarray[Any, dtype[bool_]], NDFrame]" # has no attribute "any" return bool(isna(self).any()) # type: ignore[union-attr]
Return True if there are any NaNs. Enables various performance speedups. Returns ------- bool See Also -------- Series.isna : Detect missing values. Series.notna : Detect existing (non-missing) values. Examples -------- >>> s = pd.Series([1, 2, 3, None]) >>> s 0 1.0 1 2.0 2 3.0 3 NaN dtype: float64 >>> s.hasnans True
python
pandas/core/base.py
916
[ "self" ]
bool
true
1
7.44
pandas-dev/pandas
47,362
unknown
false
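The `hasnans` property above reduces to "is any element missing". A pure-Python sketch over a plain iterable, treating both `None` and float NaN as missing (the pandas version additionally understands `NaT` and extension-array NA values, which this sketch ignores):

```python
import math

def hasnans(values) -> bool:
    """True if any element is None or a float NaN -- a pure-Python
    analogue of the Series.hasnans property above."""
    return any(
        v is None or (isinstance(v, float) and math.isnan(v))
        for v in values
    )
```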
rootLast
public static StandardStackTracePrinter rootLast() { return new StandardStackTracePrinter(EnumSet.noneOf(Option.class), UNLIMITED, null, null, null, null, null, null); }
Return a {@link StandardStackTracePrinter} that prints the stack trace with the root exception last (the same as {@link Throwable#printStackTrace()}). @return a {@link StandardStackTracePrinter} that prints the stack trace root last
java
core/spring-boot/src/main/java/org/springframework/boot/logging/StandardStackTracePrinter.java
297
[]
StandardStackTracePrinter
true
1
6.32
spring-projects/spring-boot
79,428
javadoc
false
compareMultiple
function compareMultiple(object, other, orders) { var index = -1, objCriteria = object.criteria, othCriteria = other.criteria, length = objCriteria.length, ordersLength = orders.length; while (++index < length) { var result = compareAscending(objCriteria[index], othCriteria[index]); if (result) { if (index >= ordersLength) { return result; } var order = orders[index]; return result * (order == 'desc' ? -1 : 1); } } // Fixes an `Array#sort` bug in the JS engine embedded in Adobe applications // that causes it, under certain circumstances, to provide the same value for // `object` and `other`. See https://github.com/jashkenas/underscore/pull/1247 // for more details. // // This also ensures a stable sort in V8 and other engines. // See https://bugs.chromium.org/p/v8/issues/detail?id=90 for more details. return object.index - other.index; }
Used by `_.orderBy` to compare multiple properties of a value to another and stable sort them. If `orders` is unspecified, all values are sorted in ascending order. Otherwise, specify an order of "desc" for descending or "asc" for ascending sort order of corresponding values. @private @param {Object} object The object to compare. @param {Object} other The other object to compare. @param {boolean[]|string[]} orders The order to sort by for each property. @returns {number} Returns the sort order indicator for `object`.
javascript
lodash.js
4,733
[ "object", "other", "orders" ]
false
5
6.08
lodash/lodash
61,490
jsdoc
false
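The `compareMultiple` comparator above walks the criteria pairwise and falls back to the original index for stability. The same behavior can be sketched in Python (the `order_by` helper name is hypothetical) using repeated stable sorts from the least to the most significant key, since Python's `sorted` already guarantees stability:

```python
def order_by(items, keys, orders):
    # Sort dicts by several keys, honoring a per-key "asc"/"desc" order.
    # Python's sort is stable, so repeated passes from the least to the
    # most significant key give the same guarantee lodash enforces with
    # its explicit index tiebreak.
    for key, order in reversed(list(zip(keys, orders))):
        items = sorted(items, key=lambda d: d[key], reverse=(order == "desc"))
    return items

rows = [
    {"user": "fred", "age": 48},
    {"user": "barney", "age": 34},
    {"user": "fred", "age": 40},
]
# Sort by user ascending, then age descending within each user.
result = order_by(rows, ["user", "age"], ["asc", "desc"])
```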
as_float_array
def as_float_array(X, *, copy=True, ensure_all_finite=True): """Convert an array-like to an array of floats. The new dtype will be np.float32 or np.float64, depending on the original type. The function can create a copy or modify the argument depending on the argument copy. Parameters ---------- X : {array-like, sparse matrix} The input data. copy : bool, default=True If True, a copy of X will be created. If False, a copy may still be returned if X's dtype is not a floating point type. ensure_all_finite : bool or 'allow-nan', default=True Whether to raise an error on np.inf, np.nan, pd.NA in X. The possibilities are: - True: Force all values of X to be finite. - False: accepts np.inf, np.nan, pd.NA in X. - 'allow-nan': accepts only np.nan and pd.NA values in X. Values cannot be infinite. .. versionadded:: 1.6 `force_all_finite` was renamed to `ensure_all_finite`. Returns ------- XT : {ndarray, sparse matrix} An array of type float. Examples -------- >>> from sklearn.utils import as_float_array >>> import numpy as np >>> array = np.array([0, 0, 1, 2, 2], dtype=np.int64) >>> as_float_array(array) array([0., 0., 1., 2., 2.]) """ if isinstance(X, np.matrix) or ( not isinstance(X, np.ndarray) and not sp.issparse(X) ): return check_array( X, accept_sparse=["csr", "csc", "coo"], dtype=np.float64, copy=copy, ensure_all_finite=ensure_all_finite, ensure_2d=False, ) elif sp.issparse(X) and X.dtype in [np.float32, np.float64]: return X.copy() if copy else X elif X.dtype in [np.float32, np.float64]: # is numpy array return X.copy("F" if X.flags["F_CONTIGUOUS"] else "C") if copy else X else: if X.dtype.kind in "uib" and X.dtype.itemsize <= 4: return_dtype = np.float32 else: return_dtype = np.float64 return X.astype(return_dtype)
Convert an array-like to an array of floats. The new dtype will be np.float32 or np.float64, depending on the original type. The function can create a copy or modify the argument depending on the argument copy. Parameters ---------- X : {array-like, sparse matrix} The input data. copy : bool, default=True If True, a copy of X will be created. If False, a copy may still be returned if X's dtype is not a floating point type. ensure_all_finite : bool or 'allow-nan', default=True Whether to raise an error on np.inf, np.nan, pd.NA in X. The possibilities are: - True: Force all values of X to be finite. - False: accepts np.inf, np.nan, pd.NA in X. - 'allow-nan': accepts only np.nan and pd.NA values in X. Values cannot be infinite. .. versionadded:: 1.6 `force_all_finite` was renamed to `ensure_all_finite`. Returns ------- XT : {ndarray, sparse matrix} An array of type float. Examples -------- >>> from sklearn.utils import as_float_array >>> import numpy as np >>> array = np.array([0, 0, 1, 2, 2], dtype=np.int64) >>> as_float_array(array) array([0., 0., 1., 2., 2.])
python
sklearn/utils/validation.py
231
[ "X", "copy", "ensure_all_finite" ]
false
14
7.6
scikit-learn/scikit-learn
64,340
numpy
false
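The final branch of `as_float_array` picks the output float width from the input dtype's kind and item size. A minimal pure-Python sketch of just that rule (helper name hypothetical; floating inputs never reach this branch in the original):

```python
def target_float_dtype(kind: str, itemsize: int) -> str:
    # Mirror the final promotion branch: unsigned ("u"), signed ("i") and
    # boolean ("b") dtypes no wider than 4 bytes go to float32; everything
    # else is promoted to float64.
    if kind in "uib" and itemsize <= 4:
        return "float32"
    return "float64"
```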
pad_or_backfill_inplace
def pad_or_backfill_inplace(
    values: np.ndarray,
    method: Literal["pad", "backfill"] = "pad",
    axis: AxisInt = 0,
    limit: int | None = None,
    limit_area: Literal["inside", "outside"] | None = None,
) -> None:
    """
    Perform an actual interpolation of values; values will be made 2-d
    if needed. Fills in place.

    Parameters
    ----------
    values : np.ndarray
        Input array.
    method : str, default "pad"
        Interpolation method. Could be "pad" or "backfill"
    axis : 0 or 1
        Interpolation axis
    limit : int, optional
        Index limit on interpolation.
    limit_area : str, optional
        Limit area for interpolation. Can be "inside" or "outside"

    Notes
    -----
    Modifies values in-place.
    """
    transf = (lambda x: x) if axis == 0 else (lambda x: x.T)

    # reshape a 1 dim if needed
    if values.ndim == 1:
        if axis != 0:  # pragma: no cover
            raise AssertionError("cannot interpolate on an ndim == 1 with axis != 0")
        values = values.reshape(tuple((1,) + values.shape))

    method = clean_fill_method(method)
    tvalues = transf(values)

    func = get_fill_func(method, ndim=2)
    # _pad_2d and _backfill_2d both modify tvalues inplace
    func(tvalues, limit=limit, limit_area=limit_area)
Perform an actual interpolation of values; values will be made 2-d
if needed. Fills in place.

Parameters
----------
values : np.ndarray
    Input array.
method : str, default "pad"
    Interpolation method. Could be "pad" or "backfill"
axis : 0 or 1
    Interpolation axis
limit : int, optional
    Index limit on interpolation.
limit_area : str, optional
    Limit area for interpolation. Can be "inside" or "outside"

Notes
-----
Modifies values in-place.
python
pandas/core/missing.py
821
[ "values", "method", "axis", "limit", "limit_area" ]
None
true
4
6.72
pandas-dev/pandas
47,362
numpy
false
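The "pad" path that `pad_or_backfill_inplace` dispatches to can be illustrated with a 1-D, pure-Python sketch (function name hypothetical) that forward-fills missing entries in place, honoring a `limit` on consecutive fills:

```python
def pad_inplace(values, limit=None):
    # Forward-fill None entries in place, propagating at most `limit`
    # consecutive fills from each observed value.
    last, run = None, 0
    for i, v in enumerate(values):
        if v is None:
            if last is not None and (limit is None or run < limit):
                values[i] = last
                run += 1
        else:
            last, run = v, 0

data = [1, None, None, None, 4, None]
pad_inplace(data, limit=2)
# data is now [1, 1, 1, None, 4, 4]: only two fills propagate from 1.
```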
setitem
def setitem(self, indexer, value): """ Attempt self.values[indexer] = value, possibly creating a new array. This differs from Block.setitem by not allowing setitem to change the dtype of the Block. Parameters ---------- indexer : tuple, list-like, array-like, slice, int The subset of self.values to set value : object The value being set Returns ------- Block Notes ----- `indexer` is a direct slice/positional indexer. `value` must be a compatible shape. """ orig_indexer = indexer orig_value = value indexer = self._unwrap_setitem_indexer(indexer) value = self._maybe_squeeze_arg(value) values = self.values if values.ndim == 2: # TODO(GH#45419): string[pyarrow] tests break if we transpose # unconditionally values = values.T check_setitem_lengths(indexer, value, values) try: values[indexer] = value except (ValueError, TypeError): if isinstance(self.dtype, IntervalDtype): # see TestSetitemFloatIntervalWithIntIntervalValues nb = self.coerce_to_target_dtype(orig_value, raise_on_upcast=True) return nb.setitem(orig_indexer, orig_value) elif isinstance(self, NDArrayBackedExtensionBlock): nb = self.coerce_to_target_dtype(orig_value, raise_on_upcast=True) return nb.setitem(orig_indexer, orig_value) else: raise else: return self
Attempt self.values[indexer] = value, possibly creating a new array. This differs from Block.setitem by not allowing setitem to change the dtype of the Block. Parameters ---------- indexer : tuple, list-like, array-like, slice, int The subset of self.values to set value : object The value being set Returns ------- Block Notes ----- `indexer` is a direct slice/positional indexer. `value` must be a compatible shape.
python
pandas/core/internals/blocks.py
1,630
[ "self", "indexer", "value" ]
false
6
6.08
pandas-dev/pandas
47,362
numpy
false
analyzeCamelCaseWord
function analyzeCamelCaseWord(word: string): ICamelCaseAnalysis { let upper = 0, lower = 0, alpha = 0, numeric = 0, code = 0; for (let i = 0; i < word.length; i++) { code = word.charCodeAt(i); if (isUpper(code)) { upper++; } if (isLower(code)) { lower++; } if (isAlphanumeric(code)) { alpha++; } if (isNumber(code)) { numeric++; } } const upperPercent = upper / word.length; const lowerPercent = lower / word.length; const alphaPercent = alpha / word.length; const numericPercent = numeric / word.length; return { upperPercent, lowerPercent, alphaPercent, numericPercent }; }
Gets alternative codes to the character code passed in. This comes in the form of an array of character codes, all of which must match _in order_ to successfully match. @param code The character code to check.
typescript
src/vs/base/common/filters.ts
242
[ "word" ]
true
6
7.04
microsoft/vscode
179,840
jsdoc
false
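The character-class ratios that `analyzeCamelCaseWord` computes translate directly to Python (helper name hypothetical). Note that `str.isupper()`/`islower()` are Unicode-aware, while the original checks ASCII ranges, so results can differ for non-ASCII input:

```python
def analyze_word(word: str) -> dict:
    # Fraction of upper-case, lower-case, alphanumeric and numeric
    # characters in a word -- the same four ratios the filter computes.
    n = len(word)
    return {
        "upperPercent": sum(c.isupper() for c in word) / n,
        "lowerPercent": sum(c.islower() for c in word) / n,
        "alphaPercent": sum(c.isalnum() for c in word) / n,
        "numericPercent": sum(c.isdigit() for c in word) / n,
    }

stats = analyze_word("fooBar1")
```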
min
public static int min(int a, final int b, final int c) { if (b < a) { a = b; } if (c < a) { a = c; } return a; }
Gets the minimum of three {@code int} values. @param a value 1. @param b value 2. @param c value 3. @return the smallest of the values.
java
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
1,250
[ "a", "b", "c" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
convertLinkedEditInfoToRanges
function convertLinkedEditInfoToRanges(linkedEdit: LinkedEditingInfo, scriptInfo: ScriptInfo): protocol.LinkedEditingRangesBody { const ranges = linkedEdit.ranges.map( r => { return { start: scriptInfo.positionToLineOffset(r.start), end: scriptInfo.positionToLineOffset(r.start + r.length), }; }, ); if (!linkedEdit.wordPattern) return { ranges }; return { ranges, wordPattern: linkedEdit.wordPattern }; }
@param projects Projects initially known to contain {@link initialLocation} @param defaultProject The default project containing {@link initialLocation} @param initialLocation Where the search operation was triggered @param getResultsForPosition This is where you plug in `findReferences`, `renameLocation`, etc @param forPositionInResult Given an item returned by {@link getResultsForPosition} enumerate the positions referred to by that result @returns In the common case where there's only one project, returns an array of results from {@link getResultsForPosition}. If multiple projects were searched - even if they didn't return results - the result will be a map from project to per-project results.
typescript
src/server/session.ts
4,006
[ "linkedEdit", "scriptInfo" ]
true
2
7.12
microsoft/TypeScript
107,154
jsdoc
false
mean
def mean( self, numeric_only: bool = False, ): """ Compute mean of groups, excluding missing values. Parameters ---------- numeric_only : bool, default False Include only `float`, `int` or `boolean` data. .. versionchanged:: 2.0.0 numeric_only now defaults to ``False``. Returns ------- DataFrame or Series Mean of values within each group. See Also -------- core.resample.Resampler.median : Compute median of groups, excluding missing values. core.resample.Resampler.sum : Compute sum of groups, excluding missing values. core.resample.Resampler.std : Compute standard deviation of groups, excluding missing values. core.resample.Resampler.var : Compute variance of groups, excluding missing values. Examples -------- >>> ser = pd.Series( ... [1, 2, 3, 4], ... index=pd.DatetimeIndex( ... ["2023-01-01", "2023-01-15", "2023-02-01", "2023-02-15"] ... ), ... ) >>> ser 2023-01-01 1 2023-01-15 2 2023-02-01 3 2023-02-15 4 dtype: int64 >>> ser.resample("MS").mean() 2023-01-01 1.5 2023-02-01 3.5 Freq: MS, dtype: float64 """ return self._downsample("mean", numeric_only=numeric_only)
Compute mean of groups, excluding missing values. Parameters ---------- numeric_only : bool, default False Include only `float`, `int` or `boolean` data. .. versionchanged:: 2.0.0 numeric_only now defaults to ``False``. Returns ------- DataFrame or Series Mean of values within each group. See Also -------- core.resample.Resampler.median : Compute median of groups, excluding missing values. core.resample.Resampler.sum : Compute sum of groups, excluding missing values. core.resample.Resampler.std : Compute standard deviation of groups, excluding missing values. core.resample.Resampler.var : Compute variance of groups, excluding missing values. Examples -------- >>> ser = pd.Series( ... [1, 2, 3, 4], ... index=pd.DatetimeIndex( ... ["2023-01-01", "2023-01-15", "2023-02-01", "2023-02-15"] ... ), ... ) >>> ser 2023-01-01 1 2023-01-15 2 2023-02-01 3 2023-02-15 4 dtype: int64 >>> ser.resample("MS").mean() 2023-01-01 1.5 2023-02-01 3.5 Freq: MS, dtype: float64
python
pandas/core/resample.py
1,479
[ "self", "numeric_only" ]
true
1
6.64
pandas-dev/pandas
47,362
numpy
false
findThreadsByName
public static Collection<Thread> findThreadsByName(final String threadName, final String threadGroupName) { Objects.requireNonNull(threadName, "threadName"); Objects.requireNonNull(threadGroupName, "threadGroupName"); return Collections.unmodifiableCollection(findThreadGroups(predicateThreadGroup(threadGroupName)).stream() .flatMap(group -> findThreads(group, false, predicateThread(threadName)).stream()).collect(Collectors.toList())); }
Finds active threads with the specified name if they belong to a thread group with the specified group name.

@param threadName The thread name.
@param threadGroupName The thread group name.
@return The threads which belong to a thread group with the specified group name and whose name matches the specified thread name. An empty collection is returned if no such thread exists. The collection returned is always unmodifiable.
@throws NullPointerException if the specified thread name or group name is null.
@throws SecurityException if the current thread cannot access the system thread group.
@throws SecurityException if the current thread cannot modify thread groups from this thread's thread group up to the system thread group.
java
src/main/java/org/apache/commons/lang3/ThreadUtils.java
409
[ "threadName", "threadGroupName" ]
true
1
6.88
apache/commons-lang
2,896
javadoc
false
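A rough Python analogue of `findThreadsByName` can be built on `threading.enumerate()` (helper name hypothetical; Python has no thread groups, so that half of the Java filter is omitted):

```python
import threading

def find_threads_by_name(name):
    # Collect live threads whose name matches exactly.
    return [t for t in threading.enumerate() if t.name == name]

stop = threading.Event()
worker = threading.Thread(target=stop.wait, name="worker-1", daemon=True)
worker.start()
found = find_threads_by_name("worker-1")
stop.set()
worker.join()
```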
hashCode
@Override public final int hashCode() { // If we have at least 4 bytes (32 bits), just take the first 4 bytes. Since this is // already a (presumably) high-quality hash code, any four bytes of it will do. if (bits() >= 32) { return asInt(); } // If we have less than 4 bytes, use them all. byte[] bytes = getBytesInternal(); int val = bytes[0] & 0xFF; for (int i = 1; i < bytes.length; i++) { val |= (bytes[i] & 0xFF) << (i * 8); } return val; }
Returns a "Java hash code" for this {@code HashCode} instance; this is well-defined (so, for example, you can safely put {@code HashCode} instances into a {@code HashSet}) but is otherwise probably not what you want to use.
java
android/guava/src/com/google/common/hash/HashCode.java
383
[]
true
3
6.72
google/guava
51,352
javadoc
false
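The fallback loop in Guava's `hashCode()` packs the bytes little-endian, one byte per 8-bit slot. The same bit-twiddling in Python (helper name hypothetical), checked against the stdlib's `int.from_bytes`:

```python
def bytes_to_int_le(data: bytes) -> int:
    # Pack bytes little-endian -- the same `val |= (b & 0xFF) << (i * 8)`
    # fallback Guava uses when the hash is shorter than 4 bytes.
    val = data[0]
    for i in range(1, len(data)):
        val |= data[i] << (i * 8)
    return val
```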
convert_optional_dependencies_to_table
def convert_optional_dependencies_to_table( optional_dependencies: dict[str, list[str]], markdown: bool = True, ) -> str: """ Converts optional dependencies to a Markdown/RST table :param optional_dependencies: dict of optional dependencies :param markdown: if True, Markdown format is used else rst :return: formatted table """ import html from tabulate import tabulate headers = ["Extra", "Dependencies"] table_data = [] for extra_name, dependencies in optional_dependencies.items(): decoded_deps = [html.unescape(dep) for dep in dependencies] formatted_deps = ", ".join(f"`{dep}`" if markdown else f"``{dep}``" for dep in decoded_deps) extra_col = f"`{extra_name}`" if markdown else f"``{extra_name}``" table_data.append((extra_col, formatted_deps)) return tabulate(table_data, headers=headers, tablefmt="pipe" if markdown else "rst")
Converts optional dependencies to a Markdown/RST table :param optional_dependencies: dict of optional dependencies :param markdown: if True, Markdown format is used else rst :return: formatted table
python
dev/breeze/src/airflow_breeze/utils/packages.py
634
[ "optional_dependencies", "markdown" ]
str
true
5
7.12
apache/airflow
43,597
sphinx
false
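The Markdown branch of `convert_optional_dependencies_to_table` can be sketched without the `tabulate` dependency (helper name hypothetical; this reproduces the general pipe-table shape, not tabulate's exact column padding):

```python
def to_markdown_table(optional_dependencies):
    # Render the extras mapping as a Markdown pipe table: header row,
    # separator row, then one row per extra with backtick-quoted names.
    lines = ["| Extra | Dependencies |", "|---|---|"]
    for extra, deps in optional_dependencies.items():
        formatted = ", ".join(f"`{dep}`" for dep in deps)
        lines.append(f"| `{extra}` | {formatted} |")
    return "\n".join(lines)

table = to_markdown_table({"async": ["eventlet", "greenlet"]})
```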
writeIfNecessary
public void writeIfNecessary(ThrowingConsumer<List<String>> writer) { if (this.excludes.isEmpty()) { return; } List<String> lines = new ArrayList<>(); for (String exclude : this.excludes) { int lastSlash = exclude.lastIndexOf('/'); String jar = (lastSlash != -1) ? exclude.substring(lastSlash + 1) : exclude; lines.add("--exclude-config"); lines.add(Pattern.quote(jar)); lines.add("^/META-INF/native-image/.*"); } writer.accept(lines); }
Write the arguments file if it is necessary. @param writer consumer that should write the contents
java
loader/spring-boot-loader-tools/src/main/java/org/springframework/boot/loader/tools/NativeImageArgFile.java
54
[ "writer" ]
void
true
3
7.04
spring-projects/spring-boot
79,428
javadoc
false
addInline
public void addInline(String contentId, String inlineFilename, InputStreamSource inputStreamSource, String contentType) throws MessagingException { Assert.notNull(inputStreamSource, "InputStreamSource must not be null"); if (inputStreamSource instanceof Resource resource && resource.isOpen()) { throw new IllegalArgumentException( "Passed-in Resource contains an open stream: invalid argument. " + "JavaMail requires an InputStreamSource that creates a fresh stream for every call."); } DataSource dataSource = createDataSource(inputStreamSource, contentType, inlineFilename); addInline(contentId, inlineFilename, dataSource); }
Add an inline element to the MimeMessage, taking the content from an {@code org.springframework.core.InputStreamResource}, and specifying the inline fileName and content type explicitly. <p>You can determine the content type for any given filename via a Java Activation Framework's FileTypeMap, for example the one held by this helper. <p>Note that the InputStream returned by the InputStreamSource implementation needs to be a <i>fresh one on each call</i>, as JavaMail will invoke {@code getInputStream()} multiple times. <p><b>NOTE:</b> Invoke {@code addInline} <i>after</i> {@code setText}; else, mail readers might not be able to resolve inline references correctly. @param contentId the content ID to use. Will end up as "Content-ID" header in the body part, surrounded by angle brackets: for example, "myId" &rarr; "&lt;myId&gt;". Can be referenced in HTML source via src="cid:myId" expressions. @param inlineFilename the fileName to use for the inline element's part @param inputStreamSource the resource to take the content from @param contentType the content type to use for the element @throws MessagingException in case of errors @since 6.2 @see #setText @see #getFileTypeMap @see #addInline(String, org.springframework.core.io.Resource) @see #addInline(String, String, jakarta.activation.DataSource)
java
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
1,082
[ "contentId", "inlineFilename", "inputStreamSource", "contentType" ]
void
true
3
6.4
spring-projects/spring-framework
59,386
javadoc
false
getRootMimeMultipart
public final MimeMultipart getRootMimeMultipart() throws IllegalStateException { if (this.rootMimeMultipart == null) { throw new IllegalStateException("Not in multipart mode - " + "create an appropriate MimeMessageHelper via a constructor that takes a 'multipart' flag " + "if you need to set alternative texts or add inline elements or attachments."); } return this.rootMimeMultipart; }
Return the root MIME "multipart/mixed" object, if any. Can be used to manually add attachments. <p>This will be the direct content of the MimeMessage, in case of a multipart mail. @throws IllegalStateException if this helper is not in multipart mode @see #isMultipart @see #getMimeMessage @see jakarta.mail.internet.MimeMultipart#addBodyPart
java
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
391
[]
MimeMultipart
true
2
6.24
spring-projects/spring-framework
59,386
javadoc
false
getErrorTypeScore
function getErrorTypeScore(error: EngineValidationError): number { switch (error.kind) { case 'InvalidArgumentValue': case 'ValueTooLarge': return 20 case 'InvalidArgumentType': return 10 case 'RequiredArgumentMissing': return -10 default: return 0 } }
This function is invoked to determine the most relevant error based on its type. The specific numbers returned from this function do not really matter; it's only important how they compare relative to each other.

Current logic is:
- InvalidArgumentValue/ValueTooLarge is treated as the best possible error to display since when it is present we know that the field causing the error is defined on the schema and the provided value has the correct type; it's just that the value violates some other constraint.
- The next candidate is the `InvalidArgumentType` error. We know the field the user specified can exist in this spot; it's just that the provided value has an incorrect type.
- All other engine-side errors follow. At that point it's difficult to say which of them is more relevant, so we treat them equally. We might adjust this logic in the future.
- RequiredArgumentMissing is penalized because this error is often used to disambiguate union types, and what is required in one arm of the union might be fine to leave out in another.

@param error
@returns
typescript
packages/client/src/runtime/core/errorRendering/applyUnionError.ts
140
[ "error" ]
true
1
6.72
prisma/prisma
44,834
jsdoc
false
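The score-then-pick pattern behind `getErrorTypeScore` is a table lookup with a default, followed by a max over candidates. A Python sketch (names hypothetical, error kinds taken from the switch above):

```python
SCORES = {
    "InvalidArgumentValue": 20,
    "ValueTooLarge": 20,
    "InvalidArgumentType": 10,
    "RequiredArgumentMissing": -10,
}

def most_relevant(errors):
    # Highest score wins; unknown kinds default to 0, mirroring the
    # switch's default arm.
    return max(errors, key=lambda e: SCORES.get(e["kind"], 0))

best = most_relevant([
    {"kind": "RequiredArgumentMissing"},
    {"kind": "InvalidArgumentType"},
    {"kind": "SomeEngineError"},
])
```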
aot_load
def aot_load(so_path: str, device: str) -> Callable: """ Loads a shared library generated by aot_compile and returns a callable Args: so_path: Path to the shared library Returns: A callable """ aot_compile_warning() if device == "cpu": runner: AOTIModelContainerRunner = torch._C._aoti.AOTIModelContainerRunnerCpu(so_path, 1) elif device == "cuda" or device.startswith("cuda:"): runner = torch._C._aoti.AOTIModelContainerRunnerCuda(so_path, 1, device) elif device == "xpu" or device.startswith("xpu:"): runner = torch._C._aoti.AOTIModelContainerRunnerXpu(so_path, 1, device) elif device == "mps" or device.startswith("mps:"): runner = torch._C._aoti.AOTIModelContainerRunnerMps(so_path, 1) else: raise RuntimeError("Unsupported device " + device) def optimized(*args, **kwargs): call_spec = runner.get_call_spec() in_spec = pytree.treespec_loads(call_spec[0]) out_spec = pytree.treespec_loads(call_spec[1]) flat_inputs = pytree.tree_flatten((args, reorder_kwargs(kwargs, in_spec)))[0] flat_inputs = [x for x in flat_inputs if isinstance(x, torch.Tensor)] flat_outputs = runner.run(flat_inputs) return pytree.tree_unflatten(flat_outputs, out_spec) return optimized
Loads a shared library generated by aot_compile and returns a callable Args: so_path: Path to the shared library Returns: A callable
python
torch/_export/__init__.py
156
[ "so_path", "device" ]
Callable
true
9
7.12
pytorch/pytorch
96,034
google
false
immediateCancelledFuture
@SuppressWarnings("unchecked") // ImmediateCancelledFuture can work with any type public static <V extends @Nullable Object> ListenableFuture<V> immediateCancelledFuture() { ListenableFuture<Object> instance = ImmediateCancelledFuture.INSTANCE; if (instance != null) { return (ListenableFuture<V>) instance; } return new ImmediateCancelledFuture<>(); }
Creates a {@code ListenableFuture} which is cancelled immediately upon construction, so that {@code isCancelled()} always returns {@code true}. @since 14.0
java
android/guava/src/com/google/common/util/concurrent/Futures.java
174
[]
true
2
6.08
google/guava
51,352
javadoc
false
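The contract of `immediateCancelledFuture` -- `isCancelled()` is true from the caller's first observation -- maps onto Python's `concurrent.futures.Future`, which can be cancelled while still pending (helper name hypothetical; the shared-singleton optimization is omitted):

```python
from concurrent.futures import Future

def immediate_cancelled_future() -> Future:
    # A Future that is already cancelled when returned: cancel() succeeds
    # on a pending Future, so cancelled() and done() are True immediately.
    f = Future()
    f.cancel()
    return f

f = immediate_cancelled_future()
```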
size
public abstract long size();
Returns the number of points that have been added to this TDigest. @return The sum of the weights on all centroids.
java
libs/tdigest/src/main/java/org/elasticsearch/tdigest/TDigest.java
134
[]
true
1
6.64
elastic/elasticsearch
75,680
javadoc
false
estimateTotalCost
inline unsigned estimateTotalCost(const BinaryContext &BC, const PredicateTy &SkipPredicate, SchedulingPolicy &SchedPolicy) { if (SchedPolicy == SchedulingPolicy::SP_TRIVIAL) return BC.getBinaryFunctions().size(); unsigned TotalCost = 0; for (auto &BFI : BC.getBinaryFunctions()) { const BinaryFunction &BF = BFI.second; TotalCost += computeCostFor(BF, SkipPredicate, SchedPolicy); } // Switch to trivial scheduling if total estimated work is zero if (TotalCost == 0) { BC.outs() << "BOLT-WARNING: Running parallel work of 0 estimated cost, will " "switch to trivial scheduling.\n"; SchedPolicy = SP_TRIVIAL; TotalCost = BC.getBinaryFunctions().size(); } return TotalCost; }
A single thread pool that is used to run parallel tasks
cpp
bolt/lib/Core/ParallelUtilities.cpp
79
[]
true
3
6.56
llvm/llvm-project
36,021
doxygen
false
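The zero-cost safeguard in `estimateTotalCost` -- fall back to trivial scheduling when every function was skipped -- can be sketched in Python (helper name hypothetical):

```python
def estimate_total_cost(costs):
    # Sum per-function cost estimates; if everything was skipped and the
    # total is zero, fall back to trivial scheduling where each function
    # counts as one unit of work.
    total = sum(costs)
    if total == 0:
        return len(costs), "trivial"
    return total, "weighted"
```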
build
@Override public ImmutableMultiset<E> build() { requireNonNull(contents); // see the comment on the field if (contents.size() == 0) { return of(); } if (isLinkedHash) { // we need ObjectCountHashMap-backed contents, with its keys and values array in direct // insertion order contents = new ObjectCountHashMap<E>(contents); isLinkedHash = false; } buildInvoked = true; // contents is now ObjectCountHashMap, but still guaranteed to be in insertion order! return new RegularImmutableMultiset<E>(contents); }
Returns a newly-created {@code ImmutableMultiset} based on the contents of the {@code Builder}.
java
android/guava/src/com/google/common/collect/ImmutableMultiset.java
656
[]
true
3
6.24
google/guava
51,352
javadoc
false
parse
T parse(String text, Locale locale) throws ParseException;
Parse a text String to produce a T. @param text the text string @param locale the current user locale @return an instance of T @throws ParseException when a parse exception occurs in a java.text parsing library @throws IllegalArgumentException when a parse exception occurs
java
spring-context/src/main/java/org/springframework/format/Parser.java
40
[ "text", "locale" ]
T
true
1
6.32
spring-projects/spring-framework
59,386
javadoc
false
indexOf
public static int indexOf(final double[] array, final double valueToFind) { return indexOf(array, valueToFind, 0); }
Finds the index of the given value in the array. <p> This method returns {@link #INDEX_NOT_FOUND} ({@code -1}) for a {@code null} input array. </p> @param array the array to search for the object, may be {@code null}. @param valueToFind the value to find. @return the index of the value within the array, {@link #INDEX_NOT_FOUND} ({@code -1}) if not found or {@code null} array input.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
2,491
[ "array", "valueToFind" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
declareModuleSymbol
function declareModuleSymbol(node: ModuleDeclaration): ModuleInstanceState { const state = getModuleInstanceState(node); const instantiated = state !== ModuleInstanceState.NonInstantiated; declareSymbolAndAddToSymbolTable( node, instantiated ? SymbolFlags.ValueModule : SymbolFlags.NamespaceModule, instantiated ? SymbolFlags.ValueModuleExcludes : SymbolFlags.NamespaceModuleExcludes, ); return state; }
Declares a Symbol for the node and adds it to symbols. Reports errors for conflicting identifier names. @param symbolTable - The symbol table which node will be added to. @param parent - node's parent declaration. @param node - The declaration to be added to the symbol table @param includes - The SymbolFlags that node has in addition to its declaration type (eg: export, ambient, etc.) @param excludes - The flags which node cannot be declared alongside in a symbol table. Used to report forbidden declarations.
typescript
src/compiler/binder.ts
2,388
[ "node" ]
true
3
6.72
microsoft/TypeScript
107,154
jsdoc
false