Dataset schema:

column            dtype     range / classes
function_name     string    length 1 - 57
function_code     string    length 20 - 4.99k
documentation     string    length 50 - 2k
language          string    5 classes
file_path         string    length 8 - 166
line_number       int32     4 - 16.7k
parameters        list      length 0 - 20
return_type       string    length 0 - 131
has_type_hints    bool      2 classes
complexity        int32     1 - 51
quality_score     float32   6 - 9.68
repo_name         string    34 classes
repo_stars        int32     2.9k - 242k
docstring_style   string    7 classes
is_async          bool      2 classes
notEmpty
public static <T extends CharSequence> T notEmpty(final T chars) { return notEmpty(chars, DEFAULT_NOT_EMPTY_CHAR_SEQUENCE_EX_MESSAGE); }
<p>Validates that the specified argument character sequence is neither {@code null} nor a length of zero (no characters); otherwise throwing an exception with the specified message. <pre>Validate.notEmpty(myString);</pre> <p>The message in the exception is &quot;The validated character sequence is empty&quot;. @param <T> the character sequence type. @param chars the character sequence to check, validated not null by this method. @return the validated character sequence (never {@code null}, for method chaining). @throws NullPointerException if the character sequence is {@code null}. @throws IllegalArgumentException if the character sequence is empty. @see #notEmpty(CharSequence, String, Object...)
language: java
file_path: src/main/java/org/apache/commons/lang3/Validate.java
line_number: 868
parameters: [ "chars" ]
return_type: T
has_type_hints: true
complexity: 1
quality_score: 6.16
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
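The validation contract documented above (NullPointerException for null input, IllegalArgumentException for an empty one, and the argument returned unchanged for chaining) can be sketched in Python. `not_empty` and its default message are hypothetical stand-ins, not part of commons-lang:

```python
def not_empty(chars, message="The validated character sequence is empty"):
    """Sketch of the Validate.notEmpty contract: TypeError stands in for
    Java's NullPointerException, ValueError for IllegalArgumentException."""
    if chars is None:
        raise TypeError(message)
    if len(chars) == 0:
        raise ValueError(message)
    return chars  # returned unchanged so calls can be chained

s = not_empty("hello")  # "hello"
```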
join
public static String join(final List<?> list, final char separator, final int startIndex, final int endIndex) { if (list == null) { return null; } final int noOfItems = endIndex - startIndex; if (noOfItems <= 0) { return EMPTY; } final List<?> subList = list.subList(startIndex, endIndex); return join(subList.iterator(), separator); }
Joins the elements of the provided {@link List} into a single String containing the provided list of elements. <p> No delimiter is added before or after the list. Null objects or empty strings within the array are represented by empty strings. </p> <pre> StringUtils.join(null, *) = null StringUtils.join([], *) = "" StringUtils.join([null], *) = "" StringUtils.join(["a", "b", "c"], ';') = "a;b;c" StringUtils.join(["a", "b", "c"], null) = "abc" StringUtils.join([null, "", "a"], ';') = ";;a" </pre> @param list the {@link List} of values to join together, may be null. @param separator the separator character to use. @param startIndex the first index to start joining from. It is an error to pass in a start index past the end of the list. @param endIndex the index to stop joining from (exclusive). It is an error to pass in an end index past the end of the list. @return the joined String, {@code null} if null list input. @since 3.8
language: java
file_path: src/main/java/org/apache/commons/lang3/StringUtils.java
line_number: 4,363
parameters: [ "list", "separator", "startIndex", "endIndex" ]
return_type: String
has_type_hints: true
complexity: 3
quality_score: 8.08
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
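The null/empty behavior in the join javadoc above (null list yields null, an empty slice yields "", null elements render as empty strings) can be mirrored in a short Python sketch; `join_range` is an illustrative name, not the library's API:

```python
def join_range(items, separator, start_index, end_index):
    """Sketch of StringUtils.join(List, char, int, int): None list -> None,
    empty slice -> "", None elements represented as empty strings."""
    if items is None:
        return None
    if end_index - start_index <= 0:
        return ""
    sub = items[start_index:end_index]
    return separator.join("" if x is None else str(x) for x in sub)

join_range(["a", "b", "c"], ";", 0, 3)  # "a;b;c"
```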
equals
public boolean equals(Object other) { if (this == other) return true; if (other == null) return false; if (!other.getClass().equals(LegacyRecord.class)) return false; LegacyRecord record = (LegacyRecord) other; return this.buffer.equals(record.buffer); }
Get the underlying buffer backing this record instance. @return the buffer
language: java
file_path: clients/src/main/java/org/apache/kafka/common/record/LegacyRecord.java
line_number: 295
parameters: [ "other" ]
has_type_hints: true
complexity: 4
quality_score: 6.88
repo_name: apache/kafka
repo_stars: 31,560
docstring_style: javadoc
is_async: false
area
def area( self, x: Hashable | None = None, y: Hashable | None = None, stacked: bool = True, **kwargs, ) -> PlotAccessor: """ Draw a stacked area plot. An area plot displays quantitative data visually. This function wraps the matplotlib area function. Parameters ---------- x : label or position, optional Coordinates for the X axis. By default uses the index. y : label or position, optional Column to plot. By default uses all columns. stacked : bool, default True Area plots are stacked by default. Set to False to create a unstacked plot. **kwargs Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.axes.Axes or numpy.ndarray Area plot, or array of area plots if subplots is True. See Also -------- DataFrame.plot : Make plots of DataFrame using matplotlib. Examples -------- Draw an area plot based on basic business metrics: .. plot:: :context: close-figs >>> df = pd.DataFrame( ... { ... "sales": [3, 2, 3, 9, 10, 6], ... "signups": [5, 5, 6, 12, 14, 13], ... "visits": [20, 42, 28, 62, 81, 50], ... }, ... index=pd.date_range( ... start="2018/01/01", end="2018/07/01", freq="ME" ... ), ... ) >>> ax = df.plot.area() Area plots are stacked by default. To produce an unstacked plot, pass ``stacked=False``: .. plot:: :context: close-figs >>> ax = df.plot.area(stacked=False) Draw an area plot for a single column: .. plot:: :context: close-figs >>> ax = df.plot.area(y="sales") Draw with a different `x`: .. plot:: :context: close-figs >>> df = pd.DataFrame( ... { ... "sales": [3, 2, 3], ... "visits": [20, 42, 28], ... "day": [1, 2, 3], ... } ... ) >>> ax = df.plot.area(x="day") """ return self(kind="area", x=x, y=y, stacked=stacked, **kwargs)
Draw a stacked area plot. An area plot displays quantitative data visually. This function wraps the matplotlib area function. Parameters ---------- x : label or position, optional Coordinates for the X axis. By default uses the index. y : label or position, optional Column to plot. By default uses all columns. stacked : bool, default True Area plots are stacked by default. Set to False to create a unstacked plot. **kwargs Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.axes.Axes or numpy.ndarray Area plot, or array of area plots if subplots is True. See Also -------- DataFrame.plot : Make plots of DataFrame using matplotlib. Examples -------- Draw an area plot based on basic business metrics: .. plot:: :context: close-figs >>> df = pd.DataFrame( ... { ... "sales": [3, 2, 3, 9, 10, 6], ... "signups": [5, 5, 6, 12, 14, 13], ... "visits": [20, 42, 28, 62, 81, 50], ... }, ... index=pd.date_range( ... start="2018/01/01", end="2018/07/01", freq="ME" ... ), ... ) >>> ax = df.plot.area() Area plots are stacked by default. To produce an unstacked plot, pass ``stacked=False``: .. plot:: :context: close-figs >>> ax = df.plot.area(stacked=False) Draw an area plot for a single column: .. plot:: :context: close-figs >>> ax = df.plot.area(y="sales") Draw with a different `x`: .. plot:: :context: close-figs >>> df = pd.DataFrame( ... { ... "sales": [3, 2, 3], ... "visits": [20, 42, 28], ... "day": [1, 2, 3], ... } ... ) >>> ax = df.plot.area(x="day")
language: python
file_path: pandas/plotting/_core.py
line_number: 1,818
parameters: [ "self", "x", "y", "stacked" ]
return_type: PlotAccessor
has_type_hints: true
complexity: 1
quality_score: 7.2
repo_name: pandas-dev/pandas
repo_stars: 47,362
docstring_style: numpy
is_async: false
_init_dict
def _init_dict( self, data: Mapping, index: Index | None = None, dtype: DtypeObj | None = None ): """ Derive the "_mgr" and "index" attributes of a new Series from a dictionary input. Parameters ---------- data : dict or dict-like Data used to populate the new Series. index : Index or None, default None Index for the new Series: if None, use dict keys. dtype : np.dtype, ExtensionDtype, or None, default None The dtype for the new Series: if None, infer from data. Returns ------- _data : BlockManager for the new Series index : index for the new Series """ # Looking for NaN in dict doesn't work ({np.nan : 1}[float('nan')] # raises KeyError), so we iterate the entire dict, and align if data: # GH:34717, issue was using zip to extract key and values from data. # using generators in effects the performance. # Below is the new way of extracting the keys and values keys = maybe_sequence_to_range(tuple(data.keys())) values = list(data.values()) # Generating list of values- faster way elif index is not None: # fastpath for Series(data=None). Just use broadcasting a scalar # instead of reindexing. if len(index) or dtype is not None: values = na_value_for_dtype(pandas_dtype(dtype), compat=False) else: values = [] keys = index else: keys, values = default_index(0), [] # Input is now list-like, so rely on "standard" construction: s = Series(values, index=keys, dtype=dtype) # Now we just make sure the order is respected, if any if data and index is not None: s = s.reindex(index) return s._mgr, s.index
Derive the "_mgr" and "index" attributes of a new Series from a dictionary input. Parameters ---------- data : dict or dict-like Data used to populate the new Series. index : Index or None, default None Index for the new Series: if None, use dict keys. dtype : np.dtype, ExtensionDtype, or None, default None The dtype for the new Series: if None, infer from data. Returns ------- _data : BlockManager for the new Series index : index for the new Series
language: python
file_path: pandas/core/series.py
line_number: 525
parameters: [ "self", "data", "index", "dtype" ]
has_type_hints: true
complexity: 9
quality_score: 7.04
repo_name: pandas-dev/pandas
repo_stars: 47,362
docstring_style: numpy
is_async: false
get_previous_dagrun
def get_previous_dagrun( self, state: DagRunState | None = None, session: Session | None = None, ) -> DagRun | None: """ Return the DagRun that ran before this task instance's DagRun. :param state: If passed, it only take into account instances of a specific state. :param session: SQLAlchemy ORM Session. """ if TYPE_CHECKING: assert self.task assert session is not None dag = self.task.dag if dag is None: return None dr = self.get_dagrun(session=session) dr.dag = dag from airflow.models.dagrun import DagRun # Avoid circular import # We always ignore schedule in dagrun lookup when `state` is given # or the DAG is never scheduled. For legacy reasons, when # `catchup=True`, we use `get_previous_scheduled_dagrun` unless # `ignore_schedule` is `True`. ignore_schedule = state is not None or not dag.timetable.can_be_scheduled if dag.catchup is True and not ignore_schedule: last_dagrun = DagRun.get_previous_scheduled_dagrun(dr.id, session=session) else: last_dagrun = DagRun.get_previous_dagrun(dag_run=dr, session=session, state=state) if last_dagrun: return last_dagrun return None
Return the DagRun that ran before this task instance's DagRun. :param state: If passed, it only takes into account instances of a specific state. :param session: SQLAlchemy ORM Session.
language: python
file_path: airflow-core/src/airflow/models/taskinstance.py
line_number: 826
parameters: [ "self", "state", "session" ]
return_type: DagRun | None
has_type_hints: true
complexity: 8
quality_score: 7.04
repo_name: apache/airflow
repo_stars: 43,597
docstring_style: sphinx
is_async: false
notna
def notna(self) -> npt.NDArray[np.bool_]: """ Detect existing (non-missing) values. Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to ``True``. Characters such as empty strings ``''`` or :attr:`numpy.inf` are not considered NA values. NA values, such as None or :attr:`numpy.NaN`, get mapped to ``False`` values. Returns ------- numpy.ndarray[bool] Boolean array to indicate which entries are not NA. See Also -------- Index.notnull : Alias of notna. Index.isna: Inverse of notna. notna : Top-level notna. Examples -------- Show which entries in an Index are not NA. The result is an array. >>> idx = pd.Index([5.2, 6.0, np.nan]) >>> idx Index([5.2, 6.0, nan], dtype='float64') >>> idx.notna() array([ True, True, False]) Empty strings are not considered NA values. None is considered a NA value. >>> idx = pd.Index(["black", "", "red", None]) >>> idx Index(['black', '', 'red', None], dtype='object') >>> idx.notna() array([ True, True, True, False]) """ return ~self.isna()
Detect existing (non-missing) values. Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to ``True``. Characters such as empty strings ``''`` or :attr:`numpy.inf` are not considered NA values. NA values, such as None or :attr:`numpy.NaN`, get mapped to ``False`` values. Returns ------- numpy.ndarray[bool] Boolean array to indicate which entries are not NA. See Also -------- Index.notnull : Alias of notna. Index.isna: Inverse of notna. notna : Top-level notna. Examples -------- Show which entries in an Index are not NA. The result is an array. >>> idx = pd.Index([5.2, 6.0, np.nan]) >>> idx Index([5.2, 6.0, nan], dtype='float64') >>> idx.notna() array([ True, True, False]) Empty strings are not considered NA values. None is considered a NA value. >>> idx = pd.Index(["black", "", "red", None]) >>> idx Index(['black', '', 'red', None], dtype='object') >>> idx.notna() array([ True, True, True, False])
language: python
file_path: pandas/core/indexes/base.py
line_number: 2,699
parameters: [ "self" ]
return_type: npt.NDArray[np.bool_]
has_type_hints: true
complexity: 1
quality_score: 7.28
repo_name: pandas-dev/pandas
repo_stars: 47,362
docstring_style: unknown
is_async: false
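A minimal pure-Python sketch of the per-element semantics described above: None and float NaN count as NA, while empty strings and infinities do not. This covers only the cases in the docstring's examples; pandas' handling of NaT, pd.NA, and array dtypes is omitted:

```python
import math

def notna_value(x):
    """Per-element sketch of Index.notna: None and NaN are NA;
    '' and inf are not considered NA."""
    if x is None:
        return False
    if isinstance(x, float) and math.isnan(x):
        return False
    return True

[notna_value(v) for v in [5.2, 6.0, float("nan")]]  # [True, True, False]
```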
resolveResource
protected @Nullable Resource resolveResource(String filename) { for (String fileExtension : this.fileExtensions) { Resource resource = this.resourceLoader.getResource(filename + fileExtension); if (resource.exists()) { return resource; } } return null; }
Resolve the specified bundle {@code filename} into a concrete {@link Resource}, potentially checking multiple sources or file extensions. <p>If no suitable concrete {@code Resource} can be resolved, this method returns a {@code Resource} for which {@link Resource#exists()} returns {@code false}, which gets subsequently ignored. <p>This can be leveraged to check the last modification timestamp or to load properties from alternative sources &mdash; for example, from an XML BLOB in a database, or from properties serialized using a custom format such as JSON. <p>The default implementation delegates to the configured {@link #setResourceLoader(ResourceLoader) ResourceLoader} to resolve resources, checking in order for existing {@code Resource} with extensions defined by {@link #setFileExtensions(List)} ({@code .properties} and {@code .xml} by default). <p>When overriding this method, {@link #loadProperties(Resource, String)} <strong>must</strong> be capable of loading properties from any type of {@code Resource} returned by this method. As a consequence, implementors are strongly encouraged to also override {@code loadProperties()}. <p>As an alternative to overriding this method, you can configure a {@link #setPropertiesPersister(PropertiesPersister) PropertiesPersister} that is capable of dealing with all resources returned by this method. Please note, however, that the default {@code loadProperties()} implementation uses {@link PropertiesPersister#loadFromXml(Properties, InputStream) loadFromXml} for XML resources and otherwise uses the two {@link PropertiesPersister#load(Properties, InputStream) load} methods for other types of resources. @param filename the bundle filename (basename + Locale) @return the {@code Resource} to use, or {@code null} if none found @since 6.1
language: java
file_path: spring-context/src/main/java/org/springframework/context/support/ReloadableResourceBundleMessageSource.java
line_number: 542
parameters: [ "filename" ]
return_type: Resource
has_type_hints: true
complexity: 2
quality_score: 7.44
repo_name: spring-projects/spring-framework
repo_stars: 59,386
docstring_style: javadoc
is_async: false
close
public void close() { recordsBuilder.close(); if (!recordsBuilder.isControlBatch()) { CompressionRatioEstimator.updateEstimation(topicPartition.topic(), recordsBuilder.compression().type(), (float) recordsBuilder.compressionRatio()); } reopened = false; }
Release resources required for record appends (e.g. compression buffers). Once this method is called, it's only possible to update the RecordBatch header.
language: java
file_path: clients/src/main/java/org/apache/kafka/clients/producer/internals/ProducerBatch.java
line_number: 515
parameters: []
return_type: void
has_type_hints: true
complexity: 2
quality_score: 6.4
repo_name: apache/kafka
repo_stars: 31,560
docstring_style: javadoc
is_async: false
connection
def connection(self, hostname=None, userid=None, password=None, virtual_host=None, port=None, ssl=None, connect_timeout=None, transport=None, transport_options=None, heartbeat=None, login_method=None, failover_strategy=None, **kwargs): """Establish a connection to the message broker. Please use :meth:`connection_for_read` and :meth:`connection_for_write` instead, to convey the intent of use for this connection. Arguments: url: Either the URL or the hostname of the broker to use. hostname (str): URL, Hostname/IP-address of the broker. If a URL is used, then the other argument below will be taken from the URL instead. userid (str): Username to authenticate as. password (str): Password to authenticate with virtual_host (str): Virtual host to use (domain). port (int): Port to connect to. ssl (bool, Dict): Defaults to the :setting:`broker_use_ssl` setting. transport (str): defaults to the :setting:`broker_transport` setting. transport_options (Dict): Dictionary of transport specific options. heartbeat (int): AMQP Heartbeat in seconds (``pyamqp`` only). login_method (str): Custom login method to use (AMQP only). failover_strategy (str, Callable): Custom failover strategy. **kwargs: Additional arguments to :class:`kombu.Connection`. Returns: kombu.Connection: the lazy connection instance. """ return self.connection_for_write( hostname or self.conf.broker_write_url, userid=userid, password=password, virtual_host=virtual_host, port=port, ssl=ssl, connect_timeout=connect_timeout, transport=transport, transport_options=transport_options, heartbeat=heartbeat, login_method=login_method, failover_strategy=failover_strategy, **kwargs )
Establish a connection to the message broker. Please use :meth:`connection_for_read` and :meth:`connection_for_write` instead, to convey the intent of use for this connection. Arguments: url: Either the URL or the hostname of the broker to use. hostname (str): URL, Hostname/IP-address of the broker. If a URL is used, then the other argument below will be taken from the URL instead. userid (str): Username to authenticate as. password (str): Password to authenticate with virtual_host (str): Virtual host to use (domain). port (int): Port to connect to. ssl (bool, Dict): Defaults to the :setting:`broker_use_ssl` setting. transport (str): defaults to the :setting:`broker_transport` setting. transport_options (Dict): Dictionary of transport specific options. heartbeat (int): AMQP Heartbeat in seconds (``pyamqp`` only). login_method (str): Custom login method to use (AMQP only). failover_strategy (str, Callable): Custom failover strategy. **kwargs: Additional arguments to :class:`kombu.Connection`. Returns: kombu.Connection: the lazy connection instance.
language: python
file_path: celery/app/base.py
line_number: 977
parameters: [ "self", "hostname", "userid", "password", "virtual_host", "port", "ssl", "connect_timeout", "transport", "transport_options", "heartbeat", "login_method", "failover_strategy" ]
has_type_hints: false
complexity: 2
quality_score: 6.8
repo_name: celery/celery
repo_stars: 27,741
docstring_style: google
is_async: false
nextClearBit
public int nextClearBit(final int fromIndex) { return bitSet.nextClearBit(fromIndex); }
Returns the index of the first bit that is set to {@code false} that occurs on or after the specified starting index. @param fromIndex the index to start checking from (inclusive). @return the index of the next clear bit. @throws IndexOutOfBoundsException if the specified index is negative.
language: java
file_path: src/main/java/org/apache/commons/lang3/util/FluentBitSet.java
line_number: 314
parameters: [ "fromIndex" ]
has_type_hints: true
complexity: 1
quality_score: 6.64
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
compress
def compress(condition, a, axis=None, out=None): """ Return selected slices of an array along given axis. When working along a given axis, a slice along that axis is returned in `output` for each index where `condition` evaluates to True. When working on a 1-D array, `compress` is equivalent to `extract`. Parameters ---------- condition : 1-D array of bools Array that selects which entries to return. If len(condition) is less than the size of `a` along the given axis, then output is truncated to the length of the condition array. a : array_like Array from which to extract a part. axis : int, optional Axis along which to take slices. If None (default), work on the flattened array. out : ndarray, optional Output array. Its type is preserved and it must be of the right shape to hold the output. Returns ------- compressed_array : ndarray A copy of `a` without the slices along axis for which `condition` is false. See Also -------- take, choose, diag, diagonal, select ndarray.compress : Equivalent method in ndarray extract : Equivalent method when working on 1-D arrays :ref:`ufuncs-output-type` Examples -------- >>> import numpy as np >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a array([[1, 2], [3, 4], [5, 6]]) >>> np.compress([0, 1], a, axis=0) array([[3, 4]]) >>> np.compress([False, True, True], a, axis=0) array([[3, 4], [5, 6]]) >>> np.compress([False, True], a, axis=1) array([[2], [4], [6]]) Working on the flattened array does not return slices along an axis but selects elements. >>> np.compress([False, True], a) array([2]) """ return _wrapfunc(a, 'compress', condition, axis=axis, out=out)
Return selected slices of an array along given axis. When working along a given axis, a slice along that axis is returned in `output` for each index where `condition` evaluates to True. When working on a 1-D array, `compress` is equivalent to `extract`. Parameters ---------- condition : 1-D array of bools Array that selects which entries to return. If len(condition) is less than the size of `a` along the given axis, then output is truncated to the length of the condition array. a : array_like Array from which to extract a part. axis : int, optional Axis along which to take slices. If None (default), work on the flattened array. out : ndarray, optional Output array. Its type is preserved and it must be of the right shape to hold the output. Returns ------- compressed_array : ndarray A copy of `a` without the slices along axis for which `condition` is false. See Also -------- take, choose, diag, diagonal, select ndarray.compress : Equivalent method in ndarray extract : Equivalent method when working on 1-D arrays :ref:`ufuncs-output-type` Examples -------- >>> import numpy as np >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a array([[1, 2], [3, 4], [5, 6]]) >>> np.compress([0, 1], a, axis=0) array([[3, 4]]) >>> np.compress([False, True, True], a, axis=0) array([[3, 4], [5, 6]]) >>> np.compress([False, True], a, axis=1) array([[2], [4], [6]]) Working on the flattened array does not return slices along an axis but selects elements. >>> np.compress([False, True], a) array([2])
language: python
file_path: numpy/_core/fromnumeric.py
line_number: 2,138
parameters: [ "condition", "a", "axis", "out" ]
has_type_hints: false
complexity: 1
quality_score: 6.24
repo_name: numpy/numpy
repo_stars: 31,054
docstring_style: numpy
is_async: false
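The truncation rule in the compress docstring (a condition shorter than `a` truncates the output) can be sketched for the 1-D case in plain Python; `compress_1d` is a hypothetical helper, not NumPy's implementation, and unlike NumPy it also silently ignores a condition longer than `a`:

```python
def compress_1d(condition, a):
    """1-D sketch of np.compress: keep a[i] where condition[i] is truthy.
    zip() truncates to the shorter sequence, matching the documented
    truncation when condition is shorter than a."""
    return [x for c, x in zip(condition, a) if c]

compress_1d([False, True], [1, 2, 3])  # [2]
```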
calculateDeadlineMs
static long calculateDeadlineMs(final Time time, final Duration duration) { return calculateDeadlineMs(requireNonNull(time).milliseconds(), requireNonNull(duration).toMillis()); }
Calculate the deadline timestamp based on {@link Timer#currentTimeMs()} and {@link Duration#toMillis()}. @param time Time @param duration Duration @return Absolute time by which event should be completed
language: java
file_path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/events/CompletableEvent.java
line_number: 97
parameters: [ "time", "duration" ]
has_type_hints: true
complexity: 1
quality_score: 6.16
repo_name: apache/kafka
repo_stars: 31,560
docstring_style: javadoc
is_async: false
varHandleWithoutOffset
static VarHandle varHandleWithoutOffset(MemoryLayout layout, MemoryLayout.PathElement element) { return layout.varHandle(element); }
Return a {@link VarHandle} to access an element within the given memory segment. Note: This is no-op in Java 21, see the Java 22 implementation. @param layout The layout of a struct to access @param element The element within the struct to access @return A {@link VarHandle} that accesses the element with a fixed offset of 0
language: java
file_path: libs/native/src/main/java/org/elasticsearch/nativeaccess/jdk/MemorySegmentUtil.java
line_number: 43
parameters: [ "layout", "element" ]
return_type: VarHandle
has_type_hints: true
complexity: 1
quality_score: 6.48
repo_name: elastic/elasticsearch
repo_stars: 75,680
docstring_style: javadoc
is_async: false
maybeLeaveGroup
public synchronized RequestFuture<Void> maybeLeaveGroup(CloseOptions.GroupMembershipOperation membershipOperation, String leaveReason) { RequestFuture<Void> future = null; if (shouldSendLeaveGroupRequest(membershipOperation)) { log.info("Member {} sending LeaveGroup request to coordinator {} due to {}", generation.memberId, coordinator, leaveReason); LeaveGroupRequest.Builder request = new LeaveGroupRequest.Builder( rebalanceConfig.groupId, Collections.singletonList(new MemberIdentity().setMemberId(generation.memberId).setReason(JoinGroupRequest.maybeTruncateReason(leaveReason))) ); future = client.send(coordinator, request).compose(new LeaveGroupResponseHandler(generation)); client.pollNoWakeup(); } resetGenerationOnLeaveGroup(); return future; }
Sends LeaveGroupRequest and logs the {@code leaveReason}, unless this member is using static membership with the default consumer group membership operation, or is already not part of the group (i.e., does not have a valid member ID, is in the UNJOINED state, or the coordinator is unknown). @param membershipOperation the operation on consumer group membership that the consumer will perform when closing @param leaveReason the reason to leave the group for logging @throws KafkaException if the rebalance callback throws exception
language: java
file_path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
line_number: 1,170
parameters: [ "membershipOperation", "leaveReason" ]
has_type_hints: true
complexity: 2
quality_score: 6.56
repo_name: apache/kafka
repo_stars: 31,560
docstring_style: javadoc
is_async: false
splitByCharacterType
public static String[] splitByCharacterType(final String str) { return splitByCharacterType(str, false); }
Splits a String by Character type as returned by {@code java.lang.Character.getType(char)}. Groups of contiguous characters of the same type are returned as complete tokens. <pre> StringUtils.splitByCharacterType(null) = null StringUtils.splitByCharacterType("") = [] StringUtils.splitByCharacterType("ab de fg") = ["ab", " ", "de", " ", "fg"] StringUtils.splitByCharacterType("ab de fg") = ["ab", " ", "de", " ", "fg"] StringUtils.splitByCharacterType("ab:cd:ef") = ["ab", ":", "cd", ":", "ef"] StringUtils.splitByCharacterType("number5") = ["number", "5"] StringUtils.splitByCharacterType("fooBar") = ["foo", "B", "ar"] StringUtils.splitByCharacterType("foo200Bar") = ["foo", "200", "B", "ar"] StringUtils.splitByCharacterType("ASFRules") = ["ASFR", "ules"] </pre> @param str the String to split, may be {@code null}. @return an array of parsed Strings, {@code null} if null String input. @since 2.4
language: java
file_path: src/main/java/org/apache/commons/lang3/StringUtils.java
line_number: 7,151
parameters: [ "str" ]
has_type_hints: true
complexity: 1
quality_score: 6.16
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
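The character-type grouping documented above can be approximated in Python with unicodedata.category as a rough stand-in for java.lang.Character.getType; the two classifications agree on the javadoc's examples but are not identical in general, so this is a sketch, not a port:

```python
from itertools import groupby
from unicodedata import category

def split_by_character_type(s):
    """Group contiguous characters that share a Unicode general category,
    approximating StringUtils.splitByCharacterType. None -> None."""
    if s is None:
        return None
    return ["".join(g) for _, g in groupby(s, key=category)]

split_by_character_type("foo200Bar")  # ['foo', '200', 'B', 'ar']
```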
getOrder
@Override public int getOrder() { if (this.aspectInstance instanceof Ordered ordered) { return ordered.getOrder(); } return getOrderForAspectClass(this.aspectInstance.getClass()); }
Determine the order for this factory's aspect instance, either an instance-specific order expressed through implementing the {@link org.springframework.core.Ordered} interface, or a fallback order. @see org.springframework.core.Ordered @see #getOrderForAspectClass
language: java
file_path: spring-aop/src/main/java/org/springframework/aop/aspectj/SingletonAspectInstanceFactory.java
line_number: 70
parameters: []
has_type_hints: true
complexity: 2
quality_score: 6.24
repo_name: spring-projects/spring-framework
repo_stars: 59,386
docstring_style: javadoc
is_async: false
insert
public StrBuilder insert(final int index, final double value) { return insert(index, String.valueOf(value)); }
Inserts the value into this builder. @param index the index to add at, must be valid @param value the value to insert @return {@code this} instance. @throws IndexOutOfBoundsException if the index is invalid
language: java
file_path: src/main/java/org/apache/commons/lang3/text/StrBuilder.java
line_number: 2,196
parameters: [ "index", "value" ]
return_type: StrBuilder
has_type_hints: true
complexity: 1
quality_score: 6.48
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
fireFailure
private void fireFailure() { RuntimeException exception = exception(); while (true) { RequestFutureListener<T> listener = listeners.poll(); if (listener == null) break; listener.onFailure(exception); } }
Raise an error. The request will be marked as failed. @param error corresponding error to be passed to caller
language: java
file_path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
line_number: 173
parameters: []
return_type: void
has_type_hints: true
complexity: 3
quality_score: 6.88
repo_name: apache/kafka
repo_stars: 31,560
docstring_style: javadoc
is_async: false
contains
public boolean contains(final char ch) { return (ch >= start && ch <= end) != negated; }
Is the character specified contained in this range. @param ch the character to check. @return {@code true} if this range contains the input character.
language: java
file_path: src/main/java/org/apache/commons/lang3/CharRange.java
line_number: 247
parameters: [ "ch" ]
has_type_hints: true
complexity: 2
quality_score: 8.16
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
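The single-expression range test in CharRange.contains translates almost verbatim to Python, with the negation folded in via `!=` exactly as in the Java source; `char_range_contains` is an illustrative name only:

```python
def char_range_contains(ch, start, end, negated=False):
    """Sketch of CharRange.contains: one lexicographic comparison,
    with an optional negated range handled by the != trick."""
    return (start <= ch <= end) != negated

char_range_contains("c", "a", "f")                 # True
char_range_contains("c", "a", "f", negated=True)   # False
```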
_maybe_match_name
def _maybe_match_name(a, b): """ Try to find a name to attach to the result of an operation between a and b. If only one of these has a `name` attribute, return that name. Otherwise return a consensus name if they match or None if they have different names. Parameters ---------- a : object b : object Returns ------- name : str or None See Also -------- pandas.core.common.consensus_name_attr """ a_has = hasattr(a, "name") b_has = hasattr(b, "name") if a_has and b_has: try: if a.name == b.name: return a.name elif is_matching_na(a.name, b.name): # e.g. both are np.nan return a.name else: return None except TypeError: # pd.NA if is_matching_na(a.name, b.name): return a.name return None except ValueError: # e.g. np.int64(1) vs (np.int64(1), np.int64(2)) return None elif a_has: return a.name elif b_has: return b.name return None
Try to find a name to attach to the result of an operation between a and b. If only one of these has a `name` attribute, return that name. Otherwise return a consensus name if they match or None if they have different names. Parameters ---------- a : object b : object Returns ------- name : str or None See Also -------- pandas.core.common.consensus_name_attr
language: python
file_path: pandas/core/ops/common.py
line_number: 114
parameters: [ "a", "b" ]
has_type_hints: false
complexity: 9
quality_score: 6.4
repo_name: pandas-dev/pandas
repo_stars: 47,362
docstring_style: numpy
is_async: false
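The name-resolution rules above reduce to a short sketch once the NA-matching and unhashable-name branches are set aside; `Named` and `maybe_match_name` are illustrative only, not the pandas internals:

```python
class Named:
    """Minimal stand-in for an object carrying a .name attribute."""
    def __init__(self, name):
        self.name = name

def maybe_match_name(a, b):
    """Simplified sketch of pandas' _maybe_match_name: prefer the single
    available name, keep a matching pair, drop a conflicting pair."""
    a_has, b_has = hasattr(a, "name"), hasattr(b, "name")
    if a_has and b_has:
        return a.name if a.name == b.name else None
    if a_has:
        return a.name
    if b_has:
        return b.name
    return None

maybe_match_name(Named("x"), Named("x"))  # 'x'
```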
tryAcquire
public synchronized boolean tryAcquire() { prepareAcquire(); return acquirePermit(); }
Tries to acquire a permit from this semaphore. If the limit of this semaphore has not yet been reached, a permit is acquired, and this method returns <strong>true</strong>. Otherwise, this method returns immediately with the result <strong>false</strong>. @return <strong>true</strong> if a permit could be acquired; <strong>false</strong> otherwise. @throws IllegalStateException if this semaphore is already shut down. @since 3.5
language: java
file_path: src/main/java/org/apache/commons/lang3/concurrent/TimedSemaphore.java
line_number: 478
parameters: []
has_type_hints: true
complexity: 1
quality_score: 6.32
repo_name: apache/commons-lang
repo_stars: 2,896
docstring_style: javadoc
is_async: false
add
public void add(ConfigurationMetadataProperty property, ConfigurationMetadataSource source) { if (source != null) { source.getProperties().putIfAbsent(property.getId(), property); } getGroup(source).getProperties().putIfAbsent(property.getId(), property); }
Add a {@link ConfigurationMetadataProperty} with the {@link ConfigurationMetadataSource source} that defines it, if any. @param property the property to add @param source the source
language: java
file_path: configuration-metadata/spring-boot-configuration-metadata/src/main/java/org/springframework/boot/configurationmetadata/SimpleConfigurationMetadataRepository.java
line_number: 72
parameters: [ "property", "source" ]
return_type: void
has_type_hints: true
complexity: 2
quality_score: 6.08
repo_name: spring-projects/spring-boot
repo_stars: 79,428
docstring_style: javadoc
is_async: false
maybeUpdateAssignment
void maybeUpdateAssignment(SubscriptionState subscription) { int newAssignmentId = subscription.assignmentId(); if (this.assignmentId != newAssignmentId) { Set<TopicPartition> newAssignedPartitions = subscription.assignedPartitions(); for (TopicPartition tp : this.assignedPartitions) { if (!newAssignedPartitions.contains(tp)) { metrics.removeSensor(partitionRecordsLagMetricName(tp)); metrics.removeSensor(partitionRecordsLeadMetricName(tp)); metrics.removeMetric(partitionPreferredReadReplicaMetricName(tp)); // Remove deprecated metrics. metrics.removeSensor(deprecatedMetricName(partitionRecordsLagMetricName(tp))); metrics.removeSensor(deprecatedMetricName(partitionRecordsLeadMetricName(tp))); metrics.removeMetric(deprecatedPartitionPreferredReadReplicaMetricName(tp)); } } for (TopicPartition tp : newAssignedPartitions) { if (!this.assignedPartitions.contains(tp)) { maybeRecordDeprecatedPreferredReadReplica(tp, subscription); MetricName metricName = partitionPreferredReadReplicaMetricName(tp); metrics.addMetricIfAbsent( metricName, null, (Gauge<Integer>) (config, now) -> subscription.preferredReadReplica(tp, 0L).orElse(-1) ); } } this.assignedPartitions = newAssignedPartitions; this.assignmentId = newAssignmentId; } }
This method is called by the {@link Fetch fetch} logic before it requests fetches in order to update the internal set of metrics that are tracked. @param subscription {@link SubscriptionState} that contains the set of assigned partitions @see SubscriptionState#assignmentId()
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/FetchMetricsManager.java
164
[ "subscription" ]
void
true
4
6.4
apache/kafka
31,560
javadoc
false
get_query_results
def get_query_results( self, query_execution_id: str, next_token_id: str | None = None, max_results: int = 1000 ) -> dict | None: """ Fetch submitted query results. .. seealso:: - :external+boto3:py:meth:`Athena.Client.get_query_results` :param query_execution_id: Id of submitted athena query :param next_token_id: The token that specifies where to start pagination. :param max_results: The maximum number of results (rows) to return in this request. :return: *None* if the query is in intermediate, failed, or cancelled state. Otherwise a dict of query outputs. """ query_state = self.check_query_status(query_execution_id) if query_state is None: self.log.error("Invalid Query state. Query execution id: %s", query_execution_id) return None if query_state in self.INTERMEDIATE_STATES or query_state in self.FAILURE_STATES: self.log.error( 'Query is in "%s" state. Cannot fetch results. Query execution id: %s', query_state, query_execution_id, ) return None result_params = {"QueryExecutionId": query_execution_id, "MaxResults": max_results} if next_token_id: result_params["NextToken"] = next_token_id return self.get_conn().get_query_results(**result_params)
Fetch submitted query results. .. seealso:: - :external+boto3:py:meth:`Athena.Client.get_query_results` :param query_execution_id: Id of submitted athena query :param next_token_id: The token that specifies where to start pagination. :param max_results: The maximum number of results (rows) to return in this request. :return: *None* if the query is in intermediate, failed, or cancelled state. Otherwise a dict of query outputs.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/athena.py
196
[ "self", "query_execution_id", "next_token_id", "max_results" ]
dict | None
true
5
7.44
apache/airflow
43,597
sphinx
false
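The pagination contract described above (pass `NextToken` back to the service until it stops returning one) can be sketched without AWS access. `FakeAthenaClient` below is a made-up stub mimicking the response shape, not part of boto3 or the Airflow hook:

```python
def fetch_all_rows(client, query_execution_id, max_results=1000):
    """Drain paginated get_query_results-style responses into one list."""
    rows, next_token = [], None
    while True:
        params = {"QueryExecutionId": query_execution_id, "MaxResults": max_results}
        if next_token:
            params["NextToken"] = next_token
        page = client.get_query_results(**params)
        rows.extend(page["ResultSet"]["Rows"])
        next_token = page.get("NextToken")
        if not next_token:  # last page: the service omits the token
            return rows

class FakeAthenaClient:
    """Minimal stand-in that serves pre-baked pages with NextToken chaining."""
    def __init__(self, pages):
        self._pages = pages
    def get_query_results(self, QueryExecutionId, MaxResults, NextToken=None):
        idx = int(NextToken) if NextToken else 0
        page = {"ResultSet": {"Rows": self._pages[idx]}}
        if idx + 1 < len(self._pages):
            page["NextToken"] = str(idx + 1)
        return page
```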
addOrMergeGenericArgumentValue
private void addOrMergeGenericArgumentValue(ValueHolder newValue) { if (newValue.getName() != null) { for (Iterator<ValueHolder> it = this.genericArgumentValues.iterator(); it.hasNext();) { ValueHolder currentValue = it.next(); if (newValue.getName().equals(currentValue.getName())) { if (newValue.getValue() instanceof Mergeable mergeable) { if (mergeable.isMergeEnabled()) { newValue.setValue(mergeable.merge(currentValue.getValue())); } } it.remove(); } } } this.genericArgumentValues.add(newValue); }
Add a generic argument value, merging the new value (typically a collection) with the current value if demanded: see {@link org.springframework.beans.Mergeable}. @param newValue the argument value in the form of a ValueHolder
java
spring-beans/src/main/java/org/springframework/beans/factory/config/ConstructorArgumentValues.java
226
[ "newValue" ]
void
true
6
6.24
spring-projects/spring-framework
59,386
javadoc
false
toByteArray
default byte[] toByteArray(Charset charset) { Assert.notNull(charset, "'charset' must not be null"); try (ByteArrayOutputStream out = new ByteArrayOutputStream()) { toWriter(new OutputStreamWriter(out, charset)); return out.toByteArray(); } catch (IOException ex) { throw new UncheckedIOException(ex); } }
Write the JSON to a byte array. @param charset the charset @return the JSON bytes
java
core/spring-boot/src/main/java/org/springframework/boot/json/WritableJson.java
77
[ "charset" ]
true
2
7.92
spring-projects/spring-boot
79,428
javadoc
false
aggregate
def aggregate(self, func=None, axis: Axis = 0, *args, **kwargs): """ Aggregate using one or more operations over the specified axis. Parameters ---------- func : function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: - function - string function name - list of functions and/or function names, e.g. ``[np.sum, 'mean']`` - dict of axis labels -> functions, function names or list of such. axis : {0 or 'index'} Unused. Parameter needed for compatibility with DataFrame. *args Positional arguments to pass to `func`. **kwargs Keyword arguments to pass to `func`. Returns ------- scalar, Series or DataFrame The return can be: * scalar : when Series.agg is called with single function * Series : when DataFrame.agg is called with a single function * DataFrame : when DataFrame.agg is called with several functions See Also -------- Series.apply : Invoke function on a Series. Series.transform : Transform function producing a Series with like indexes. Notes ----- The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`, `var`), where the default is to compute the aggregation of the flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d, axis=0)``. `agg` is an alias for `aggregate`. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` for more details. A passed user-defined-function will be passed a Series for evaluation. If ``func`` defines an index relabeling, ``axis`` must be ``0`` or ``index``. 
Examples -------- >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.agg("min") 1 >>> s.agg(["min", "max"]) min 1 max 4 dtype: int64 """ # Validate the axis parameter self._get_axis_number(axis) # if func is None, will switch to user-provided "named aggregation" kwargs if func is None: func = dict(kwargs.items()) op = SeriesApply(self, func, args=args, kwargs=kwargs) result = op.agg() return result
Aggregate using one or more operations over the specified axis. Parameters ---------- func : function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply. Accepted combinations are: - function - string function name - list of functions and/or function names, e.g. ``[np.sum, 'mean']`` - dict of axis labels -> functions, function names or list of such. axis : {0 or 'index'} Unused. Parameter needed for compatibility with DataFrame. *args Positional arguments to pass to `func`. **kwargs Keyword arguments to pass to `func`. Returns ------- scalar, Series or DataFrame The return can be: * scalar : when Series.agg is called with single function * Series : when DataFrame.agg is called with a single function * DataFrame : when DataFrame.agg is called with several functions See Also -------- Series.apply : Invoke function on a Series. Series.transform : Transform function producing a Series with like indexes. Notes ----- The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`, `var`), where the default is to compute the aggregation of the flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d, axis=0)``. `agg` is an alias for `aggregate`. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` for more details. A passed user-defined-function will be passed a Series for evaluation. If ``func`` defines an index relabeling, ``axis`` must be ``0`` or ``index``. Examples -------- >>> s = pd.Series([1, 2, 3, 4]) >>> s 0 1 1 2 2 3 3 4 dtype: int64 >>> s.agg("min") 1 >>> s.agg(["min", "max"]) min 1 max 4 dtype: int64
python
pandas/core/series.py
4,626
[ "self", "func", "axis" ]
true
2
8.4
pandas-dev/pandas
47,362
numpy
false
equals
@Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; ClientQuotaFilter that = (ClientQuotaFilter) o; return Objects.equals(components, that.components) && Objects.equals(strict, that.strict); }
@return whether the filter is strict, i.e. only includes specified components
java
clients/src/main/java/org/apache/kafka/common/quota/ClientQuotaFilter.java
84
[ "o" ]
true
5
6.24
apache/kafka
31,560
javadoc
false
getFileTypeMap
protected final FileTypeMap getFileTypeMap() { if (this.fileTypeMap == null) { try { this.fileTypeMap = createFileTypeMap(this.mappingLocation, this.mappings); } catch (IOException ex) { throw new IllegalStateException( "Could not load specified MIME type mapping file: " + this.mappingLocation, ex); } } return this.fileTypeMap; }
Return the delegate FileTypeMap, compiled from the mappings in the mapping file and the entries in the {@code mappings} property. @see #setMappingLocation @see #setMappings @see #createFileTypeMap
java
spring-context-support/src/main/java/org/springframework/mail/javamail/ConfigurableMimeFileTypeMap.java
119
[]
FileTypeMap
true
3
6.24
spring-projects/spring-framework
59,386
javadoc
false
rindex
def rindex(a, sub, start=0, end=None): """ Like `rfind`, but raises :exc:`ValueError` when the substring `sub` is not found. Parameters ---------- a : array-like, with `np.bytes_` or `np.str_` dtype sub : array-like, with `np.bytes_` or `np.str_` dtype start, end : array-like, with any integer dtype, optional Returns ------- out : ndarray Output array of ints. See Also -------- rfind, str.rindex Examples -------- >>> a = np.array(["Computer Science"]) >>> np.strings.rindex(a, "Science", start=0, end=None) array([9]) """ end = end if end is not None else MAX return _rindex_ufunc(a, sub, start, end)
Like `rfind`, but raises :exc:`ValueError` when the substring `sub` is not found. Parameters ---------- a : array-like, with `np.bytes_` or `np.str_` dtype sub : array-like, with `np.bytes_` or `np.str_` dtype start, end : array-like, with any integer dtype, optional Returns ------- out : ndarray Output array of ints. See Also -------- rfind, str.rindex Examples -------- >>> a = np.array(["Computer Science"]) >>> np.strings.rindex(a, "Science", start=0, end=None) array([9])
python
numpy/_core/strings.py
371
[ "a", "sub", "start", "end" ]
false
2
7.36
numpy/numpy
31,054
numpy
false
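The `rfind`/`rindex` split mirrors Python's built-in strings: identical search, different not-found behaviour. A scalar sketch of that contract (plain `str` operations, not the NumPy ufunc):

```python
def rfind_scalar(a, sub, start=0, end=None):
    """Return the highest index of sub in a[start:end], or -1 when absent."""
    return a.rfind(sub, start, end)

def rindex_scalar(a, sub, start=0, end=None):
    """Same search, but raise ValueError when sub is absent (like rindex)."""
    i = rfind_scalar(a, sub, start, end)
    if i == -1:
        raise ValueError(f"substring {sub!r} not found")
    return i
```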
register
function register(specifier, parentURL = undefined, options) { if (parentURL != null && typeof parentURL === 'object' && !isURL(parentURL)) { options = parentURL; parentURL = options.parentURL; } getOrInitializeCascadedLoader().register( specifier, parentURL ?? 'data:', options?.data, options?.transferList, ); }
Register a single loader programmatically. @param {string|URL} specifier @param {string|URL} [parentURL] Base to use when resolving `specifier`; optional if `specifier` is absolute. Same as `options.parentUrl`, just inline @param {object} [options] Additional options to apply, described below. @param {string|URL} [options.parentURL] Base to use when resolving `specifier` @param {any} [options.data] Arbitrary data passed to the loader's `initialize` hook @param {any[]} [options.transferList] Objects in `data` that are changing ownership @returns {void} We want to reserve the return value for potential future extension of the API. @example ```js register('./myLoader.js'); register('ts-node/esm', { parentURL: import.meta.url }); register('./myLoader.js', { parentURL: import.meta.url }); register('ts-node/esm', import.meta.url); register('./myLoader.js', import.meta.url); register(new URL('./myLoader.js', import.meta.url)); register('./myLoader.js', { parentURL: import.meta.url, data: { banana: 'tasty' }, }); register('./myLoader.js', { parentURL: import.meta.url, data: someArrayBuffer, transferList: [someArrayBuffer], }); ```
javascript
lib/internal/modules/esm/loader.js
971
[ "specifier", "options" ]
false
4
8.08
nodejs/node
114,839
jsdoc
false
setAsText
@Override public void setAsText(@Nullable String text) throws IllegalArgumentException { if (this.allowEmpty && !StringUtils.hasLength(text)) { // Treat empty String as null value. setValue(null); } else if (text == null) { throw new IllegalArgumentException("null String cannot be converted to char type"); } else if (isUnicodeCharacterSequence(text)) { setAsUnicode(text); } else if (text.length() == 1) { setValue(text.charAt(0)); } else { throw new IllegalArgumentException("String [" + text + "] with length " + text.length() + " cannot be converted to char type: neither Unicode nor single character"); } }
Create a new CharacterEditor instance. <p>The "allowEmpty" parameter controls whether an empty String is to be allowed in parsing, i.e. be interpreted as the {@code null} value when {@link #setAsText(String) text is being converted}. If {@code false}, an {@link IllegalArgumentException} will be thrown at that time. @param allowEmpty if empty strings are to be allowed
java
spring-beans/src/main/java/org/springframework/beans/propertyeditors/CharacterEditor.java
74
[ "text" ]
void
true
6
6.88
spring-projects/spring-framework
59,386
javadoc
false
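The branch order above (empty → null when allowed, `\uXXXX` → decoded character, single character → itself, anything else → error) translates directly. This Python sketch is illustrative only, not Spring's implementation:

```python
UNICODE_PREFIX = "\\u"  # the two characters backslash + u

def parse_char(text, allow_empty=True):
    """Parse a string into a single character, mirroring the branches above."""
    if allow_empty and not text:
        return None  # treat empty input as "no value"
    if text is None:
        raise ValueError("null String cannot be converted to char")
    if text.startswith(UNICODE_PREFIX) and len(text) == 6:
        return chr(int(text[2:], 16))  # e.g. "\\u0041" -> "A"
    if len(text) == 1:
        return text
    raise ValueError(f"{text!r} cannot be converted to a single char")
```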
addAll
public static boolean[] addAll(final boolean[] array1, final boolean... array2) { if (array1 == null) { return clone(array2); } if (array2 == null) { return clone(array1); } final boolean[] joinedArray = new boolean[array1.length + array2.length]; System.arraycopy(array1, 0, joinedArray, 0, array1.length); System.arraycopy(array2, 0, joinedArray, array1.length, array2.length); return joinedArray; }
Adds all the elements of the given arrays into a new array. <p> The new array contains all of the element of {@code array1} followed by all of the elements {@code array2}. When an array is returned, it is always a new array. </p> <pre> ArrayUtils.addAll(array1, null) = cloned copy of array1 ArrayUtils.addAll(null, array2) = cloned copy of array2 ArrayUtils.addAll([], []) = [] ArrayUtils.addAll(null, null) = null </pre> @param array1 the first array whose elements are added to the new array. @param array2 the second array whose elements are added to the new array. @return The new boolean[] array or {@code null}. @since 2.1
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
855
[ "array1" ]
true
3
7.92
apache/commons-lang
2,896
javadoc
false
shuffle
function shuffle(collection) { var func = isArray(collection) ? arrayShuffle : baseShuffle; return func(collection); }
Creates an array of shuffled values, using a version of the [Fisher-Yates shuffle](https://en.wikipedia.org/wiki/Fisher-Yates_shuffle). @static @memberOf _ @since 0.1.0 @category Collection @param {Array|Object} collection The collection to shuffle. @returns {Array} Returns the new shuffled array. @example _.shuffle([1, 2, 3, 4]); // => [4, 1, 3, 2]
javascript
lodash.js
9,923
[ "collection" ]
false
2
6.96
lodash/lodash
61,490
jsdoc
false
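The lodash wrapper only dispatches to an array or generic implementation; the Fisher-Yates core it delegates to is short. The same algorithm in Python (lodash is JavaScript; this is just the technique):

```python
import random

def fisher_yates_shuffle(values, rng=random):
    """Return a new list with the elements of values in random order."""
    result = list(values)  # copy: the input is left untouched
    for i in range(len(result) - 1, 0, -1):
        j = rng.randrange(i + 1)  # pick from the not-yet-fixed prefix
        result[i], result[j] = result[j], result[i]
    return result
```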
memberEpoch
public int memberEpoch() { return memberEpoch; }
@return Current epoch of the member, maintained by the server.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractMembershipManager.java
276
[]
true
1
6.96
apache/kafka
31,560
javadoc
false
createCloudflareUrl
function createCloudflareUrl(path: string, config: ImageLoaderConfig) { let params = `format=auto`; if (config.width) { params += `,width=${config.width}`; } // When requesting a placeholder image we ask for a low quality image to reduce the load time. if (config.isPlaceholder) { params += `,quality=${PLACEHOLDER_QUALITY}`; } // Cloudflare image URLs format: // https://developers.cloudflare.com/images/image-resizing/url-format/ return `${path}/cdn-cgi/image/${params}/${config.src}`; }
Function that generates an ImageLoader for [Cloudflare Image Resizing](https://developers.cloudflare.com/images/image-resizing/) and turns it into an Angular provider. Note: Cloudflare has multiple image products - this provider is specifically for Cloudflare Image Resizing; it will not work with Cloudflare Images or Cloudflare Polish. @param path Your domain name, e.g. https://mysite.com @returns Provider that provides an ImageLoader function @publicApi
typescript
packages/common/src/directives/ng_optimized_image/image_loaders/cloudflare_loader.ts
29
[ "path", "config" ]
false
3
6.8
angular/angular
99,544
jsdoc
false
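The URL assembly — comma-joined parameters inside a `/cdn-cgi/image/...` path segment — is easy to mirror. A Python sketch of the same string building; `PLACEHOLDER_QUALITY = 20` is an assumed value, the real constant lives elsewhere in the Angular package:

```python
PLACEHOLDER_QUALITY = 20  # assumed; stands in for Angular's shared constant

def cloudflare_url(path, src, width=None, is_placeholder=False):
    """Build a Cloudflare Image Resizing URL: path/cdn-cgi/image/<params>/<src>."""
    params = ["format=auto"]
    if width:
        params.append(f"width={width}")
    if is_placeholder:
        params.append(f"quality={PLACEHOLDER_QUALITY}")
    return f"{path}/cdn-cgi/image/{','.join(params)}/{src}"
```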
getParametersOfDecoratedDeclaration
function getParametersOfDecoratedDeclaration(node: SignatureDeclaration, container: ClassLikeDeclaration) { if (container && node.kind === SyntaxKind.GetAccessor) { const { setAccessor } = getAllAccessorDeclarations(container.members, node as AccessorDeclaration); if (setAccessor) { return setAccessor.parameters; } } return node.parameters; }
Serializes the type of a node for use with decorator type metadata. @param node The node that should have its type serialized.
typescript
src/compiler/transformers/typeSerializer.ts
228
[ "node", "container" ]
false
4
6.24
microsoft/TypeScript
107,154
jsdoc
false
indexOf
private static int indexOf(final Throwable throwable, final Class<? extends Throwable> type, int fromIndex, final boolean subclass) { if (throwable == null || type == null) { return NOT_FOUND; } if (fromIndex < 0) { fromIndex = 0; } final Throwable[] throwables = getThrowables(throwable); if (fromIndex >= throwables.length) { return NOT_FOUND; } if (subclass) { for (int i = fromIndex; i < throwables.length; i++) { if (type.isAssignableFrom(throwables[i].getClass())) { return i; } } } else { for (int i = fromIndex; i < throwables.length; i++) { if (type.equals(throwables[i].getClass())) { return i; } } } return NOT_FOUND; }
Worker method for the {@code indexOfType} methods. @param throwable the throwable to inspect, may be null. @param type the type to search for, subclasses match, null returns -1. @param fromIndex the (zero-based) index of the starting position, negative treated as zero, larger than chain size returns -1. @param subclass if {@code true}, compares with {@link Class#isAssignableFrom(Class)}, otherwise compares using references. @return index of the {@code type} within throwables nested within the specified {@code throwable}.
java
src/main/java/org/apache/commons/lang3/exception/ExceptionUtils.java
575
[ "throwable", "type", "fromIndex", "subclass" ]
true
10
7.6
apache/commons-lang
2,896
javadoc
false
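Python chains exceptions through `__cause__`/`__context__` rather than `getCause()`, but the same flatten-then-scan-from-index approach works, including the subclass-vs-exact-type switch. A sketch, not commons-lang itself:

```python
def exception_chain(exc):
    """Flatten an exception and its causes into a list, outermost first."""
    chain = []
    while exc is not None and exc not in chain:  # guard against cycles
        chain.append(exc)
        exc = exc.__cause__ or exc.__context__
    return chain

def index_of_type(exc, exc_type, from_index=0, subclass=True):
    """Index of the first matching exception in the chain, or -1."""
    chain = exception_chain(exc)
    for i in range(max(from_index, 0), len(chain)):
        match = isinstance(chain[i], exc_type) if subclass \
            else type(chain[i]) is exc_type
        if match:
            return i
    return -1
```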
describeConsumerGroups
DescribeConsumerGroupsResult describeConsumerGroups(Collection<String> groupIds, DescribeConsumerGroupsOptions options);
Describe some consumer groups in the cluster. @param groupIds The IDs of the groups to describe. @param options The options to use when describing the groups. @return The DescribeConsumerGroupsResult.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
864
[ "groupIds", "options" ]
DescribeConsumerGroupsResult
true
1
6.48
apache/kafka
31,560
javadoc
false
buildRootMap
function buildRootMap(roots: any[], nodes: any[]): Map<any, any[]> { const rootMap = new Map<any, any[]>(); roots.forEach((root) => rootMap.set(root, [])); if (nodes.length == 0) return rootMap; const NULL_NODE = 1; const nodeSet = new Set(nodes); const localRootMap = new Map<any, any>(); function getRoot(node: any): any { if (!node) return NULL_NODE; let root = localRootMap.get(node); if (root) return root; const parent = node.parentNode; if (rootMap.has(parent)) { // ngIf inside @trigger root = parent; } else if (nodeSet.has(parent)) { // ngIf inside ngIf root = NULL_NODE; } else { // recurse upwards root = getRoot(parent); } localRootMap.set(node, root); return root; } nodes.forEach((node) => { const root = getRoot(node); if (root !== NULL_NODE) { rootMap.get(root)!.push(node); } }); return rootMap; }
@license Copyright Google LLC All Rights Reserved. Use of this source code is governed by an MIT-style license that can be found in the LICENSE file at https://angular.dev/license
typescript
packages/animations/browser/src/render/transition_animation_engine.ts
1,840
[ "roots", "nodes" ]
true
9
6
angular/angular
99,544
jsdoc
false
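The interesting part of `buildRootMap` is the memoised upward walk: each node's resolved root is cached, and nodes sitting under another tracked node resolve to a sentinel and are dropped. The same idea on a plain parent-pointer dict (hypothetical data, not the DOM):

```python
NULL_NODE = object()  # sentinel: "owned by another tracked node"

def build_root_map(roots, nodes, parent_of):
    """Group each node under the root whose subtree contains it."""
    root_map = {root: [] for root in roots}
    node_set = set(nodes)
    memo = {}

    def get_root(node):
        if node is None:
            return NULL_NODE
        if node in memo:
            return memo[node]
        parent = parent_of.get(node)
        if parent in root_map:        # directly (or transitively) under a root
            root = parent
        elif parent in node_set:      # nested under another tracked node
            root = NULL_NODE
        else:
            root = get_root(parent)   # recurse upwards
        memo[node] = root
        return root

    for node in nodes:
        root = get_root(node)
        if root is not NULL_NODE:
            root_map[root].append(node)
    return root_map
```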
uncapitalize
public static String uncapitalize(final String str, final char... delimiters) { final int delimLen = delimiters == null ? -1 : delimiters.length; if (StringUtils.isEmpty(str) || delimLen == 0) { return str; } final char[] buffer = str.toCharArray(); boolean uncapitalizeNext = true; for (int i = 0; i < buffer.length; i++) { final char ch = buffer[i]; if (isDelimiter(ch, delimiters)) { uncapitalizeNext = true; } else if (uncapitalizeNext) { buffer[i] = Character.toLowerCase(ch); uncapitalizeNext = false; } } return new String(buffer); }
Uncapitalizes all the whitespace separated words in a String. Only the first character of each word is changed. <p>The delimiters represent a set of characters understood to separate words. The first string character and the first non-delimiter character after a delimiter will be uncapitalized.</p> <p>Whitespace is defined by {@link Character#isWhitespace(char)}. A {@code null} input String returns {@code null}.</p> <pre> WordUtils.uncapitalize(null, *) = null WordUtils.uncapitalize("", *) = "" WordUtils.uncapitalize(*, null) = * WordUtils.uncapitalize(*, new char[0]) = * WordUtils.uncapitalize("I AM.FINE", {'.'}) = "i AM.fINE" </pre> @param str the String to uncapitalize, may be null. @param delimiters set of characters to determine uncapitalization, null means whitespace. @return uncapitalized String, {@code null} if null String input. @see #capitalize(String) @since 2.1
java
src/main/java/org/apache/commons/lang3/text/WordUtils.java
391
[ "str" ]
String
true
7
7.44
apache/commons-lang
2,896
javadoc
false
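The scan lowercases the first character after each delimiter and leaves everything else alone. A Python transliteration of the same loop (the original above is Java), reproducing the javadoc's own example:

```python
def uncapitalize(s, delimiters=None):
    """Lowercase the first character of each delimiter-separated word."""
    if not s or (delimiters is not None and len(delimiters) == 0):
        return s
    is_delim = (lambda c: c in delimiters) if delimiters is not None \
        else str.isspace
    out, uncap_next = [], True
    for ch in s:
        if is_delim(ch):
            uncap_next = True
            out.append(ch)
        elif uncap_next:
            out.append(ch.lower())
            uncap_next = False
        else:
            out.append(ch)
    return "".join(out)
```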
opt
public Object opt(int index) { if (index < 0 || index >= this.values.size()) { return null; } return this.values.get(index); }
Returns the value at {@code index}, or null if the array has no value at {@code index}. @param index the index to get the value from @return the value at {@code index} or {@code null}
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONArray.java
298
[ "index" ]
Object
true
3
8.24
spring-projects/spring-boot
79,428
javadoc
false
is_ignored_output
def is_ignored_output(output: str) -> int: """ Check if the output matches any ignore pattern. Args: output: The combined stdout/stderr string. Returns: Index of the matched ignore pattern, or -1 if none matched. """ for idx, pattern in enumerate(IGNORE_PATTERNS): if pattern.search(output): return idx return -1
Check if the output matches any ignore pattern. Args: output: The combined stdout/stderr string. Returns: Index of the matched ignore pattern, or -1 if none matched.
python
tools/experimental/torchfuzz/multi_process_fuzzer.py
67
[ "output" ]
int
true
3
8.08
pytorch/pytorch
96,034
google
false
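Returning the index of the first matching pattern (rather than a bool) lets callers report which rule fired. A self-contained sketch with made-up patterns — the fuzzer's real `IGNORE_PATTERNS` list is module-level configuration defined elsewhere:

```python
import re

# Hypothetical ignore rules standing in for the real module-level list.
IGNORE_PATTERNS = [
    re.compile(r"out of memory", re.IGNORECASE),
    re.compile(r"timeout after \d+s"),
]

def is_ignored_output(output):
    """Return the index of the first matching ignore pattern, or -1."""
    for idx, pattern in enumerate(IGNORE_PATTERNS):
        if pattern.search(output):
            return idx
    return -1
```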
negate
public static MethodMatcher negate(MethodMatcher methodMatcher) { Assert.notNull(methodMatcher, "MethodMatcher must not be null"); return new NegateMethodMatcher(methodMatcher); }
Return a method matcher that represents the logical negation of the specified matcher instance. @param methodMatcher the {@link MethodMatcher} to negate @return a matcher that represents the logical negation of the specified matcher @since 6.1
java
spring-aop/src/main/java/org/springframework/aop/support/MethodMatchers.java
93
[ "methodMatcher" ]
MethodMatcher
true
1
6.32
spring-projects/spring-framework
59,386
javadoc
false
addListener
public void addListener(RequestFutureListener<T> listener) { this.listeners.add(listener); if (failed()) fireFailure(); else if (succeeded()) fireSuccess(); }
Add a listener which will be notified when the future completes @param listener non-null listener to add
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
187
[ "listener" ]
void
true
3
6.4
apache/kafka
31,560
javadoc
false
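The key detail in `addListener` is firing immediately when the future has already completed, so late subscribers never miss the event. A minimal Python analogue of that pattern (not Kafka's class):

```python
class RequestFuture:
    """Tiny completion future that replays the result to late listeners."""
    def __init__(self):
        self._result = None
        self._state = "pending"   # "pending" | "succeeded" | "failed"
        self._listeners = []

    def add_listener(self, on_success, on_failure):
        self._listeners.append((on_success, on_failure))
        if self._state == "succeeded":
            on_success(self._result)   # fire immediately for late arrivals
        elif self._state == "failed":
            on_failure(self._result)

    def complete(self, value):
        self._state, self._result = "succeeded", value
        for on_success, _ in self._listeners:
            on_success(value)

    def fail(self, err):
        self._state, self._result = "failed", err
        for _, on_failure in self._listeners:
            on_failure(err)
```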
getLastElementOnPage
public int getLastElementOnPage() { int endIndex = getPageSize() * (getPage() + 1); int size = getNrOfElements(); return (endIndex > size ? size : endIndex) - 1; }
Return the element index of the last element on the current page. Element numbering starts with 0.
java
spring-beans/src/main/java/org/springframework/beans/support/PagedListHolder.java
273
[]
true
2
6.88
spring-projects/spring-framework
59,386
javadoc
false
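The arithmetic clamps the exclusive end index to the collection size, then converts it to a zero-based inclusive index, which handles a partial last page. The same calculation in Python:

```python
def last_element_on_page(page, page_size, total):
    """Zero-based index of the last element on the given zero-based page."""
    end_index = page_size * (page + 1)  # exclusive end if the page were full
    return min(end_index, total) - 1    # clamp to real size, make inclusive
```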
fftfreq
def fftfreq(n, d=1.0, device=None): """ Return the Discrete Fourier Transform sample frequencies. The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length `n` and a sample spacing `d`:: f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd Parameters ---------- n : int Window length. d : scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. device : str, optional The device on which to place the created array. Default: ``None``. For Array-API interoperability only, so must be ``"cpu"`` if passed. .. versionadded:: 2.0.0 Returns ------- f : ndarray Array of length `n` containing the sample frequencies. Examples -------- >>> import numpy as np >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float64) >>> fourier = np.fft.fft(signal) >>> n = signal.size >>> timestep = 0.1 >>> freq = np.fft.fftfreq(n, d=timestep) >>> freq array([ 0. , 1.25, 2.5 , ..., -3.75, -2.5 , -1.25]) """ if not isinstance(n, integer_types): raise ValueError("n should be an integer") val = 1.0 / (n * d) results = empty(n, int, device=device) N = (n - 1) // 2 + 1 p1 = arange(0, N, dtype=int, device=device) results[:N] = p1 p2 = arange(-(n // 2), 0, dtype=int, device=device) results[N:] = p2 return results * val
Return the Discrete Fourier Transform sample frequencies. The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length `n` and a sample spacing `d`:: f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd Parameters ---------- n : int Window length. d : scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. device : str, optional The device on which to place the created array. Default: ``None``. For Array-API interoperability only, so must be ``"cpu"`` if passed. .. versionadded:: 2.0.0 Returns ------- f : ndarray Array of length `n` containing the sample frequencies. Examples -------- >>> import numpy as np >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float64) >>> fourier = np.fft.fft(signal) >>> n = signal.size >>> timestep = 0.1 >>> freq = np.fft.fftfreq(n, d=timestep) >>> freq array([ 0. , 1.25, 2.5 , ..., -3.75, -2.5 , -1.25])
python
numpy/fft/_helper.py
126
[ "n", "d", "device" ]
false
2
7.68
numpy/numpy
31,054
numpy
false
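The frequency layout — non-negative bins first, then the negative half — is pure index arithmetic on `n` and `d`. A NumPy-free sketch returning plain floats, following the formula in the docstring:

```python
def fftfreq_list(n, d=1.0):
    """Sample frequencies for an n-point DFT with spacing d, as a list."""
    if not isinstance(n, int):
        raise ValueError("n should be an integer")
    val = 1.0 / (n * d)
    pos = list(range(0, (n - 1) // 2 + 1))   # 0 .. ceil(n/2)-1
    neg = list(range(-(n // 2), 0))          # -floor(n/2) .. -1
    return [k * val for k in pos + neg]
```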
appendExportsOfImportDeclaration
function appendExportsOfImportDeclaration(statements: Statement[] | undefined, decl: ImportDeclaration): Statement[] | undefined { if (currentModuleInfo.exportEquals) { return statements; } const importClause = decl.importClause; if (!importClause) { return statements; } const seen = new IdentifierNameMap<boolean>(); if (importClause.name) { statements = appendExportsOfDeclaration(statements, seen, importClause); } const namedBindings = importClause.namedBindings; if (namedBindings) { switch (namedBindings.kind) { case SyntaxKind.NamespaceImport: statements = appendExportsOfDeclaration(statements, seen, namedBindings); break; case SyntaxKind.NamedImports: for (const importBinding of namedBindings.elements) { statements = appendExportsOfDeclaration(statements, seen, importBinding, /*liveBinding*/ true); } break; } } return statements; }
Appends the exports of an ImportDeclaration to a statement list, returning the statement list. @param statements A statement list to which the down-level export statements are to be appended. If `statements` is `undefined`, a new array is allocated if statements are appended. @param decl The declaration whose exports are to be recorded.
typescript
src/compiler/transformers/module/module.ts
1,960
[ "statements", "decl" ]
true
5
6.72
microsoft/TypeScript
107,154
jsdoc
false
contributesPair
boolean contributesPair() { return this.contributesPair; }
Return if any of the members contributes a name/value pair to the JSON. @return if a name/value pair is contributed
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
352
[]
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
getNextDataPath
function getNextDataPath(fluentPropName?: string, prevDataPath?: string[]) { if (fluentPropName === undefined || prevDataPath === undefined) return [] return [...prevDataPath, 'select', fluentPropName] }
The fluent API makes that nested relations can be retrieved at once. It's a helper for writing `select` statements on relations with a chaining api. Because of this, we automatically add `select` statements to the query, that also means that we need to provide a `dataPath` for unpacking nested values. @see {getNextUserArgs} @param dmmfModelName @param prevDataPath @returns
typescript
packages/client/src/runtime/core/model/applyFluent.ts
20
[ "fluentPropName?", "prevDataPath?" ]
false
3
7.04
prisma/prisma
44,834
jsdoc
false
functions
function functions(object) { return object == null ? [] : baseFunctions(object, keys(object)); }
Creates an array of function property names from own enumerable properties of `object`. @static @since 0.1.0 @memberOf _ @category Object @param {Object} object The object to inspect. @returns {Array} Returns the function names. @see _.functionsIn @example function Foo() { this.a = _.constant('a'); this.b = _.constant('b'); } Foo.prototype.c = _.constant('c'); _.functions(new Foo); // => ['a', 'b']
javascript
lodash.js
13,177
[ "object" ]
false
2
7.12
lodash/lodash
61,490
jsdoc
false
parseTypeAnnotation
function parseTypeAnnotation(): TypeNode | undefined { return parseOptional(SyntaxKind.ColonToken) ? parseType() : undefined; }
Reports a diagnostic error for the current token being an invalid name. @param blankDiagnostic Diagnostic to report for the case of the name being blank (matched tokenIfBlankName). @param nameDiagnostic Diagnostic to report for all other cases. @param tokenIfBlankName Current token if the name was invalid for being blank (not provided / skipped).
typescript
src/compiler/parser.ts
4,961
[]
true
2
6.64
microsoft/TypeScript
107,154
jsdoc
false
_is_sort_key_with_default_timestamp
def _is_sort_key_with_default_timestamp(sort_key: int) -> bool: """ Check if the sort key was generated with the DEFAULT_SORT_TIMESTAMP. This is used to identify log records that don't have timestamp. :param sort_key: The sort key to check :return: True if the sort key was generated with DEFAULT_SORT_TIMESTAMP, False otherwise """ # Extract the timestamp part from the sort key (remove the line number part) timestamp_part = sort_key // SORT_KEY_OFFSET return timestamp_part == DEFAULT_SORT_TIMESTAMP
Check if the sort key was generated with the DEFAULT_SORT_TIMESTAMP. This is used to identify log records that don't have timestamp. :param sort_key: The sort key to check :return: True if the sort key was generated with DEFAULT_SORT_TIMESTAMP, False otherwise
python
airflow-core/src/airflow/utils/log/file_task_handler.py
289
[ "sort_key" ]
bool
true
1
7.04
apache/airflow
43,597
sphinx
false
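The check only works because of how sort keys are packed: `timestamp * SORT_KEY_OFFSET + line_number`, so floor division recovers the timestamp half. A sketch with illustrative constants — the real `SORT_KEY_OFFSET` and `DEFAULT_SORT_TIMESTAMP` values live in the Airflow module:

```python
SORT_KEY_OFFSET = 10**7        # assumed packing factor, for illustration only
DEFAULT_SORT_TIMESTAMP = 0     # assumed sentinel for "record had no timestamp"

def make_sort_key(timestamp, line_number):
    """Pack a timestamp and line number into one orderable integer."""
    return timestamp * SORT_KEY_OFFSET + line_number

def is_default_timestamp(sort_key):
    """True when the key was built from the DEFAULT_SORT_TIMESTAMP sentinel."""
    return sort_key // SORT_KEY_OFFSET == DEFAULT_SORT_TIMESTAMP
```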
factoryContainsBean
private boolean factoryContainsBean(ConfigurableBeanFactory beanFactory, String beanName) { return (beanFactory.containsBean(beanName) && !beanFactory.isCurrentlyInCreation(beanName)); }
Check the BeanFactory to see whether the bean named <var>beanName</var> already exists. Accounts for the fact that the requested bean may be "in creation", i.e.: we're in the middle of servicing the initial request for this bean. From an enhanced factory method's perspective, this means that the bean does not actually yet exist, and that it is now our job to create it for the first time by executing the logic in the corresponding factory method. <p>Said another way, this check repurposes {@link ConfigurableBeanFactory#isCurrentlyInCreation(String)} to determine whether the container is calling this method or the user is calling this method. @param beanName name of bean to check for @return whether <var>beanName</var> already exists in the factory
java
spring-context/src/main/java/org/springframework/context/annotation/ConfigurationClassEnhancer.java
500
[ "beanFactory", "beanName" ]
true
2
7.84
spring-projects/spring-framework
59,386
javadoc
false
to_period
def to_period(self, freq=None) -> PeriodIndex: """ Cast to PeriodArray/PeriodIndex at a particular frequency. Converts DatetimeArray/Index to PeriodArray/PeriodIndex. Parameters ---------- freq : str or Period, optional One of pandas' :ref:`period aliases <timeseries.period_aliases>` or a Period object. Will be inferred by default. Returns ------- PeriodArray/PeriodIndex Immutable ndarray holding ordinal values at a particular frequency. Raises ------ ValueError When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See Also -------- PeriodIndex: Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime: Return DatetimeIndex as object. Examples -------- >>> df = pd.DataFrame( ... {"y": [1, 2, 3]}, ... index=pd.to_datetime( ... [ ... "2000-03-31 00:00:00", ... "2000-05-31 00:00:00", ... "2000-08-31 00:00:00", ... ] ... ), ... ) >>> df.index.to_period("M") PeriodIndex(['2000-03', '2000-05', '2000-08'], dtype='period[M]') Infer the daily frequency >>> idx = pd.date_range("2017-01-01", periods=2) >>> idx.to_period() PeriodIndex(['2017-01-01', '2017-01-02'], dtype='period[D]') """ from pandas.core.indexes.api import PeriodIndex arr = self._data.to_period(freq) return PeriodIndex._simple_new(arr, name=self.name)
Cast to PeriodArray/PeriodIndex at a particular frequency. Converts DatetimeArray/Index to PeriodArray/PeriodIndex. Parameters ---------- freq : str or Period, optional One of pandas' :ref:`period aliases <timeseries.period_aliases>` or a Period object. Will be inferred by default. Returns ------- PeriodArray/PeriodIndex Immutable ndarray holding ordinal values at a particular frequency. Raises ------ ValueError When converting a DatetimeArray/Index with non-regular values, so that a frequency cannot be inferred. See Also -------- PeriodIndex: Immutable ndarray holding ordinal values. DatetimeIndex.to_pydatetime: Return DatetimeIndex as object. Examples -------- >>> df = pd.DataFrame( ... {"y": [1, 2, 3]}, ... index=pd.to_datetime( ... [ ... "2000-03-31 00:00:00", ... "2000-05-31 00:00:00", ... "2000-08-31 00:00:00", ... ] ... ), ... ) >>> df.index.to_period("M") PeriodIndex(['2000-03', '2000-05', '2000-08'], dtype='period[M]') Infer the daily frequency >>> idx = pd.date_range("2017-01-01", periods=2) >>> idx.to_period() PeriodIndex(['2017-01-01', '2017-01-02'], dtype='period[D]')
python
pandas/core/indexes/datetimes.py
546
[ "self", "freq" ]
PeriodIndex
true
1
6.48
pandas-dev/pandas
47,362
numpy
false
get_valid_filename
def get_valid_filename(name): """ Return the given string converted to a string that can be used for a clean filename. Remove leading and trailing spaces; convert other spaces to underscores; and remove anything that is not an alphanumeric, dash, underscore, or dot. >>> get_valid_filename("john's portrait in 2004.jpg") 'johns_portrait_in_2004.jpg' """ s = str(name).strip().replace(" ", "_") s = re.sub(r"(?u)[^-\w.]", "", s) if s in {"", ".", ".."}: raise SuspiciousFileOperation("Could not derive file name from '%s'" % name) return s
Return the given string converted to a string that can be used for a clean filename. Remove leading and trailing spaces; convert other spaces to underscores; and remove anything that is not an alphanumeric, dash, underscore, or dot. >>> get_valid_filename("john's portrait in 2004.jpg") 'johns_portrait_in_2004.jpg'
python
django/utils/text.py
270
[ "name" ]
false
2
6.32
django/django
86,204
unknown
false
nextAnchor
function nextAnchor(camelCaseWord: string, start: number): number { for (let i = start; i < camelCaseWord.length; i++) { const c = camelCaseWord.charCodeAt(i); if (isUpper(c) || isNumber(c) || (i > 0 && !isAlphanumeric(camelCaseWord.charCodeAt(i - 1)))) { return i; } } return camelCaseWord.length; }
Gets alternative codes to the character code passed in. This comes in the form of an array of character codes, all of which must match _in order_ to successfully match. @param code The character code to check.
typescript
src/vs/base/common/filters.ts
204
[ "camelCaseWord", "start" ]
true
6
7.04
microsoft/vscode
179,840
jsdoc
false
run_ruff
def run_ruff(self, fix: bool) -> tuple[int, str]: """ Original Author: Josh Wilson (@person142) Source: https://github.com/scipy/scipy/blob/main/tools/lint_diff.py Unlike pycodestyle, ruff by itself is not capable of limiting its output to the given diff. """ print("Running Ruff Check...") command = ["ruff", "check"] if fix: command.append("--fix") res = subprocess.run( command, stdout=subprocess.PIPE, cwd=self.repository_root, encoding="utf-8", ) return res.returncode, res.stdout
Original Author: Josh Wilson (@person142) Source: https://github.com/scipy/scipy/blob/main/tools/lint_diff.py Unlike pycodestyle, ruff by itself is not capable of limiting its output to the given diff.
python
tools/linter.py
13
[ "self", "fix" ]
tuple[int, str]
true
2
6.24
numpy/numpy
31,054
unknown
false
add
public <V> Member<V> add(String name, Supplier<@Nullable V> supplier) { Assert.notNull(supplier, "'supplier' must not be null"); return add(name, (instance) -> supplier.get()); }
Add a new member with a supplied value. @param <V> the value type @param name the member name @param supplier a supplier of the value @return the added {@link Member} which may be configured further
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
225
[ "name", "supplier" ]
true
1
6.96
spring-projects/spring-boot
79,428
javadoc
false
append
public StrBuilder append(final String str) { if (str == null) { return appendNull(); } final int strLen = str.length(); if (strLen > 0) { final int len = length(); ensureCapacity(len + strLen); str.getChars(0, strLen, buffer, len); size += strLen; } return this; }
Appends a string to this string builder. Appending null will call {@link #appendNull()}. @param str the string to append @return {@code this} instance.
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
624
[ "str" ]
StrBuilder
true
3
7.92
apache/commons-lang
2,896
javadoc
false
joinReason
public static String joinReason(JoinGroupRequestData request) { String joinReason = request.reason(); if (joinReason == null || joinReason.isEmpty()) { joinReason = "not provided"; } return joinReason; }
Get the client's join reason. @param request The JoinGroupRequest. @return The join reason.
java
clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java
164
[ "request" ]
String
true
3
7.76
apache/kafka
31,560
javadoc
false
lastIndexOf
public static int lastIndexOf(final double[] array, final double valueToFind, final double tolerance) { return lastIndexOf(array, valueToFind, Integer.MAX_VALUE, tolerance); }
Finds the last index of the given value within a given tolerance in the array. This method will return the index of the last value which falls between the region defined by valueToFind - tolerance and valueToFind + tolerance. <p> This method returns {@link #INDEX_NOT_FOUND} ({@code -1}) for a {@code null} input array. </p> @param array the array to search for the object, may be {@code null}. @param valueToFind the value to find. @param tolerance tolerance of the search. @return the index of the value within the array, {@link #INDEX_NOT_FOUND} ({@code -1}) if not found or {@code null} array input.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
3,919
[ "array", "valueToFind", "tolerance" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
toLocalDateTime
public static LocalDateTime toLocalDateTime(final Calendar calendar) { return LocalDateTime.ofInstant(calendar.toInstant(), toZoneId(calendar)); }
Converts a Calendar to a LocalDateTime. @param calendar the Calendar to convert. @return a LocalDateTime. @since 3.17.0
java
src/main/java/org/apache/commons/lang3/time/CalendarUtils.java
74
[ "calendar" ]
LocalDateTime
true
1
6.32
apache/commons-lang
2,896
javadoc
false
getReturnOccurrences
function getReturnOccurrences(returnStatement: ReturnStatement, sourceFile: SourceFile): Node[] | undefined { const func = getContainingFunction(returnStatement) as FunctionLikeDeclaration; if (!func) { return undefined; } const keywords: Node[] = []; forEachReturnStatement(cast(func.body, isBlock), returnStatement => { keywords.push(findChildOfKind(returnStatement, SyntaxKind.ReturnKeyword, sourceFile)!); }); // Include 'throw' statements that do not occur within a try block. forEach(aggregateOwnedThrowStatements(func.body!), throwStatement => { keywords.push(findChildOfKind(throwStatement, SyntaxKind.ThrowKeyword, sourceFile)!); }); return keywords; }
For lack of a better name, this function takes a throw statement and returns the nearest ancestor that is a try-block (whose try statement has a catch clause), function-block, or source file.
typescript
src/services/documentHighlights.ts
452
[ "returnStatement", "sourceFile" ]
true
2
6
microsoft/TypeScript
107,154
jsdoc
false
checkJarHell
@SuppressForbidden(reason = "needs JarFile for speed, just reading entries") public static void checkJarHell(Set<URL> urls, Consumer<String> output) throws IOException { // we don't try to be sneaky and use deprecated/internal/not portable stuff // like sun.boot.class.path, and with jigsaw we don't yet have a way to get // a "list" at all. So just exclude any elements underneath the java home String javaHome = System.getProperty("java.home"); output.accept("java.home: " + javaHome); final Map<String, Path> clazzes = new HashMap<>(32768); Set<Path> seenJars = new HashSet<>(); for (final URL url : urls) { final Path path = toPath(url); // exclude system resources if (path.startsWith(javaHome)) { output.accept("excluding system resource: " + path); continue; } if (path.toString().endsWith(".jar")) { if (seenJars.add(path) == false) { throw new IllegalStateException("jar hell!" + System.lineSeparator() + "duplicate jar on classpath: " + path); } output.accept("examining jar: " + path); try (JarFile file = new JarFile(path.toString())) { Manifest manifest = file.getManifest(); if (manifest != null) { checkManifest(manifest, path); } // inspect entries Enumeration<JarEntry> elements = file.entries(); while (elements.hasMoreElements()) { String entry = elements.nextElement().getName(); if (entry.endsWith(".class")) { // for jar format, the separator is defined as / entry = entry.replace('/', '.').substring(0, entry.length() - 6); checkClass(clazzes, entry, path); } } } } else { output.accept("examining directory: " + path); // case for tests: where we have class files in the classpath final Path root = toPath(url); final String sep = root.getFileSystem().getSeparator(); // don't try and walk class or resource directories that don't exist // gradle will add these to the classpath even if they never get created if (Files.exists(root)) { Files.walkFileTree(root, new SimpleFileVisitor<Path>() { @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) 
throws IOException { String entry = root.relativize(file).toString(); if (entry.endsWith(".class")) { // normalize with the os separator, remove '.class' entry = entry.replace(sep, ".").substring(0, entry.length() - ".class".length()); checkClass(clazzes, entry, path); } return super.visitFile(file, attrs); } }); } } } }
Checks the set of URLs for duplicate classes @param urls A set of URLs from the system class loader to be checked for conflicting jars @param output A {@link String} {@link Consumer} to which debug output will be sent @throws IllegalStateException if jar hell was found
java
libs/core/src/main/java/org/elasticsearch/jdk/JarHell.java
201
[ "urls", "output" ]
void
true
9
6.16
elastic/elasticsearch
75,680
javadoc
false
onFailure
default @Nullable Object onFailure(ConfigurationPropertyName name, Bindable<?> target, BindContext context, Exception error) throws Exception { throw error; }
Called when binding fails for any reason (including failures from {@link #onSuccess} or {@link #onCreate} calls). Implementations may choose to swallow exceptions and return an alternative result. @param name the name of the element being bound @param target the item being bound @param context the bind context @param error the cause of the error (if the exception stands it may be re-thrown) @return the actual result that should be used (may be {@code null}). @throws Exception if the binding isn't valid
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/bind/BindHandler.java
92
[ "name", "target", "context", "error" ]
Object
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
_build_names_mapper
def _build_names_mapper( rownames: list[str], colnames: list[str] ) -> tuple[dict[str, str], list[str], dict[str, str], list[str]]: """ Given the names of a DataFrame's rows and columns, returns a set of unique row and column names and mappers that convert to original names. A row or column name is replaced if it is duplicate among the rows of the inputs, among the columns of the inputs or between the rows and the columns. Parameters ---------- rownames: list[str] colnames: list[str] Returns ------- Tuple(Dict[str, str], List[str], Dict[str, str], List[str]) rownames_mapper: dict[str, str] a dictionary with new row names as keys and original rownames as values unique_rownames: list[str] a list of rownames with duplicate names replaced by dummy names colnames_mapper: dict[str, str] a dictionary with new column names as keys and original column names as values unique_colnames: list[str] a list of column names with duplicate names replaced by dummy names """ dup_names = set(rownames) | set(colnames) rownames_mapper = { f"row_{i}": name for i, name in enumerate(rownames) if name in dup_names } unique_rownames = [ f"row_{i}" if name in dup_names else name for i, name in enumerate(rownames) ] colnames_mapper = { f"col_{i}": name for i, name in enumerate(colnames) if name in dup_names } unique_colnames = [ f"col_{i}" if name in dup_names else name for i, name in enumerate(colnames) ] return rownames_mapper, unique_rownames, colnames_mapper, unique_colnames
Given the names of a DataFrame's rows and columns, returns a set of unique row and column names and mappers that convert to original names. A row or column name is replaced if it is duplicate among the rows of the inputs, among the columns of the inputs or between the rows and the columns. Parameters ---------- rownames: list[str] colnames: list[str] Returns ------- Tuple(Dict[str, str], List[str], Dict[str, str], List[str]) rownames_mapper: dict[str, str] a dictionary with new row names as keys and original rownames as values unique_rownames: list[str] a list of rownames with duplicate names replaced by dummy names colnames_mapper: dict[str, str] a dictionary with new column names as keys and original column names as values unique_colnames: list[str] a list of column names with duplicate names replaced by dummy names
python
pandas/core/reshape/pivot.py
1,233
[ "rownames", "colnames" ]
tuple[dict[str, str], list[str], dict[str, str], list[str]]
true
3
6.4
pandas-dev/pandas
47,362
numpy
false
filterJsxAttributes
function filterJsxAttributes(symbols: Symbol[], attributes: NodeArray<JsxAttribute | JsxSpreadAttribute>): Symbol[] { const seenNames = new Set<__String>(); const membersDeclaredBySpreadAssignment = new Set<string>(); for (const attr of attributes) { // If this is the current item we are editing right now, do not filter it out if (isCurrentlyEditingNode(attr)) { continue; } if (attr.kind === SyntaxKind.JsxAttribute) { seenNames.add(getEscapedTextOfJsxAttributeName(attr.name)); } else if (isJsxSpreadAttribute(attr)) { setMembersDeclaredBySpreadAssignment(attr, membersDeclaredBySpreadAssignment); } } const filteredSymbols = symbols.filter(a => !seenNames.has(a.escapedName)); setSortTextToMemberDeclaredBySpreadAssignment(membersDeclaredBySpreadAssignment, filteredSymbols); return filteredSymbols; }
Filters out completion suggestions from 'symbols' according to existing JSX attributes. @returns Symbols to be suggested in a JSX element, barring those whose attributes do not occur at the current position and have not otherwise been typed.
typescript
src/services/completions.ts
5,289
[ "symbols", "attributes" ]
true
5
6.72
microsoft/TypeScript
107,154
jsdoc
false
getLibrary
public <T extends NativeLibrary> T getLibrary(Class<T> cls) { Supplier<?> libraryCtor = libraries.get(cls); Object library = libraryCtor.get(); assert library != null; assert cls.isAssignableFrom(library.getClass()); return cls.cast(library); }
Construct an instance of the given library class. @param cls The library class to create @return An instance of the class
java
libs/native/src/main/java/org/elasticsearch/nativeaccess/lib/NativeLibraryProvider.java
54
[ "cls" ]
T
true
1
7.04
elastic/elasticsearch
75,680
javadoc
false
createTask
private Callable<T> createTask(final ExecutorService execDestroy) { return new InitializationTask(execDestroy); }
Creates a task for the background initialization. The {@link Callable} object returned by this method is passed to the {@link ExecutorService}. This implementation returns a task that invokes the {@link #initialize()} method. If a temporary {@link ExecutorService} is used, it is destroyed at the end of the task. @param execDestroy the {@link ExecutorService} to be destroyed by the task. @return a task for the background initialization.
java
src/main/java/org/apache/commons/lang3/concurrent/BackgroundInitializer.java
238
[ "execDestroy" ]
true
1
6.64
apache/commons-lang
2,896
javadoc
false
addAll
@CanIgnoreReturnValue @Override public Builder<E> addAll(Iterable<? extends E> elements) { if (elements instanceof Multiset) { Multiset<? extends E> multiset = (Multiset<? extends E>) elements; multiset.forEachEntry((e, n) -> contents.add(checkNotNull(e), n)); } else { super.addAll(elements); } return this; }
Adds each element of {@code elements} to the {@code ImmutableMultiset}. @param elements the {@code Iterable} to add to the {@code ImmutableMultiset} @return this {@code Builder} object @throws NullPointerException if {@code elements} is null or contains a null element
java
guava/src/com/google/common/collect/ImmutableMultiset.java
559
[ "elements" ]
true
2
7.44
google/guava
51,352
javadoc
false
dot
def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series: """ Compute the matrix multiplication between the DataFrame and other. This method computes the matrix product between the DataFrame and the values of an other Series, DataFrame or a numpy array. It can also be called using ``self @ other``. Parameters ---------- other : Series, DataFrame or array-like The other object to compute the matrix product with. Returns ------- Series or DataFrame If other is a Series, return the matrix product between self and other as a Series. If other is a DataFrame or a numpy.array, return the matrix product of self and other in a DataFrame of a np.array. See Also -------- Series.dot: Similar method for Series. Notes ----- The dimensions of DataFrame and other must be compatible in order to compute the matrix multiplication. In addition, the column names of DataFrame and the index of other must contain the same values, as they will be aligned prior to the multiplication. The dot method for Series computes the inner product, instead of the matrix product here. Examples -------- Here we multiply a DataFrame with a Series. >>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]]) >>> s = pd.Series([1, 1, 2, 1]) >>> df.dot(s) 0 -4 1 5 dtype: int64 Here we multiply a DataFrame with another DataFrame. >>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]]) >>> df.dot(other) 0 1 0 1 4 1 2 2 Note that the dot method give the same result as @ >>> df @ other 0 1 0 1 4 1 2 2 The dot method works also if other is an np.array. >>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]]) >>> df.dot(arr) 0 1 0 1 4 1 2 2 Note how shuffling of the objects does not change the result. 
>>> s2 = s.reindex([1, 0, 2, 3]) >>> df.dot(s2) 0 -4 1 5 dtype: int64 """ if isinstance(other, (Series, DataFrame)): common = self.columns.union(other.index) if len(common) > len(self.columns) or len(common) > len(other.index): raise ValueError("matrices are not aligned") left = self.reindex(columns=common) right = other.reindex(index=common) lvals = left.values rvals = right._values else: left = self lvals = self.values rvals = np.asarray(other) if lvals.shape[1] != rvals.shape[0]: raise ValueError( f"Dot product shape mismatch, {lvals.shape} vs {rvals.shape}" ) if isinstance(other, DataFrame): common_type = find_common_type(list(self.dtypes) + list(other.dtypes)) return self._constructor( np.dot(lvals, rvals), index=left.index, columns=other.columns, copy=False, dtype=common_type, ) elif isinstance(other, Series): common_type = find_common_type(list(self.dtypes) + [other.dtypes]) return self._constructor_sliced( np.dot(lvals, rvals), index=left.index, copy=False, dtype=common_type ) elif isinstance(rvals, (np.ndarray, Index)): result = np.dot(lvals, rvals) if result.ndim == 2: return self._constructor(result, index=left.index, copy=False) else: return self._constructor_sliced(result, index=left.index, copy=False) else: # pragma: no cover raise TypeError(f"unsupported type: {type(other)}")
Compute the matrix multiplication between the DataFrame and other. This method computes the matrix product between the DataFrame and the values of an other Series, DataFrame or a numpy array. It can also be called using ``self @ other``. Parameters ---------- other : Series, DataFrame or array-like The other object to compute the matrix product with. Returns ------- Series or DataFrame If other is a Series, return the matrix product between self and other as a Series. If other is a DataFrame or a numpy.array, return the matrix product of self and other in a DataFrame of a np.array. See Also -------- Series.dot: Similar method for Series. Notes ----- The dimensions of DataFrame and other must be compatible in order to compute the matrix multiplication. In addition, the column names of DataFrame and the index of other must contain the same values, as they will be aligned prior to the multiplication. The dot method for Series computes the inner product, instead of the matrix product here. Examples -------- Here we multiply a DataFrame with a Series. >>> df = pd.DataFrame([[0, 1, -2, -1], [1, 1, 1, 1]]) >>> s = pd.Series([1, 1, 2, 1]) >>> df.dot(s) 0 -4 1 5 dtype: int64 Here we multiply a DataFrame with another DataFrame. >>> other = pd.DataFrame([[0, 1], [1, 2], [-1, -1], [2, 0]]) >>> df.dot(other) 0 1 0 1 4 1 2 2 Note that the dot method give the same result as @ >>> df @ other 0 1 0 1 4 1 2 2 The dot method works also if other is an np.array. >>> arr = np.array([[0, 1], [1, 2], [-1, -1], [2, 0]]) >>> df.dot(arr) 0 1 0 1 4 1 2 2 Note how shuffling of the objects does not change the result. >>> s2 = s.reindex([1, 0, 2, 3]) >>> df.dot(s2) 0 -4 1 5 dtype: int64
python
pandas/core/frame.py
1,692
[ "self", "other" ]
DataFrame | Series
true
12
8.4
pandas-dev/pandas
47,362
numpy
false
findCacheOperations
protected abstract @Nullable Collection<CacheOperation> findCacheOperations(Class<?> clazz);
Subclasses need to implement this to return the cache operations for the given class, if any. @param clazz the class to retrieve the cache operations for @return all cache operations associated with this class, or {@code null} if none
java
spring-context/src/main/java/org/springframework/cache/interceptor/AbstractFallbackCacheOperationSource.java
184
[ "clazz" ]
true
1
6.48
spring-projects/spring-framework
59,386
javadoc
false
toString
@Override public String toString() { PropertyValue[] pvs = getPropertyValues(); if (pvs.length > 0) { return "PropertyValues: length=" + pvs.length + "; " + StringUtils.arrayToDelimitedString(pvs, "; "); } return "PropertyValues: length=0"; }
Return whether this holder contains converted values only ({@code true}), or whether the values still need to be converted ({@code false}).
java
spring-beans/src/main/java/org/springframework/beans/MutablePropertyValues.java
379
[]
String
true
2
6.88
spring-projects/spring-framework
59,386
javadoc
false
replace
@Deprecated public static String replace(final String text, final String searchString, final String replacement) { return Strings.CS.replace(text, searchString, replacement); }
Replaces all occurrences of a String within another String. <p> A {@code null} reference passed to this method is a no-op. </p> <pre> StringUtils.replace(null, *, *) = null StringUtils.replace("", *, *) = "" StringUtils.replace("any", null, *) = "any" StringUtils.replace("any", *, null) = "any" StringUtils.replace("any", "", *) = "any" StringUtils.replace("aba", "a", null) = "aba" StringUtils.replace("aba", "a", "") = "b" StringUtils.replace("aba", "a", "z") = "zbz" </pre> @param text text to search and replace in, may be null. @param searchString the String to search for, may be null. @param replacement the String to replace it with, may be null. @return the text with any replacements processed, {@code null} if null String input. @see #replace(String text, String searchString, String replacement, int max) @deprecated Use {@link Strings#replace(String, String, String) Strings.CS.replace(String, String, String)}.
java
src/main/java/org/apache/commons/lang3/StringUtils.java
6,153
[ "text", "searchString", "replacement" ]
String
true
1
6.48
apache/commons-lang
2,896
javadoc
false
registerBeanDefinitions
public int registerBeanDefinitions(Document doc, Resource resource) throws BeanDefinitionStoreException { BeanDefinitionDocumentReader documentReader = createBeanDefinitionDocumentReader(); int countBefore = getRegistry().getBeanDefinitionCount(); documentReader.registerBeanDefinitions(doc, createReaderContext(resource)); return getRegistry().getBeanDefinitionCount() - countBefore; }
Register the bean definitions contained in the given DOM document. Called by {@code loadBeanDefinitions}. <p>Creates a new instance of the parser class and invokes {@code registerBeanDefinitions} on it. @param doc the DOM document @param resource the resource descriptor (for context information) @return the number of bean definitions found @throws BeanDefinitionStoreException in case of parsing errors @see #loadBeanDefinitions @see #setDocumentReaderClass @see BeanDefinitionDocumentReader#registerBeanDefinitions
java
spring-beans/src/main/java/org/springframework/beans/factory/xml/XmlBeanDefinitionReader.java
515
[ "doc", "resource" ]
true
1
6.08
spring-projects/spring-framework
59,386
javadoc
false
_validate_integer
def _validate_integer(self, key: int | np.integer, axis: AxisInt) -> None: """ Check that 'key' is a valid position in the desired axis. Parameters ---------- key : int Requested position. axis : int Desired axis. Raises ------ IndexError If 'key' is not a valid position in axis 'axis'. """ len_axis = len(self.obj._get_axis(axis)) if key >= len_axis or key < -len_axis: raise IndexError("single positional indexer is out-of-bounds")
Check that 'key' is a valid position in the desired axis. Parameters ---------- key : int Requested position. axis : int Desired axis. Raises ------ IndexError If 'key' is not a valid position in axis 'axis'.
python
pandas/core/indexing.py
1,688
[ "self", "key", "axis" ]
None
true
3
6.72
pandas-dev/pandas
47,362
numpy
false
fill
public static long[] fill(final long[] a, final long val) { if (a != null) { Arrays.fill(a, val); } return a; }
Fills and returns the given array, assigning the given {@code long} value to each element of the array. @param a the array to be filled (may be null). @param val the value to be stored in all elements of the array. @return the given array. @see Arrays#fill(long[],long)
java
src/main/java/org/apache/commons/lang3/ArrayFill.java
131
[ "a", "val" ]
true
2
8.08
apache/commons-lang
2,896
javadoc
false
previousIndex
@Override public int previousIndex() { return tokenPos - 1; }
Gets the index of the previous token. @return the previous token index.
java
src/main/java/org/apache/commons/lang3/text/StrTokenizer.java
680
[]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
clear_db_references
def clear_db_references(self, session: Session): """ Clear db tables that have a reference to this instance. :param session: ORM Session :meta private: """ from airflow.models.renderedtifields import RenderedTaskInstanceFields tables: list[type[TaskInstanceDependencies]] = [ XComModel, RenderedTaskInstanceFields, TaskMap, ] tables_by_id: list[type[Base]] = [TaskInstanceNote, TaskReschedule] for table in tables: session.execute( delete(table).where( table.dag_id == self.dag_id, table.task_id == self.task_id, table.run_id == self.run_id, table.map_index == self.map_index, ) ) for table in tables_by_id: session.execute(delete(table).where(table.ti_id == self.id))
Clear db tables that have a reference to this instance. :param session: ORM Session :meta private:
python
airflow-core/src/airflow/models/taskinstance.py
2,083
[ "self", "session" ]
true
3
7.04
apache/airflow
43,597
sphinx
false
truePredicate
@SuppressWarnings("unchecked") static <E extends Throwable> FailableDoublePredicate<E> truePredicate() { return TRUE; }
Gets the TRUE singleton. @param <E> The kind of thrown exception or error. @return The NOP singleton.
java
src/main/java/org/apache/commons/lang3/function/FailableDoublePredicate.java
57
[]
true
1
6.96
apache/commons-lang
2,896
javadoc
false
createCurry
function createCurry(func, bitmask, arity) { var Ctor = createCtor(func); function wrapper() { var length = arguments.length, args = Array(length), index = length, placeholder = getHolder(wrapper); while (index--) { args[index] = arguments[index]; } var holders = (length < 3 && args[0] !== placeholder && args[length - 1] !== placeholder) ? [] : replaceHolders(args, placeholder); length -= holders.length; if (length < arity) { return createRecurry( func, bitmask, createHybrid, wrapper.placeholder, undefined, args, holders, undefined, undefined, arity - length); } var fn = (this && this !== root && this instanceof wrapper) ? Ctor : func; return apply(fn, this, args); } return wrapper; }
Creates a function that wraps `func` to enable currying. @private @param {Function} func The function to wrap. @param {number} bitmask The bitmask flags. See `createWrap` for more details. @param {number} arity The arity of `func`. @returns {Function} Returns the new wrapped function.
javascript
lodash.js
5,117
[ "func", "bitmask", "arity" ]
false
9
6.08
lodash/lodash
61,490
jsdoc
false
substituteNode
function substituteNode(hint: EmitHint, node: Node) { Debug.assert(state < TransformationState.Disposed, "Cannot substitute a node after the result is disposed."); return node && isSubstitutionEnabled(node) && onSubstituteNode(hint, node) || node; }
Emits a node with possible substitution. @param hint A hint as to the intended usage of the node. @param node The node to emit. @param emitCallback The callback used to emit the node or its substitute.
typescript
src/compiler/transformer.ts
385
[ "hint", "node" ]
false
4
6
microsoft/TypeScript
107,154
jsdoc
false
extract
@Nullable R extract(@NonNull T value);
Extract from the given value. @param value the source value (never {@code null}) @return an extracted value or {@code null}
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
1,078
[ "value" ]
R
true
1
6.8
spring-projects/spring-boot
79,428
javadoc
false
apply
public static <T, U, R, E extends Throwable> R apply(final FailableBiFunction<T, U, R, E> function, final T input1, final U input2) { return get(() -> function.apply(input1, input2)); }
Applies a function and rethrows any exception as a {@link RuntimeException}. @param function the function to apply @param input1 the first input to apply {@code function} on @param input2 the second input to apply {@code function} on @param <T> the type of the first argument the function accepts @param <U> the type of the second argument the function accepts @param <R> the return type of the function @param <E> the type of checked exception the function may throw @return the value returned from the function
java
src/main/java/org/apache/commons/lang3/function/Failable.java
146
[ "function", "input1", "input2" ]
R
true
1
6.48
apache/commons-lang
2,896
javadoc
false
log
private void log(LogLevel level, Object message, @Nullable Throwable t) { synchronized (this.lines) { if (this.destination != null) { level.log(this.destination, message, t); } else { this.lines.add(this.destinationSupplier, level, message, t); } } }
Create a new {@link DeferredLog} instance managed by a {@link DeferredLogFactory}. @param destination the switch-over destination @param lines the lines backing all related deferred logs @since 2.4.0
java
core/spring-boot/src/main/java/org/springframework/boot/logging/DeferredLog.java
167
[ "level", "message", "t" ]
void
true
2
6.4
spring-projects/spring-boot
79,428
javadoc
false
_generate_kernel_call_helper
def _generate_kernel_call_helper( self, kernel_name: str, call_args, *, device=None, triton=True, arg_types=None, raw_keys=None, raw_args=None, triton_meta=None, graph_name="", original_fxnode_name=None, ): """ Generates kernel call code. triton: Defines whether the GPU backend uses Triton for codegen. Otherwise it uses the CUDA language for codegen. Only valid when cuda == True. """ assert not triton, ( "CppWrapperCpuArrayRef.generate_kernel_call does not support GPU" ) assert arg_types is not None and len(call_args) == len(arg_types), ( "Mismatch call_args and arg_types in generate_kernel_call" ) new_args = [] for idx, arg in enumerate(call_args): if "*" in arg_types[idx]: var_name = f"var_{next(self.arg_var_id)}" self.writeline(f"auto* {var_name} = get_data_ptr_wrapper({arg});") new_args.append(f"({arg_types[idx]})({var_name})") else: # arg is a scalar new_args.append(arg) # debug printer related logic for cpp kernel type. debug_printer_manager = V.graph.wrapper_code.debug_printer debug_printer_manager.set_printer_args( call_args, kernel_name, None, None, "cpp", ) with debug_printer_manager: self.writeline(self.wrap_kernel_call(kernel_name, new_args))
Generates kernel call code. triton: Defines whether the GPU backend uses Triton for codegen. Otherwise it uses the CUDA language for codegen. Only valid when cuda == True.
python
torch/_inductor/codegen/cpp_wrapper_cpu_array_ref.py
106
[ "self", "kernel_name", "call_args", "device", "triton", "arg_types", "raw_keys", "raw_args", "triton_meta", "graph_name", "original_fxnode_name" ]
true
5
6.88
pytorch/pytorch
96,034
unknown
false
dtypes
def dtypes(self) -> Series: """ Return the dtypes as a Series for the underlying MultiIndex. See Also -------- Index.dtype : Return the dtype object of the underlying data. Series.dtypes : Return the data type of the underlying Series. Examples -------- >>> idx = pd.MultiIndex.from_product( ... [(0, 1, 2), ("green", "purple")], names=["number", "color"] ... ) >>> idx MultiIndex([(0, 'green'), (0, 'purple'), (1, 'green'), (1, 'purple'), (2, 'green'), (2, 'purple')], names=['number', 'color']) >>> idx.dtypes number int64 color object dtype: object """ from pandas import Series names = com.fill_missing_names(self.names) return Series([level.dtype for level in self.levels], index=Index(names))
Return the dtypes as a Series for the underlying MultiIndex. See Also -------- Index.dtype : Return the dtype object of the underlying data. Series.dtypes : Return the data type of the underlying Series. Examples -------- >>> idx = pd.MultiIndex.from_product( ... [(0, 1, 2), ("green", "purple")], names=["number", "color"] ... ) >>> idx MultiIndex([(0, 'green'), (0, 'purple'), (1, 'green'), (1, 'purple'), (2, 'green'), (2, 'purple')], names=['number', 'color']) >>> idx.dtypes number int64 color object dtype: object
python
pandas/core/indexes/multi.py
773
[ "self" ]
Series
true
1
6.8
pandas-dev/pandas
47,362
unknown
false
matches
public boolean matches(String name, List<FilterPath> nextFilters, boolean matchFieldNamesWithDots) { if (nextFilters == null) { return false; } // match dot first if (matchFieldNamesWithDots) { // contains dot and not the first or last char int dotIndex = name.indexOf('.'); if ((dotIndex != -1) && (dotIndex != 0) && (dotIndex != name.length() - 1)) { return matchFieldNamesWithDots(name, dotIndex, nextFilters); } } FilterPath termNode = termsChildren.get(name); if (termNode != null) { if (termNode.isFinalNode()) { return true; } else { nextFilters.add(termNode); } } for (FilterPath wildcardNode : wildcardChildren) { String wildcardPattern = wildcardNode.getPattern(); if (Glob.globMatch(wildcardPattern, name)) { if (wildcardNode.isFinalNode()) { return true; } else { nextFilters.add(wildcardNode); } } } if (isDoubleWildcard) { nextFilters.add(this); } return false; }
check if the name matches filter nodes if the name equals the filter node name, the node will add to nextFilters. if the filter node is a final node, it means the name matches the pattern, and return true if the name don't equal a final node, then return false, continue to check the inner filter node if current node is a double wildcard node, the node will also add to nextFilters. @param name the xcontent property name @param nextFilters nextFilters is a List, used to check the inner property of name @param matchFieldNamesWithDots support dot in field name or not @return true if the name equal a final node, otherwise return false
java
libs/x-content/src/main/java/org/elasticsearch/xcontent/support/filtering/FilterPath.java
78
[ "name", "nextFilters", "matchFieldNamesWithDots" ]
true
11
7.92
elastic/elasticsearch
75,680
javadoc
false
nextFetchedRecord
private boolean nextFetchedRecord(final boolean checkCrcs) { while (true) { if (records == null || !records.hasNext()) { maybeCloseRecordStream(); if (!batches.hasNext()) { drain(); lastRecord = null; break; } currentBatch = batches.next(); maybeEnsureValid(currentBatch, checkCrcs); records = currentBatch.streamingIterator(decompressionBufferSupplier); } else { Record record = records.next(); maybeEnsureValid(record, checkCrcs); // control records are not returned to the user if (!currentBatch.isControlBatch()) { lastRecord = record; break; } } } return records != null && records.hasNext(); }
Scans for the next record in the available batches, skipping control records @param checkCrcs Whether to check the CRC of fetched records @return true if the current batch has more records, else false
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareCompletedFetch.java
358
[ "checkCrcs" ]
true
7
7.44
apache/kafka
31,560
javadoc
false
createClassLoader
protected ClassLoader createClassLoader(Collection<URL> urls) throws Exception { return createClassLoader(urls.toArray(new URL[0])); }
Create a classloader for the specified archives. @param urls the classpath URLs @return the classloader @throws Exception if the classloader cannot be created
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/launch/Launcher.java
81
[ "urls" ]
ClassLoader
true
1
6.8
spring-projects/spring-boot
79,428
javadoc
false
limit
public Splitter limit(int maxItems) { checkArgument(maxItems > 0, "must be greater than zero: %s", maxItems); return new Splitter(strategy, omitEmptyStrings, trimmer, maxItems); }
Returns a splitter that behaves equivalently to {@code this} splitter but stops splitting after it reaches the limit. The limit defines the maximum number of items returned by the iterator, or the maximum size of the list returned by {@link #splitToList}. <p>For example, {@code Splitter.on(',').limit(3).split("a,b,c,d")} returns an iterable containing {@code ["a", "b", "c,d"]}. When omitting empty strings, the omitted strings do not count. Hence, {@code Splitter.on(',').limit(3).omitEmptyStrings().split("a,,,b,,,c,d")} returns an iterable containing {@code ["a", "b", "c,d"]}. When trim is requested, all entries are trimmed, including the last. Hence {@code Splitter.on(',').limit(3).trimResults().split(" a , b , c , d ")} results in {@code ["a", "b", "c , d"]}. @param maxItems the maximum number of items returned @return a splitter with the desired configuration @since 9.0
java
android/guava/src/com/google/common/base/Splitter.java
326
[ "maxItems" ]
Splitter
true
1
6.32
google/guava
51,352
javadoc
false
always
@SuppressWarnings("unchecked") public static <T> Predicate<T> always() { return (Predicate<T>) ALWAYS; }
@return a predicate that accepts all input values @param <T> type of the predicate
java
libs/core/src/main/java/org/elasticsearch/core/Predicates.java
82
[]
true
1
6.32
elastic/elasticsearch
75,680
javadoc
false
_flush_logs_out_of_heap
def _flush_logs_out_of_heap( heap: list[tuple[int, StructuredLogMessage]], flush_size: int, last_log_container: list[StructuredLogMessage | None], ) -> Generator[StructuredLogMessage, None, None]: """ Flush logs out of the heap, deduplicating them based on the last log. :param heap: heap to flush logs from :param flush_size: number of logs to flush :param last_log_container: a container to store the last log, to avoid duplicate logs :return: a generator that yields deduplicated logs """ last_log = last_log_container[0] for _ in range(flush_size): sort_key, line = heapq.heappop(heap) if line != last_log or _is_sort_key_with_default_timestamp(sort_key): # dedupe yield line last_log = line # update the last log container with the last log last_log_container[0] = last_log
Flush logs out of the heap, deduplicating them based on the last log. :param heap: heap to flush logs from :param flush_size: number of logs to flush :param last_log_container: a container to store the last log, to avoid duplicate logs :return: a generator that yields deduplicated logs
python
airflow-core/src/airflow/utils/log/file_task_handler.py
332
[ "heap", "flush_size", "last_log_container" ]
Generator[StructuredLogMessage, None, None]
true
4
8.4
apache/airflow
43,597
sphinx
false
to_pytimedelta
def to_pytimedelta(self) -> npt.NDArray[np.object_]: """ Return an ndarray of datetime.timedelta objects. Returns ------- numpy.ndarray A NumPy ``timedelta64`` object representing the same duration as the original pandas ``Timedelta`` object. The precision of the resulting object is in nanoseconds, which is the default time resolution used by pandas for ``Timedelta`` objects, ensuring high precision for time-based calculations. See Also -------- to_timedelta : Convert argument to timedelta format. Timedelta : Represents a duration between two dates or times. DatetimeIndex: Index of datetime64 data. Timedelta.components : Return a components namedtuple-like of a single timedelta. Examples -------- >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit="D") >>> tdelta_idx TimedeltaIndex(['1 days', '2 days', '3 days'], dtype='timedelta64[ns]', freq=None) >>> tdelta_idx.to_pytimedelta() array([datetime.timedelta(days=1), datetime.timedelta(days=2), datetime.timedelta(days=3)], dtype=object) >>> tidx = pd.TimedeltaIndex(data=["1 days 02:30:45", "3 days 04:15:10"]) >>> tidx TimedeltaIndex(['1 days 02:30:45', '3 days 04:15:10'], dtype='timedelta64[ns]', freq=None) >>> tidx.to_pytimedelta() array([datetime.timedelta(days=1, seconds=9045), datetime.timedelta(days=3, seconds=15310)], dtype=object) """ return ints_to_pytimedelta(self._ndarray)
Return an ndarray of datetime.timedelta objects. Returns ------- numpy.ndarray A NumPy ``timedelta64`` object representing the same duration as the original pandas ``Timedelta`` object. The precision of the resulting object is in nanoseconds, which is the default time resolution used by pandas for ``Timedelta`` objects, ensuring high precision for time-based calculations. See Also -------- to_timedelta : Convert argument to timedelta format. Timedelta : Represents a duration between two dates or times. DatetimeIndex: Index of datetime64 data. Timedelta.components : Return a components namedtuple-like of a single timedelta. Examples -------- >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit="D") >>> tdelta_idx TimedeltaIndex(['1 days', '2 days', '3 days'], dtype='timedelta64[ns]', freq=None) >>> tdelta_idx.to_pytimedelta() array([datetime.timedelta(days=1), datetime.timedelta(days=2), datetime.timedelta(days=3)], dtype=object) >>> tidx = pd.TimedeltaIndex(data=["1 days 02:30:45", "3 days 04:15:10"]) >>> tidx TimedeltaIndex(['1 days 02:30:45', '3 days 04:15:10'], dtype='timedelta64[ns]', freq=None) >>> tidx.to_pytimedelta() array([datetime.timedelta(days=1, seconds=9045), datetime.timedelta(days=3, seconds=15310)], dtype=object)
python
pandas/core/arrays/timedeltas.py
827
[ "self" ]
npt.NDArray[np.object_]
true
1
6.64
pandas-dev/pandas
47,362
unknown
false
empty
public static ConditionMessage empty() { return new ConditionMessage(); }
Factory method to return a new empty {@link ConditionMessage}. @return a new empty {@link ConditionMessage}
java
core/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/condition/ConditionMessage.java
139
[]
ConditionMessage
true
1
6
spring-projects/spring-boot
79,428
javadoc
false
addCopies
@CanIgnoreReturnValue public Builder<E> addCopies(E element, int occurrences) { contents.add(checkNotNull(element), occurrences); return this; }
Adds a number of occurrences of an element to this {@code ImmutableMultiset}. @param element the element to add @param occurrences the number of occurrences of the element to add. May be zero, in which case no change will be made. @return this {@code Builder} object @throws NullPointerException if {@code element} is null @throws IllegalArgumentException if {@code occurrences} is negative, or if this operation would result in more than {@link Integer#MAX_VALUE} occurrences of the element
java
guava/src/com/google/common/collect/ImmutableMultiset.java
530
[ "element", "occurrences" ]
true
1
6.56
google/guava
51,352
javadoc
false
once
function once<T>(event: Event<T>): Event<T> { return (listener, thisArgs = null, disposables?) => { // we need this, in case the event fires during the listener call let didFire = false; let result: Disposable | undefined = undefined; result = event(e => { if (didFire) { return; } else if (result) { result.dispose(); } else { didFire = true; } return listener.call(thisArgs, e); }, null, disposables); if (didFire) { result.dispose(); } return result; }; }
Given an event, returns another event which only fires once. @param event The event source for the new event.
typescript
extensions/microsoft-authentication/src/common/async.ts
83
[ "event" ]
true
6
7.2
microsoft/vscode
179,840
jsdoc
false
capitalizeFully
public static String capitalizeFully(final String str, final char... delimiters) { final int delimLen = delimiters == null ? -1 : delimiters.length; if (StringUtils.isEmpty(str) || delimLen == 0) { return str; } return capitalize(str.toLowerCase(), delimiters); }
Converts all the delimiter separated words in a String into capitalized words, that is each word is made up of a titlecase character and then a series of lowercase characters. <p>The delimiters represent a set of characters understood to separate words. The first string character and the first non-delimiter character after a delimiter will be capitalized.</p> <p>A {@code null} input String returns {@code null}. Capitalization uses the Unicode title case, normally equivalent to upper case.</p> <pre> WordUtils.capitalizeFully(null, *) = null WordUtils.capitalizeFully("", *) = "" WordUtils.capitalizeFully(*, null) = * WordUtils.capitalizeFully(*, new char[0]) = * WordUtils.capitalizeFully("i aM.fine", {'.'}) = "I am.Fine" </pre> @param str the String to capitalize, may be null. @param delimiters set of characters to determine capitalization, null means whitespace. @return capitalized String, {@code null} if null String input. @since 2.1
java
src/main/java/org/apache/commons/lang3/text/WordUtils.java
163
[ "str", "delimiters" ]
String
true
4
7.44
apache/commons-lang
2,896
javadoc
false