column           type       range / classes
---------------  ---------  --------------------
function_name    string     length 1 to 57
function_code    string     length 20 to 4.99k
documentation    string     length 50 to 2k
language         string     5 classes
file_path        string     length 8 to 166
line_number      int32      4 to 16.7k
parameters       list       length 0 to 20
return_type      string     length 0 to 131
has_type_hints   bool       2 classes
complexity       int32      1 to 51
quality_score    float32    6 to 9.68
repo_name        string     34 classes
repo_stars       int32      2.9k to 242k
docstring_style  string     7 classes
is_async         bool       2 classes
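Read row-wise, each record below fills one instance of this schema. A minimal sketch of filtering such records, using plain dicts with hypothetical field values (not rows from this dataset):

```python
# Hypothetical records shaped like the schema above.
records = [
    {"function_name": "uniqBy", "language": "javascript", "quality_score": 7.44},
    {"function_name": "size", "language": "python", "quality_score": 8.4},
    {"function_name": "start", "language": "java", "quality_score": 6.24},
]

# Keep only high-quality Python rows, mirroring a typical dataset filter.
high_quality = [
    r for r in records if r["language"] == "python" and r["quality_score"] >= 8.0
]
names = [r["function_name"] for r in high_quality]
```

The same filter could be expressed with a dataset library's `filter` method; the list comprehension keeps the sketch dependency-free.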
parseNumberFormat
function parseNumberFormat(format: string, minusSign = '-'): ParsedNumberFormat {
  const p = {
    minInt: 1,
    minFrac: 0,
    maxFrac: 0,
    posPre: '',
    posSuf: '',
    negPre: '',
    negSuf: '',
    gSize: 0,
    lgSize: 0,
  };

  const patternParts = format.split(PATTERN_SEP);
  const positive = patternParts[0];
  const negative = patternParts[1];

  const positiveParts =
        positive.indexOf(DECIMAL_SEP) !== -1
          ? positive.split(DECIMAL_SEP)
          : [
              positive.substring(0, positive.lastIndexOf(ZERO_CHAR) + 1),
              positive.substring(positive.lastIndexOf(ZERO_CHAR) + 1),
            ],
      integer = positiveParts[0],
      fraction = positiveParts[1] || '';

  p.posPre = integer.substring(0, integer.indexOf(DIGIT_CHAR));

  for (let i = 0; i < fraction.length; i++) {
    const ch = fraction.charAt(i);
    if (ch === ZERO_CHAR) {
      p.minFrac = p.maxFrac = i + 1;
    } else if (ch === DIGIT_CHAR) {
      p.maxFrac = i + 1;
    } else {
      p.posSuf += ch;
    }
  }

  const groups = integer.split(GROUP_SEP);
  p.gSize = groups[1] ? groups[1].length : 0;
  p.lgSize = groups[2] || groups[1] ? (groups[2] || groups[1]).length : 0;

  if (negative) {
    const trunkLen = positive.length - p.posPre.length - p.posSuf.length,
        pos = negative.indexOf(DIGIT_CHAR);

    p.negPre = negative.substring(0, pos).replace(/'/g, '');
    p.negSuf = negative.slice(pos + trunkLen).replace(/'/g, '');
  } else {
    p.negPre = minusSign + p.posPre;
    p.negSuf = p.posSuf;
  }

  return p;
}
@ngModule CommonModule
@description Formats a number as text, with group sizing, separator, and other parameters based on the locale.
@param value The number to format.
@param locale A locale code for the locale format rules to use.
@param digitsInfo Decimal representation options, specified by a string in the following format: `{minIntegerDigits}.{minFractionDigits}-{maxFractionDigits}`. See `DecimalPipe` for more details.
@returns The formatted text string.
@see [Internationalization (i18n) Guide](guide/i18n)
@publicApi
typescript
packages/common/src/i18n/format_number.ts
289
[ "format", "minusSign" ]
true
14
7.44
angular/angular
99,544
jsdoc
false
uniqBy
function uniqBy(array, iteratee) {
  return (array && array.length) ? baseUniq(array, getIteratee(iteratee, 2)) : [];
}
This method is like `_.uniq` except that it accepts `iteratee` which is invoked for each element in `array` to generate the criterion by which uniqueness is computed. The order of result values is determined by the order they occur in the array. The iteratee is invoked with one argument: (value).

@static
@memberOf _
@since 4.0.0
@category Array
@param {Array} array The array to inspect.
@param {Function} [iteratee=_.identity] The iteratee invoked per element.
@returns {Array} Returns the new duplicate free array.
@example

_.uniqBy([2.1, 1.2, 2.3], Math.floor);
// => [2.1, 1.2]

// The `_.property` iteratee shorthand.
_.uniqBy([{ 'x': 1 }, { 'x': 2 }, { 'x': 1 }], 'x');
// => [{ 'x': 1 }, { 'x': 2 }]
javascript
lodash.js
8,520
[ "array", "iteratee" ]
false
3
7.44
lodash/lodash
61,490
jsdoc
false
readLines
public ImmutableList<String> readLines() throws IOException {
  Closer closer = Closer.create();
  try {
    BufferedReader reader = closer.register(openBufferedStream());
    List<String> result = new ArrayList<>();
    String line;
    while ((line = reader.readLine()) != null) {
      result.add(line);
    }
    return ImmutableList.copyOf(result);
  } catch (Throwable e) {
    throw closer.rethrow(e);
  } finally {
    closer.close();
  }
}
Reads all the lines of this source as a list of strings. The returned list will be empty if this source is empty. <p>Like {@link BufferedReader#readLine()}, this method considers a line to be a sequence of text that is terminated by (but does not include) one of {@code \r\n}, {@code \r} or {@code \n}. If the source's content does not end in a line termination sequence, it is treated as if it does. @throws IOException if an I/O error occurs while reading from this source
java
android/guava/src/com/google/common/io/CharSource.java
337
[]
true
3
6.88
google/guava
51,352
javadoc
false
substituteTypeVariables
private static Type substituteTypeVariables(final Type type, final Map<TypeVariable<?>, Type> typeVarAssigns) {
    if (type instanceof TypeVariable<?> && typeVarAssigns != null) {
        final Type replacementType = typeVarAssigns.get(type);
        if (replacementType == null) {
            throw new IllegalArgumentException("missing assignment type for type variable " + type);
        }
        return replacementType;
    }
    return type;
}
Finds the mapping for {@code type} in {@code typeVarAssigns}. @param type the type to be replaced. @param typeVarAssigns the map with type variables. @return the replaced type. @throws IllegalArgumentException if the type cannot be substituted.
java
src/main/java/org/apache/commons/lang3/reflect/TypeUtils.java
1,488
[ "type", "typeVarAssigns" ]
Type
true
4
7.76
apache/commons-lang
2,896
javadoc
false
unreflectUnchecked
private static MethodHandle unreflectUnchecked(final Method method) {
    try {
        return unreflect(method);
    } catch (final IllegalAccessException e) {
        throw new UncheckedIllegalAccessException(e);
    }
}
Throws NullPointerException if {@code method} is {@code null}. @param method The method to test. @return The given method. @throws NullPointerException if {@code method} is {@code null}.
java
src/main/java/org/apache/commons/lang3/function/MethodInvokers.java
244
[ "method" ]
MethodHandle
true
2
7.76
apache/commons-lang
2,896
javadoc
false
laplacian_kernel
def laplacian_kernel(X, Y=None, gamma=None):
    """Compute the laplacian kernel between X and Y.

    The laplacian kernel is defined as:

    .. code-block:: text

        K(x, y) = exp(-gamma ||x-y||_1)

    for each pair of rows x in X and y in Y.

    Read more in the :ref:`User Guide <laplacian_kernel>`.

    .. versionadded:: 0.17

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples_X, n_features)
        A feature array.

    Y : {array-like, sparse matrix} of shape (n_samples_Y, n_features), default=None
        An optional second feature array. If `None`, uses `Y=X`.

    gamma : float, default=None
        If None, defaults to 1.0 / n_features. Otherwise it should be
        strictly positive.

    Returns
    -------
    kernel : ndarray of shape (n_samples_X, n_samples_Y)
        The kernel matrix.

    Examples
    --------
    >>> from sklearn.metrics.pairwise import laplacian_kernel
    >>> X = [[0, 0, 0], [1, 1, 1]]
    >>> Y = [[1, 0, 0], [1, 1, 0]]
    >>> laplacian_kernel(X, Y)
    array([[0.71, 0.51],
           [0.51, 0.71]])
    """
    X, Y = check_pairwise_arrays(X, Y)
    if gamma is None:
        gamma = 1.0 / X.shape[1]

    K = -gamma * manhattan_distances(X, Y)
    xp, _ = get_namespace(X, Y)
    if _is_numpy_namespace(xp):
        np.exp(K, K)  # exponentiate K in-place
    else:
        K = xp.exp(K)
    return K
python
sklearn/metrics/pairwise.py
1,631
[ "X", "Y", "gamma" ]
false
4
7.68
scikit-learn/scikit-learn
64,340
numpy
false
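The formula in the record above, K(x, y) = exp(-gamma * ||x - y||_1), can be checked without scikit-learn or NumPy. A minimal pure-Python sketch (the helper name `laplacian_kernel_entry` is illustrative, not sklearn API):

```python
from math import exp

def laplacian_kernel_entry(x, y, gamma):
    # K(x, y) = exp(-gamma * ||x - y||_1): L1 distance, then exponentiate.
    l1 = sum(abs(a - b) for a, b in zip(x, y))
    return exp(-gamma * l1)

# Same inputs as the docstring example above.
X = [[0, 0, 0], [1, 1, 1]]
Y = [[1, 0, 0], [1, 1, 0]]
gamma = 1.0 / 3  # sklearn's default of 1 / n_features
K = [[laplacian_kernel_entry(x, y, gamma) for y in Y] for x in X]
```

The entries agree with the docstring's `array([[0.71, 0.51], [0.51, 0.71]])` to two decimal places.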
toOffsetDateTime
public static OffsetDateTime toOffsetDateTime(final Date date, final TimeZone timeZone) {
    return OffsetDateTime.ofInstant(date.toInstant(), toZoneId(timeZone));
}
Converts a {@link Date} to a {@link OffsetDateTime}. @param date the Date to convert to a OffsetDateTime, not null. @param timeZone the time zone, null maps to to the default time zone. @return a new OffsetDateTime. @since 3.19.0
java
src/main/java/org/apache/commons/lang3/time/DateUtils.java
1,674
[ "date", "timeZone" ]
OffsetDateTime
true
1
6.64
apache/commons-lang
2,896
javadoc
false
intToHexDigit
public static char intToHexDigit(final int nibble) {
    final char c = Character.forDigit(nibble, 16);
    if (c == Character.MIN_VALUE) {
        throw new IllegalArgumentException("nibble value not between 0 and 15: " + nibble);
    }
    return c;
}
Converts the 4 LSB of an int to a hexadecimal digit. <p> 0 returns '0' </p> <p> 1 returns '1' </p> <p> 10 returns 'A' and so on... </p> @param nibble the 4 bits to convert. @return a hexadecimal digit representing the 4 LSB of {@code nibble}. @throws IllegalArgumentException if {@code nibble < 0} or {@code nibble > 15}.
java
src/main/java/org/apache/commons/lang3/Conversion.java
980
[ "nibble" ]
true
2
8.08
apache/commons-lang
2,896
javadoc
false
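The nibble-to-digit mapping above translates to a short Python analogue. This is an illustrative sketch, not commons-lang API, and it indexes a digit table directly instead of relying on `Character.forDigit`:

```python
def int_to_hex_digit(nibble):
    # Valid only for the 4-bit range 0..15, as in the Java original.
    if not 0 <= nibble <= 15:
        raise ValueError(f"nibble value not between 0 and 15: {nibble}")
    return "0123456789abcdef"[nibble]
```

For values outside 0..15 it raises, mirroring the Java method's IllegalArgumentException.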
exposeTargetClass
static void exposeTargetClass(
        ConfigurableListableBeanFactory beanFactory, @Nullable String beanName, Class<?> targetClass) {
    if (beanName != null && beanFactory.containsBeanDefinition(beanName)) {
        beanFactory.getMergedBeanDefinition(beanName).setAttribute(ORIGINAL_TARGET_CLASS_ATTRIBUTE, targetClass);
    }
}
Expose the given target class for the specified bean, if possible. @param beanFactory the containing ConfigurableListableBeanFactory @param beanName the name of the bean @param targetClass the corresponding target class @since 4.2.3
java
spring-aop/src/main/java/org/springframework/aop/framework/autoproxy/AutoProxyUtils.java
183
[ "beanFactory", "beanName", "targetClass" ]
void
true
3
6.24
spring-projects/spring-framework
59,386
javadoc
false
toStringOnOff
public static String toStringOnOff(final Boolean bool) {
    return toString(bool, ON, OFF, null);
}
Converts a Boolean to a String returning {@code 'on'}, {@code 'off'}, or {@code null}.

<pre>
  BooleanUtils.toStringOnOff(Boolean.TRUE)  = "on"
  BooleanUtils.toStringOnOff(Boolean.FALSE) = "off"
  BooleanUtils.toStringOnOff(null)          = null;
</pre>

@param bool the Boolean to check
@return {@code 'on'}, {@code 'off'}, or {@code null}
java
src/main/java/org/apache/commons/lang3/BooleanUtils.java
1,073
[ "bool" ]
String
true
1
6.48
apache/commons-lang
2,896
javadoc
false
issctype
def issctype(rep):
    """
    Determines whether the given object represents a scalar data-type.

    Parameters
    ----------
    rep : any
        If `rep` is an instance of a scalar dtype, True is returned. If not,
        False is returned.

    Returns
    -------
    out : bool
        Boolean result of check whether `rep` is a scalar dtype.

    See Also
    --------
    issubsctype, issubdtype, obj2sctype, sctype2char

    Examples
    --------
    >>> from numpy._core.numerictypes import issctype
    >>> issctype(np.int32)
    True
    >>> issctype(list)
    False
    >>> issctype(1.1)
    False

    Strings are also a scalar type:

    >>> issctype(np.dtype(np.str_))
    True

    """
    if not isinstance(rep, (type, dtype)):
        return False
    try:
        res = obj2sctype(rep)
        if res and res != object_:
            return True
        else:
            return False
    except Exception:
        return False
python
numpy/_core/numerictypes.py
128
[ "rep" ]
false
5
7.04
numpy/numpy
31,054
numpy
false
get_group
def get_group(self, name) -> DataFrame | Series:
    """
    Construct DataFrame from group with provided name.

    Parameters
    ----------
    name : object
        The name of the group to get as a DataFrame.

    Returns
    -------
    Series or DataFrame
        Get the respective Series or DataFrame corresponding to the group
        provided.

    See Also
    --------
    DataFrameGroupBy.groups: Dictionary representation of the groupings formed
        during a groupby operation.
    DataFrameGroupBy.indices: Provides a mapping of group rows to positions
        of the elements.
    SeriesGroupBy.groups: Dictionary representation of the groupings formed
        during a groupby operation.
    SeriesGroupBy.indices: Provides a mapping of group rows to positions
        of the elements.

    Examples
    --------

    For SeriesGroupBy:

    >>> lst = ["a", "a", "b"]
    >>> ser = pd.Series([1, 2, 3], index=lst)
    >>> ser
    a    1
    a    2
    b    3
    dtype: int64
    >>> ser.groupby(level=0).get_group("a")
    a    1
    a    2
    dtype: int64

    For DataFrameGroupBy:

    >>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]]
    >>> df = pd.DataFrame(
    ...     data, columns=["a", "b", "c"], index=["owl", "toucan", "eagle"]
    ... )
    >>> df
            a  b  c
    owl     1  2  3
    toucan  1  5  6
    eagle   7  8  9
    >>> df.groupby(by=["a"]).get_group((1,))
            a  b  c
    owl     1  2  3
    toucan  1  5  6

    For Resampler:

    >>> ser = pd.Series(
    ...     [1, 2, 3, 4],
    ...     index=pd.DatetimeIndex(
    ...         ["2023-01-01", "2023-01-15", "2023-02-01", "2023-02-15"]
    ...     ),
    ... )
    >>> ser
    2023-01-01    1
    2023-01-15    2
    2023-02-01    3
    2023-02-15    4
    dtype: int64
    >>> ser.resample("MS").get_group("2023-01-01")
    2023-01-01    1
    2023-01-15    2
    dtype: int64
    """
    keys = self.keys
    level = self.level
    # mypy doesn't recognize level/keys as being sized when passed to len
    if (is_list_like(level) and len(level) == 1) or (  # type: ignore[arg-type]
        is_list_like(keys) and len(keys) == 1  # type: ignore[arg-type]
    ):
        # GH#25971
        if isinstance(name, tuple) and len(name) == 1:
            name = name[0]
        else:
            raise KeyError(name)

    inds = self._get_index(name)
    if not len(inds):
        raise KeyError(name)
    return self._selected_obj.iloc[inds]
python
pandas/core/groupby/groupby.py
818
[ "self", "name" ]
DataFrame | Series
true
9
8.56
pandas-dev/pandas
47,362
numpy
false
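Stripped of pandas machinery, `get_group` is "group rows by a key, then select one group, raising KeyError if it is absent". A minimal sketch over dicts (the helper and row shape are illustrative, not pandas API):

```python
from collections import defaultdict

def get_group(rows, key, name):
    # Group rows (dicts) by one key column, then select the named group.
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    if name not in groups:
        # Mirrors the KeyError pandas raises for a missing group.
        raise KeyError(name)
    return groups[name]

# Same data as the DataFrameGroupBy example above.
rows = [{"a": 1, "b": 2}, {"a": 1, "b": 5}, {"a": 7, "b": 8}]
group_1 = get_group(rows, "a", 1)
```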
start
@Override public void start() throws SchedulingException { if (this.scheduler != null) { if (this.jobStore != null) { this.jobStore.initializeConnectionProvider(); } try { startScheduler(this.scheduler, this.startupDelay); } catch (SchedulerException ex) { throw new SchedulingException("Could not start Quartz Scheduler", ex); } } }
Start the Quartz Scheduler, respecting the "startupDelay" setting. @param scheduler the Scheduler to start @param startupDelay the number of seconds to wait before starting the Scheduler asynchronously
java
spring-context-support/src/main/java/org/springframework/scheduling/quartz/SchedulerFactoryBean.java
783
[]
void
true
4
6.24
spring-projects/spring-framework
59,386
javadoc
false
_dispatch_frame_op
def _dispatch_frame_op(
    self, right, func: Callable, axis: AxisInt | None = None
) -> DataFrame:
    """
    Evaluate the frame operation func(left, right) by evaluating
    column-by-column, dispatching to the Series implementation.

    Parameters
    ----------
    right : scalar, Series, or DataFrame
    func : arithmetic or comparison operator
    axis : {None, 0, 1}

    Returns
    -------
    DataFrame

    Notes
    -----
    Caller is responsible for setting np.errstate where relevant.
    """
    # Get the appropriate array-op to apply to each column/block's values.
    array_op = ops.get_array_op(func)

    right = lib.item_from_zerodim(right)
    if not is_list_like(right):
        # i.e. scalar, faster than checking np.ndim(right) == 0
        bm = self._mgr.apply(array_op, right=right)
        return self._constructor_from_mgr(bm, axes=bm.axes)

    elif isinstance(right, DataFrame):
        assert self.index.equals(right.index)
        assert self.columns.equals(right.columns)
        # TODO: The previous assertion `assert right._indexed_same(self)`
        #  fails in cases with empty columns reached via
        #  _frame_arith_method_with_reindex

        # TODO operate_blockwise expects a manager of the same type
        bm = self._mgr.operate_blockwise(
            right._mgr,
            array_op,
        )
        return self._constructor_from_mgr(bm, axes=bm.axes)

    elif isinstance(right, Series) and axis == 1:
        # axis=1 means we want to operate row-by-row
        assert right.index.equals(self.columns)

        right = right._values
        # maybe_align_as_frame ensures we do not have an ndarray here
        assert not isinstance(right, np.ndarray)

        arrays = [
            array_op(_left, _right)
            for _left, _right in zip(self._iter_column_arrays(), right, strict=True)
        ]

    elif isinstance(right, Series):
        assert right.index.equals(self.index)
        right = right._values

        arrays = [array_op(left, right) for left in self._iter_column_arrays()]

    else:
        raise NotImplementedError(right)

    return type(self)._from_arrays(
        arrays, self.columns, self.index, verify_integrity=False
    )
python
pandas/core/frame.py
8,687
[ "self", "right", "func", "axis" ]
DataFrame
true
7
6.32
pandas-dev/pandas
47,362
numpy
false
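The dispatch pattern above, applying one operator column-by-column and branching on the shape of `right`, can be sketched without pandas. This simplified, hypothetical helper covers only the scalar and aligned-frame branches, over dict-of-lists "frames":

```python
import operator

def dispatch_columnwise(left_cols, right, op):
    # Scalar case: broadcast `right` across every column.
    if not isinstance(right, dict):
        return {name: [op(v, right) for v in col] for name, col in left_cols.items()}
    # Aligned-frame case: matching column names and lengths are assumed,
    # echoing the index/columns assertions in the pandas code.
    return {
        name: [op(a, b) for a, b in zip(col, right[name])]
        for name, col in left_cols.items()
    }

frame = {"x": [1, 2], "y": [3, 4]}
scalar_result = dispatch_columnwise(frame, 10, operator.add)
frame_result = dispatch_columnwise(frame, {"x": [1, 1], "y": [2, 2]}, operator.mul)
```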
equals
@Override
public boolean equals(final Object obj) {
    if (this == obj) {
        return true;
    }
    if (!(obj instanceof ConstantInitializer<?>)) {
        return false;
    }
    final ConstantInitializer<?> c = (ConstantInitializer<?>) obj;
    return Objects.equals(getObject(), c.getObject());
}
Compares this object with another one. This implementation returns <strong>true</strong> if and only if the passed in object is an instance of {@link ConstantInitializer} which refers to an object equals to the object managed by this instance. @param obj the object to compare to @return a flag whether the objects are equal
java
src/main/java/org/apache/commons/lang3/concurrent/ConstantInitializer.java
69
[ "obj" ]
true
3
7.76
apache/commons-lang
2,896
javadoc
false
reconstruct_from_patches_2d
def reconstruct_from_patches_2d(patches, image_size):
    """Reconstruct the image from all of its patches.

    Patches are assumed to overlap and the image is constructed by filling in
    the patches from left to right, top to bottom, averaging the overlapping
    regions.

    Read more in the :ref:`User Guide <image_feature_extraction>`.

    Parameters
    ----------
    patches : ndarray of shape (n_patches, patch_height, patch_width) or \
            (n_patches, patch_height, patch_width, n_channels)
        The complete set of patches. If the patches contain colour
        information, channels are indexed along the last dimension: RGB
        patches would have `n_channels=3`.

    image_size : tuple of int (image_height, image_width) or \
            (image_height, image_width, n_channels)
        The size of the image that will be reconstructed.

    Returns
    -------
    image : ndarray of shape image_size
        The reconstructed image.

    Examples
    --------
    >>> from sklearn.datasets import load_sample_image
    >>> from sklearn.feature_extraction import image
    >>> one_image = load_sample_image("china.jpg")
    >>> print('Image shape: {}'.format(one_image.shape))
    Image shape: (427, 640, 3)
    >>> image_patches = image.extract_patches_2d(image=one_image, patch_size=(10, 10))
    >>> print('Patches shape: {}'.format(image_patches.shape))
    Patches shape: (263758, 10, 10, 3)
    >>> image_reconstructed = image.reconstruct_from_patches_2d(
    ...     patches=image_patches,
    ...     image_size=one_image.shape
    ... )
    >>> print(f"Reconstructed shape: {image_reconstructed.shape}")
    Reconstructed shape: (427, 640, 3)
    """
    i_h, i_w = image_size[:2]
    p_h, p_w = patches.shape[1:3]
    img = np.zeros(image_size)
    # compute the dimensions of the patches array
    n_h = i_h - p_h + 1
    n_w = i_w - p_w + 1
    for p, (i, j) in zip(patches, product(range(n_h), range(n_w))):
        img[i : i + p_h, j : j + p_w] += p

    for i in range(i_h):
        for j in range(i_w):
            # divide by the amount of overlap
            # XXX: is this the most efficient way? memory-wise yes, cpu wise?
            img[i, j] /= float(min(i + 1, p_h, i_h - i) * min(j + 1, p_w, i_w - j))
    return img
python
sklearn/feature_extraction/image.py
470
[ "patches", "image_size" ]
false
4
7.12
scikit-learn/scikit-learn
64,340
numpy
false
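The overlap-averaging idea above (accumulate every patch in place, then divide each position by how many patches covered it) is easiest to see in one dimension. A pure-Python sketch with an illustrative helper name, not sklearn API:

```python
def reconstruct_from_patches_1d(patches, signal_len):
    # Lay overlapping, equal-length patches down left to right, summing
    # contributions, then average each position by its coverage count.
    total = [0.0] * signal_len
    counts = [0] * signal_len
    for start, patch in enumerate(patches):
        for offset, value in enumerate(patch):
            total[start + offset] += value
            counts[start + offset] += 1
    return [t / c for t, c in zip(total, counts)]

# All length-2 patches extracted from [1, 2, 3, 4]; the reconstruction
# recovers the original signal exactly because the patches are consistent.
signal = reconstruct_from_patches_1d([[1, 2], [2, 3], [3, 4]], 4)
```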
maxBucketIndex
OptionalLong maxBucketIndex();
@return the highest populated bucket index, or an empty optional if no buckets are populated
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogram.java
140
[]
OptionalLong
true
1
6
elastic/elasticsearch
75,680
javadoc
false
throwIfPropertyFound
static void throwIfPropertyFound(ConfigDataEnvironmentContributor contributor, ConfigurationPropertyName name) {
    ConfigurationPropertySource source = contributor.getConfigurationPropertySource();
    ConfigurationProperty property = (source != null) ? source.getConfigurationProperty(name) : null;
    if (property != null) {
        PropertySource<?> propertySource = contributor.getPropertySource();
        ConfigDataResource location = contributor.getResource();
        Assert.state(propertySource != null, "'propertySource' must not be null");
        throw new InactiveConfigDataAccessException(propertySource, location, name.toString(), property.getOrigin());
    }
}
Throw an {@link InactiveConfigDataAccessException} if the given {@link ConfigDataEnvironmentContributor} contains the property. @param contributor the contributor to check @param name the name to check
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/InactiveConfigDataAccessException.java
122
[ "contributor", "name" ]
void
true
3
6.08
spring-projects/spring-boot
79,428
javadoc
false
nextAlphanumeric
public String nextAlphanumeric(final int count) {
    return next(count, true, true);
}
Creates a random string whose length is the number of characters specified. <p> Characters will be chosen from the set of Latin alphabetic characters (a-z, A-Z) and the digits 0-9. </p> @param count the length of random string to create. @return the random string. @throws IllegalArgumentException if {@code count} &lt; 0.
java
src/main/java/org/apache/commons/lang3/RandomStringUtils.java
840
[ "count" ]
String
true
1
6.8
apache/commons-lang
2,896
javadoc
false
handleTimeouts
int handleTimeouts(Collection<Call> calls, String msg) {
    int numTimedOut = 0;
    for (Iterator<Call> iter = calls.iterator(); iter.hasNext(); ) {
        Call call = iter.next();
        int remainingMs = calcTimeoutMsRemainingAsInt(now, call.deadlineMs);
        if (remainingMs < 0) {
            call.fail(now, new TimeoutException(msg + " Call: " + call.callName));
            iter.remove();
            numTimedOut++;
        } else {
            nextTimeoutMs = Math.min(nextTimeoutMs, remainingMs);
        }
    }
    return numTimedOut;
}
Check for calls which have timed out. Timed out calls will be removed and failed. The remaining milliseconds until the next timeout will be updated. @param calls The collection of calls. @return The number of calls which were timed out.
java
clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java
1,045
[ "calls", "msg" ]
true
3
8.24
apache/kafka
31,560
javadoc
false
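The timeout scan above does two things in one pass: fail and remove expired calls, and track the time until the next deadline. A simplified Python sketch (illustrative names; unlike the Java code, which treats only a strictly negative remainder as expired, this version expires calls whose deadline has been reached):

```python
def handle_timeouts(calls, now):
    # Partition into expired and surviving calls, mutate the list in place,
    # and report (number expired, time until the next surviving deadline).
    timed_out = [c for c in calls if c["deadline"] <= now]
    calls[:] = [c for c in calls if c["deadline"] > now]
    next_timeout = min((c["deadline"] - now for c in calls), default=None)
    return len(timed_out), next_timeout

calls = [{"name": "a", "deadline": 5}, {"name": "b", "deadline": 20}]
n, nxt = handle_timeouts(calls, now=10)
```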
nextCleanInternal
private int nextCleanInternal() throws JSONException {
    while (this.pos < this.in.length()) {
        int c = this.in.charAt(this.pos++);
        switch (c) {
            case '\t', ' ', '\n', '\r':
                continue;
            case '/':
                if (this.pos == this.in.length()) {
                    return c;
                }
                char peek = this.in.charAt(this.pos);
                switch (peek) {
                    case '*':
                        // skip a /* c-style comment */
                        this.pos++;
                        int commentEnd = this.in.indexOf("*/", this.pos);
                        if (commentEnd == -1) {
                            throw syntaxError("Unterminated comment");
                        }
                        this.pos = commentEnd + 2;
                        continue;
                    case '/':
                        // skip a // end-of-line comment
                        this.pos++;
                        skipToEndOfLine();
                        continue;
                    default:
                        return c;
                }
            case '#':
                /*
                 * Skip a # hash end-of-line comment. The JSON RFC doesn't specify
                 * this behavior, but it's required to parse existing documents. See
                 * https://b/2571423.
                 */
                skipToEndOfLine();
                continue;
            default:
                return c;
        }
    }
    return -1;
}
Returns the next value from the input. @return a {@link JSONObject}, {@link JSONArray}, String, Boolean, Integer, Long, Double or {@link JSONObject#NULL}. @throws JSONException if the input is malformed.
java
cli/spring-boot-cli/src/json-shade/java/org/springframework/boot/cli/json/JSONTokener.java
112
[]
true
4
8.24
spring-projects/spring-boot
79,428
javadoc
false
get_named_param_nodes
def get_named_param_nodes(graph: fx.Graph) -> dict[str, fx.Node]:
    """Get parameter nodes mapped by their fully qualified names.

    This function traverses the graph to find all parameter input nodes and
    returns them in a dictionary where keys are the parameter names (FQNs)
    and values are the corresponding FX nodes.

    Args:
        graph: The FX joint graph with descriptors

    Returns:
        A dictionary mapping parameter names (str) to their corresponding FX
        nodes.

    Raises:
        RuntimeError: If subclass tensors are encountered (not yet supported),
            as with subclasses a FQN does not necessarily map to a single
            plain tensor.
    """
    r = {}
    for n in graph.nodes:
        if n.op == "placeholder":
            desc = n.meta["desc"]
            if isinstance(desc, SubclassGetAttrAOTInput):
                _raise_fqn_subclass_not_implemented(n, desc)
            elif isinstance(desc, ParamAOTInput):
                r[desc.target] = n
    return r
python
torch/_functorch/_aot_autograd/fx_utils.py
223
[ "graph" ]
dict[str, fx.Node]
true
5
7.92
pytorch/pytorch
96,034
google
false
size
def size(self) -> DataFrame | Series:
    """
    Compute group sizes.

    Returns
    -------
    DataFrame or Series
        Number of rows in each group as a Series if as_index is True
        or a DataFrame if as_index is False.
    %(see_also)s
    Examples
    --------

    For SeriesGroupBy:

    >>> lst = ["a", "a", "b"]
    >>> ser = pd.Series([1, 2, 3], index=lst)
    >>> ser
    a    1
    a    2
    b    3
    dtype: int64
    >>> ser.groupby(level=0).size()
    a    2
    b    1
    dtype: int64

    >>> data = [[1, 2, 3], [1, 5, 6], [7, 8, 9]]
    >>> df = pd.DataFrame(
    ...     data, columns=["a", "b", "c"], index=["owl", "toucan", "eagle"]
    ... )
    >>> df
            a  b  c
    owl     1  2  3
    toucan  1  5  6
    eagle   7  8  9
    >>> df.groupby("a").size()
    a
    1    2
    7    1
    dtype: int64

    For Resampler:

    >>> ser = pd.Series(
    ...     [1, 2, 3],
    ...     index=pd.DatetimeIndex(["2023-01-01", "2023-01-15", "2023-02-01"]),
    ... )
    >>> ser
    2023-01-01    1
    2023-01-15    2
    2023-02-01    3
    dtype: int64
    >>> ser.resample("MS").size()
    2023-01-01    2
    2023-02-01    1
    Freq: MS, dtype: int64
    """
    result = self._grouper.size()
    dtype_backend: None | Literal["pyarrow", "numpy_nullable"] = None
    if isinstance(self.obj, Series):
        if isinstance(self.obj.array, ArrowExtensionArray):
            if isinstance(self.obj.array, ArrowStringArray):
                if self.obj.array.dtype.na_value is np.nan:
                    dtype_backend = None
                else:
                    dtype_backend = "numpy_nullable"
            else:
                dtype_backend = "pyarrow"
        elif isinstance(self.obj.array, BaseMaskedArray):
            dtype_backend = "numpy_nullable"
    # TODO: For DataFrames what if columns are mixed arrow/numpy/masked?

    # GH28330 preserve subclassed Series/DataFrames through calls
    if isinstance(self.obj, Series):
        result = self._obj_1d_constructor(result, name=self.obj.name)
    else:
        result = self._obj_1d_constructor(result)

    if dtype_backend is not None:
        result = result.convert_dtypes(
            infer_objects=False,
            convert_string=False,
            convert_boolean=False,
            convert_floating=False,
            dtype_backend=dtype_backend,
        )

    if not self.as_index:
        result = result.rename("size").reset_index()
    return result
python
pandas/core/groupby/groupby.py
2,861
[ "self" ]
DataFrame | Series
true
12
8.4
pandas-dev/pandas
47,362
unknown
false
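Beneath the dtype-backend bookkeeping, the core of `size()` is counting rows per group label. Without pandas, that is a one-liner with the standard library:

```python
from collections import Counter

# Same labels as the SeriesGroupBy example above: group sizes per label.
labels = ["a", "a", "b"]
sizes = Counter(labels)
```

`Counter` gives the same counts the docstring example shows (`a: 2, b: 1`), minus the index and dtype handling pandas layers on top.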
correctedDoForward
@Nullable
B correctedDoForward(@Nullable A a) {
    if (handleNullAutomatically) {
        // TODO(kevinb): we shouldn't be checking for a null result at runtime. Assert?
        return a == null ? null : checkNotNull(doForward(a));
    } else {
        return unsafeDoForward(a);
    }
}
Returns a representation of {@code a} as an instance of type {@code B}. @return the converted value; is null <i>if and only if</i> {@code a} is null
java
android/guava/src/com/google/common/base/Converter.java
200
[ "a" ]
B
true
3
7.2
google/guava
51,352
javadoc
false
toString
@Override
public String toString() {
    final String msgStr = Objects.toString(message, StringUtils.EMPTY);
    final String formattedTime = formatTime();
    return msgStr.isEmpty() ? formattedTime : msgStr + StringUtils.SPACE + formattedTime;
}
Gets a summary of the time that this StopWatch recorded as a string. <p> The format used is ISO 8601-like, [<em>message</em> ]<em>hours</em>:<em>minutes</em>:<em>seconds</em>.<em>milliseconds</em>. Returns the prefix {@code "message "} if the message is set. </p> @return the time as a String. @since 3.10
java
src/main/java/org/apache/commons/lang3/time/StopWatch.java
815
[]
String
true
2
7.76
apache/commons-lang
2,896
javadoc
false
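The [message ]hours:minutes:seconds.milliseconds summary can be sketched in Python; the exact zero-padding is an assumption based on the ISO 8601-like description above, not commons-lang's formatter:

```python
def stopwatch_summary(millis, message=None):
    """Format elapsed milliseconds as hours:minutes:seconds.milliseconds,
    prefixed by the message (plus a space) when one is set."""
    hours, rem = divmod(millis, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1_000)
    formatted = f"{hours}:{minutes:02d}:{seconds:02d}.{ms:03d}"
    return f"{message} {formatted}" if message else formatted
```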
getTotalTransformationCost
private static float getTotalTransformationCost(final Class<?>[] srcArgs, final Executable executable) { final Class<?>[] destArgs = executable.getParameterTypes(); final boolean isVarArgs = executable.isVarArgs(); // "source" and "destination" are the actual and declared args respectively. float totalCost = 0.0f; final long normalArgsLen = isVarArgs ? destArgs.length - 1 : destArgs.length; if (srcArgs.length < normalArgsLen) { return Float.MAX_VALUE; } for (int i = 0; i < normalArgsLen; i++) { totalCost += getObjectTransformationCost(srcArgs[i], destArgs[i]); } if (isVarArgs) { // When isVarArgs is true, srcArgs and dstArgs may differ in length. // There are two special cases to consider: final boolean noVarArgsPassed = srcArgs.length < destArgs.length; final boolean explicitArrayForVarargs = srcArgs.length == destArgs.length && srcArgs[srcArgs.length - 1] != null && srcArgs[srcArgs.length - 1].isArray(); final float varArgsCost = 0.001f; final Class<?> destClass = destArgs[destArgs.length - 1].getComponentType(); if (noVarArgsPassed) { // When no varargs passed, the best match is the most generic matching type, not the most specific. totalCost += getObjectTransformationCost(destClass, Object.class) + varArgsCost; } else if (explicitArrayForVarargs) { final Class<?> sourceClass = srcArgs[srcArgs.length - 1].getComponentType(); totalCost += getObjectTransformationCost(sourceClass, destClass) + varArgsCost; } else { // This is typical varargs case. for (int i = destArgs.length - 1; i < srcArgs.length; i++) { final Class<?> srcClass = srcArgs[i]; totalCost += getObjectTransformationCost(srcClass, destClass) + varArgsCost; } } } return totalCost; }
Gets the sum of the object transformation cost for each class in the source argument list. @param srcArgs The source arguments. @param executable The executable to calculate transformation costs for. @return The total transformation cost.
java
src/main/java/org/apache/commons/lang3/reflect/MemberUtils.java
199
[ "srcArgs", "executable" ]
true
10
8.08
apache/commons-lang
2,896
javadoc
false
isSorted
public static <T> boolean isSorted(final T[] array, final Comparator<T> comparator) { Objects.requireNonNull(comparator, "comparator"); if (getLength(array) < 2) { return true; } T previous = array[0]; final int n = array.length; for (int i = 1; i < n; i++) { final T current = array[i]; if (comparator.compare(previous, current) > 0) { return false; } previous = current; } return true; }
Tests whether the provided array is sorted according to the provided {@link Comparator}. @param array the array to check. @param comparator the {@link Comparator} to compare over. @param <T> the datatype of the array. @return whether the array is sorted. @throws NullPointerException if {@code comparator} is {@code null}. @since 3.4
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
3,745
[ "array", "comparator" ]
true
4
7.92
apache/commons-lang
2,896
javadoc
false
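A Python sketch of the same early-exit scan, with an optional key function standing in for the Comparator (null/short arrays count as sorted, as above):

```python
def is_sorted(array, key=None):
    """Return True when `array` is non-decreasing; None or short arrays are sorted."""
    if array is None or len(array) < 2:
        return True
    previous = array[0]
    for current in array[1:]:
        a, b = (key(previous), key(current)) if key else (previous, current)
        if a > b:  # one out-of-order pair is enough to answer False
            return False
        previous = current
    return True
```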
hex2dToGeo
static LatLng hex2dToGeo(double x, double y, int face, int res, boolean substrate) { // calculate (r, theta) in hex2d double r = Math.sqrt(x * x + y * y); if (r < Constants.EPSILON) { return faceCenterGeo[face]; } double theta = FastMath.atan2(y, x); // scale for current resolution length u for (int i = 0; i < res; i++) { r *= Constants.M_RSQRT7; } // scale accordingly if this is a substrate grid if (substrate) { r *= M_ONETHIRD; if (H3Index.isResolutionClassIII(res)) { r *= Constants.M_RSQRT7; } } r *= Constants.RES0_U_GNOMONIC; // perform inverse gnomonic scaling of r r = FastMath.atan(r); // adjust theta for Class III // if a substrate grid, then it's already been adjusted for Class III if (substrate == false && H3Index.isResolutionClassIII(res)) { theta = posAngleRads(theta + Constants.M_AP7_ROT_RADS); } // find theta as an azimuth theta = posAngleRads(faceAxesAzRadsCII[face][0] - theta); // now find the point at (r,theta) from the face center return Vec3d.faceCenterPoint[face].geoAzDistanceRads(theta, r); }
Determines the center point in spherical coordinates of a cell given by the provided 2D hex coordinates on a particular icosahedral face. @param x The x component of the 2D hex coordinates. @param y The y component of the 2D hex coordinates. @param face The icosahedral face upon which the 2D hex coordinate system is centered. @param res The H3 resolution of the cell. @param substrate Indicates whether or not this grid is actually a substrate grid relative to the specified resolution. @return The spherical coordinates of the cell center point.
java
libs/h3/src/main/java/org/elasticsearch/h3/Vec2d.java
122
[ "x", "y", "face", "res", "substrate" ]
LatLng
true
7
6.88
elastic/elasticsearch
75,680
javadoc
false
get
def get(self, i): """ Extract element from each component at specified position or with specified key. Extract element from lists, tuples, dict, or strings in each element in the Series/Index. Parameters ---------- i : int or hashable dict label Position or key of element to extract. Returns ------- Series or Index Series or Index where each value is the extracted element from the corresponding input component. See Also -------- Series.str.extract : Extract capture groups in the regex as columns in a DataFrame. Examples -------- >>> s = pd.Series( ... [ ... "String", ... (1, 2, 3), ... ["a", "b", "c"], ... 123, ... -456, ... {1: "Hello", "2": "World"}, ... ] ... ) >>> s 0 String 1 (1, 2, 3) 2 [a, b, c] 3 123 4 -456 5 {1: 'Hello', '2': 'World'} dtype: object >>> s.str.get(1) 0 t 1 2 2 b 3 NaN 4 NaN 5 Hello dtype: object >>> s.str.get(-1) 0 g 1 3 2 c 3 NaN 4 NaN 5 None dtype: object Return element with given key >>> s = pd.Series( ... [ ... {"name": "Hello", "value": "World"}, ... {"name": "Goodbye", "value": "Planet"}, ... ] ... ) >>> s.str.get("name") 0 Hello 1 Goodbye dtype: object """ result = self._data.array._str_get(i) return self._wrap_result(result)
Extract element from each component at specified position or with specified key. Extract element from lists, tuples, dict, or strings in each element in the Series/Index. Parameters ---------- i : int or hashable dict label Position or key of element to extract. Returns ------- Series or Index Series or Index where each value is the extracted element from the corresponding input component. See Also -------- Series.str.extract : Extract capture groups in the regex as columns in a DataFrame. Examples -------- >>> s = pd.Series( ... [ ... "String", ... (1, 2, 3), ... ["a", "b", "c"], ... 123, ... -456, ... {1: "Hello", "2": "World"}, ... ] ... ) >>> s 0 String 1 (1, 2, 3) 2 [a, b, c] 3 123 4 -456 5 {1: 'Hello', '2': 'World'} dtype: object >>> s.str.get(1) 0 t 1 2 2 b 3 NaN 4 NaN 5 Hello dtype: object >>> s.str.get(-1) 0 g 1 3 2 c 3 NaN 4 NaN 5 None dtype: object Return element with given key >>> s = pd.Series( ... [ ... {"name": "Hello", "value": "World"}, ... {"name": "Goodbye", "value": "Planet"}, ... ] ... ) >>> s.str.get("name") 0 Hello 1 Goodbye dtype: object
python
pandas/core/strings/accessor.py
1,073
[ "self", "i" ]
false
1
6.4
pandas-dev/pandas
47,362
numpy
false
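The positional/key extraction can be sketched without pandas; here a failed lookup yields None, whereas the real accessor produces NaN for missing positions on non-dict elements:

```python
def element_get(obj, i):
    """Extract obj[i] from a list, tuple, dict, or string; None when absent."""
    try:
        return obj[i]
    except (TypeError, KeyError, IndexError):
        # TypeError: not subscriptable (e.g. ints); KeyError/IndexError: missing
        return None
```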
template_filter
def template_filter( self, name: T_template_filter | str | None = None ) -> T_template_filter | t.Callable[[T_template_filter], T_template_filter]: """Decorate a function to register it as a custom Jinja filter. The name is optional. The decorator may be used without parentheses. .. code-block:: python @app.template_filter("reverse") def reverse_filter(s): return reversed(s) The :meth:`add_template_filter` method may be used to register a function later rather than decorating. :param name: The name to register the filter as. If not given, uses the function's name. """ if callable(name): self.add_template_filter(name) return name def decorator(f: T_template_filter) -> T_template_filter: self.add_template_filter(f, name=name) return f return decorator
Decorate a function to register it as a custom Jinja filter. The name is optional. The decorator may be used without parentheses. .. code-block:: python @app.template_filter("reverse") def reverse_filter(s): return reversed(s) The :meth:`add_template_filter` method may be used to register a function later rather than decorating. :param name: The name to register the filter as. If not given, uses the function's name.
python
src/flask/sansio/app.py
667
[ "self", "name" ]
T_template_filter | t.Callable[[T_template_filter], T_template_filter]
true
2
6.4
pallets/flask
70,946
sphinx
false
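The "decorator usable with or without parentheses" trick above hinges on checking whether `name` is actually the decorated function; a standalone sketch (the `Registry` class is hypothetical, not Flask's API):

```python
class Registry:
    def __init__(self):
        self.filters = {}

    def template_filter(self, name=None):
        # Bare usage (@registry.template_filter): `name` is the function itself.
        if callable(name):
            self.filters[name.__name__] = name
            return name

        # Parenthesized usage: return a decorator that registers under `name`.
        def decorator(f):
            self.filters[name or f.__name__] = f
            return f

        return decorator

registry = Registry()

@registry.template_filter
def upper(s):
    return s.upper()

@registry.template_filter("rev")
def reverse_filter(s):
    return s[::-1]
```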
equals
@Override public boolean equals(Object obj) { if (this == obj) { return true; } if (obj == null || getClass() != obj.getClass()) { return false; } return Objects.equals(this.cause, ((Error) obj).cause); }
Indicates whether some other object is "equal to" this error, comparing by the underlying cause. @param obj the object to compare with @return {@code true} if the given object is an {@code Error} with an equal cause
java
core/spring-boot/src/main/java/org/springframework/boot/web/error/Error.java
77
[ "obj" ]
true
4
7.04
spring-projects/spring-boot
79,428
javadoc
false
delete
def delete(self, loc) -> list[Block]: """Deletes the locs from the block. We split the block to avoid copying the underlying data. We create new blocks for every connected segment of the initial block that is not deleted. The new blocks point to the initial array. """ if not is_list_like(loc): loc = [loc] if self.ndim == 1: values = cast(np.ndarray, self.values) values = np.delete(values, loc) mgr_locs = self._mgr_locs.delete(loc) return [type(self)(values, placement=mgr_locs, ndim=self.ndim)] if np.max(loc) >= self.values.shape[0]: raise IndexError # Add one out-of-bounds indexer as maximum to collect # all columns after our last indexer if any loc = np.concatenate([loc, [self.values.shape[0]]]) mgr_locs_arr = self._mgr_locs.as_array new_blocks: list[Block] = [] previous_loc = -1 # TODO(CoW): This is tricky, if parent block goes out of scope # all split blocks are referencing each other even though they # don't share data refs = self.refs if self.refs.has_reference() else None for idx in loc: if idx == previous_loc + 1: # There is no column between current and last idx pass else: # No overload variant of "__getitem__" of "ExtensionArray" matches # argument type "Tuple[slice, slice]" values = self.values[previous_loc + 1 : idx, :] # type: ignore[call-overload] locs = mgr_locs_arr[previous_loc + 1 : idx] nb = type(self)( values, placement=BlockPlacement(locs), ndim=self.ndim, refs=refs ) new_blocks.append(nb) previous_loc = idx return new_blocks
Deletes the locs from the block. We split the block to avoid copying the underlying data. We create new blocks for every connected segment of the initial block that is not deleted. The new blocks point to the initial array.
python
pandas/core/internals/blocks.py
1,541
[ "self", "loc" ]
list[Block]
true
8
6
pandas-dev/pandas
47,362
unknown
false
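The splitting strategy above — keep one view per run of consecutive surviving rows instead of copying — can be sketched over plain index arithmetic:

```python
def surviving_segments(n_rows, loc):
    """Return [start, stop) ranges of consecutive rows that survive deleting `loc`."""
    boundaries = sorted(set(loc)) + [n_rows]  # sentinel collects the tail segment
    segments = []
    previous = -1
    for idx in boundaries:
        if idx > previous + 1:  # at least one row sits between two deletions
            segments.append((previous + 1, idx))
        previous = idx
    return segments
```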
lreshape
def lreshape(data: DataFrame, groups: dict, dropna: bool = True) -> DataFrame: """ Reshape wide-format data to long. Generalized inverse of DataFrame.pivot. Accepts a dictionary, ``groups``, in which each key is a new column name and each value is a list of old column names that will be "melted" under the new column name as part of the reshape. Parameters ---------- data : DataFrame The wide-format DataFrame. groups : dict {new_name : list_of_columns}. dropna : bool, default True Do not include columns whose entries are all NaN. Returns ------- DataFrame Reshaped DataFrame. See Also -------- melt : Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. pivot : Create a spreadsheet-style pivot table as a DataFrame. DataFrame.pivot : Pivot without aggregation that can handle non-numeric data. DataFrame.pivot_table : Generalization of pivot that can handle duplicate values for one index/column pair. DataFrame.unstack : Pivot based on the index values instead of a column. wide_to_long : Wide panel to long format. Less flexible but more user-friendly than melt. Examples -------- >>> data = pd.DataFrame( ... { ... "hr1": [514, 573], ... "hr2": [545, 526], ... "team": ["Red Sox", "Yankees"], ... "year1": [2007, 2007], ... "year2": [2008, 2008], ... } ... ) >>> data hr1 hr2 team year1 year2 0 514 545 Red Sox 2007 2008 1 573 526 Yankees 2007 2008 >>> pd.lreshape(data, {"year": ["year1", "year2"], "hr": ["hr1", "hr2"]}) team year hr 0 Red Sox 2007 514 1 Yankees 2007 573 2 Red Sox 2008 545 3 Yankees 2008 526 """ mdata = {} pivot_cols = [] all_cols: set[Hashable] = set() K = len(next(iter(groups.values()))) for target, names in groups.items(): if len(names) != K: raise ValueError("All column lists must be same length") to_concat = [data[col]._values for col in names] mdata[target] = concat_compat(to_concat) pivot_cols.append(target) all_cols = all_cols.union(names) id_cols = list(data.columns.difference(all_cols)) for col in id_cols: mdata[col] = np.tile(data[col]._values, K) if dropna: mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool) for c in pivot_cols: mask &= notna(mdata[c]) if not mask.all(): mdata = {k: v[mask] for k, v in mdata.items()} return data._constructor(mdata, columns=id_cols + pivot_cols)
Reshape wide-format data to long. Generalized inverse of DataFrame.pivot. Accepts a dictionary, ``groups``, in which each key is a new column name and each value is a list of old column names that will be "melted" under the new column name as part of the reshape. Parameters ---------- data : DataFrame The wide-format DataFrame. groups : dict {new_name : list_of_columns}. dropna : bool, default True Do not include columns whose entries are all NaN. Returns ------- DataFrame Reshaped DataFrame. See Also -------- melt : Unpivot a DataFrame from wide to long format, optionally leaving identifiers set. pivot : Create a spreadsheet-style pivot table as a DataFrame. DataFrame.pivot : Pivot without aggregation that can handle non-numeric data. DataFrame.pivot_table : Generalization of pivot that can handle duplicate values for one index/column pair. DataFrame.unstack : Pivot based on the index values instead of a column. wide_to_long : Wide panel to long format. Less flexible but more user-friendly than melt. Examples -------- >>> data = pd.DataFrame( ... { ... "hr1": [514, 573], ... "hr2": [545, 526], ... "team": ["Red Sox", "Yankees"], ... "year1": [2007, 2007], ... "year2": [2008, 2008], ... } ... ) >>> data hr1 hr2 team year1 year2 0 514 545 Red Sox 2007 2008 1 573 526 Yankees 2007 2008 >>> pd.lreshape(data, {"year": ["year1", "year2"], "hr": ["hr1", "hr2"]}) team year hr 0 Red Sox 2007 514 1 Yankees 2007 573 2 Red Sox 2008 545 3 Yankees 2008 526
python
pandas/core/reshape/melt.py
282
[ "data", "groups", "dropna" ]
DataFrame
true
7
8.4
pandas-dev/pandas
47,362
numpy
false
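The melting logic can be sketched in pure Python over row dicts (no NaN dropping, which the `dropna` flag adds in the real function):

```python
def lreshape_rows(rows, groups):
    """Wide-to-long reshape over a list of row dicts.

    `groups` maps each new column name to the old columns melted beneath it;
    all lists must share one length K, as pd.lreshape requires.
    """
    k = len(next(iter(groups.values())))
    if any(len(names) != k for names in groups.values()):
        raise ValueError("All column lists must be same length")
    melted = {name for names in groups.values() for name in names}
    out = []
    for j in range(k):          # one pass per melted column position
        for row in rows:
            new_row = {c: v for c, v in row.items() if c not in melted}
            for target, names in groups.items():
                new_row[target] = row[names[j]]
            out.append(new_row)
    return out
```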
newInstance
public static @Nullable ColorConverter newInstance(@Nullable Configuration config, @Nullable String[] options) { if (options.length < 1) { LOGGER.error("Incorrect number of options on style. Expected at least 1, received {}", options.length); return null; } if (options[0] == null) { LOGGER.error("No pattern supplied on style"); return null; } PatternParser parser = PatternLayout.createPatternParser(config); List<PatternFormatter> formatters = parser.parse(options[0]); AnsiElement element = (options.length != 1) ? ELEMENTS.get(options[1]) : null; return new ColorConverter(formatters, element); }
Creates a new instance of the class. Required by Log4J2. @param config the configuration @param options the options @return a new instance, or {@code null} if the options are invalid
java
core/spring-boot/src/main/java/org/springframework/boot/logging/log4j2/ColorConverter.java
123
[ "config", "options" ]
ColorConverter
true
4
8.24
spring-projects/spring-boot
79,428
javadoc
false
isParsable
public static boolean isParsable(final String str) { if (StringUtils.isEmpty(str)) { return false; } if (str.charAt(0) == '-') { if (str.length() == 1) { return false; } return isParsableDecimal(str, 1); } return isParsableDecimal(str, 0); }
Checks whether the given String is a parsable number. <p> Parsable numbers include those Strings understood by {@link Integer#parseInt(String)}, {@link Long#parseLong(String)}, {@link Float#parseFloat(String)} or {@link Double#parseDouble(String)}. This method can be used instead of catching {@link java.text.ParseException} when calling one of those methods. </p> <p> Hexadecimal and scientific notations are <strong>not</strong> considered parsable. See {@link #isCreatable(String)} on those cases. </p> <p> {@code null} and empty String will return {@code false}. </p> @param str the String to check. @return {@code true} if the string is a parsable number. @since 3.4
java
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
726
[ "str" ]
true
4
7.76
apache/commons-lang
2,896
javadoc
false
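The "plain decimal only" contract (no hex, no scientific notation) can be approximated with a regex; the exact edge cases (e.g. a trailing decimal point) are an assumption here, not commons-lang's precise grammar:

```python
import re

# Optional leading minus, then digits with an optional fraction, or a bare fraction.
_PARSABLE = re.compile(r"-?(\d+(\.\d+)?|\.\d+)$")

def is_parsable(s):
    """True for plain decimal numbers like '123', '-1.5', '.2'; no hex/scientific."""
    return bool(s) and _PARSABLE.match(s) is not None
```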
simplifyDurationFactoryArg
std::string simplifyDurationFactoryArg(const MatchFinder::MatchResult &Result, const Expr &Node) { // Check for an explicit cast to `float` or `double`. if (std::optional<std::string> MaybeArg = stripFloatCast(Result, Node)) return *MaybeArg; // Check for floats without fractional components. if (std::optional<std::string> MaybeArg = stripFloatLiteralFraction(Result, Node)) return *MaybeArg; // We couldn't simplify any further, so return the argument text. return tooling::fixit::getText(Node, *Result.Context).str(); }
Returns a simplified string for the argument of a duration factory call: strips an explicit cast to `float` or `double`, strips the fraction from float literals without a fractional component, and otherwise returns the argument text unchanged.
cpp
clang-tools-extra/clang-tidy/abseil/DurationRewriter.cpp
169
[ "Result", "Node" ]
true
3
7.2
llvm/llvm-project
36,021
doxygen
false
get_error_info
def get_error_info(self, aws_account_id: str | None, data_set_id: str, ingestion_id: str) -> dict | None: """ Get info about the error if any. :param aws_account_id: An AWS Account ID, if set to ``None`` then use associated AWS Account ID. :param data_set_id: QuickSight Data Set ID :param ingestion_id: QuickSight Ingestion ID :return: Error info dict containing the error type (key 'Type') and message (key 'Message') if available. Else, returns None. """ aws_account_id = aws_account_id or self.account_id describe_ingestion_response = self.conn.describe_ingestion( AwsAccountId=aws_account_id, DataSetId=data_set_id, IngestionId=ingestion_id ) # using .get() to get None if the key is not present, instead of an exception. return describe_ingestion_response["Ingestion"].get("ErrorInfo")
Get info about the error if any. :param aws_account_id: An AWS Account ID, if set to ``None`` then use associated AWS Account ID. :param data_set_id: QuickSight Data Set ID :param ingestion_id: QuickSight Ingestion ID :return: Error info dict containing the error type (key 'Type') and message (key 'Message') if available. Else, returns None.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/quicksight.py
118
[ "self", "aws_account_id", "data_set_id", "ingestion_id" ]
dict | None
true
2
8.08
apache/airflow
43,597
sphinx
false
describe_nodegroup
def describe_nodegroup(self, clusterName: str, nodegroupName: str, verbose: bool = False) -> dict: """ Return descriptive information about an Amazon EKS managed node group. .. seealso:: - :external+boto3:py:meth:`EKS.Client.describe_nodegroup` :param clusterName: The name of the Amazon EKS Cluster associated with the nodegroup. :param nodegroupName: The name of the nodegroup to describe. :param verbose: Provides additional logging if set to True. Defaults to False. :return: Returns descriptive information about a specific EKS Nodegroup. """ eks_client = self.conn response = eks_client.describe_nodegroup(clusterName=clusterName, nodegroupName=nodegroupName) self.log.info( "Retrieved details for Amazon EKS managed node group named %s in Amazon EKS cluster %s.", response.get("nodegroup").get("nodegroupName"), response.get("nodegroup").get("clusterName"), ) if verbose: nodegroup_data = response.get("nodegroup") self.log.info( "Amazon EKS managed node group details: %s", json.dumps(nodegroup_data, default=repr), ) return response
Return descriptive information about an Amazon EKS managed node group. .. seealso:: - :external+boto3:py:meth:`EKS.Client.describe_nodegroup` :param clusterName: The name of the Amazon EKS Cluster associated with the nodegroup. :param nodegroupName: The name of the nodegroup to describe. :param verbose: Provides additional logging if set to True. Defaults to False. :return: Returns descriptive information about a specific EKS Nodegroup.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/eks.py
334
[ "self", "clusterName", "nodegroupName", "verbose" ]
dict
true
2
7.44
apache/airflow
43,597
sphinx
false
moveProfileSpecific
private void moveProfileSpecific(Map<ImportPhase, List<ConfigDataEnvironmentContributor>> children) { List<ConfigDataEnvironmentContributor> before = children.get(ImportPhase.BEFORE_PROFILE_ACTIVATION); if (!hasAnyProfileSpecificChildren(before)) { return; } List<ConfigDataEnvironmentContributor> updatedBefore = new ArrayList<>(before.size()); List<ConfigDataEnvironmentContributor> updatedAfter = new ArrayList<>(); for (ConfigDataEnvironmentContributor contributor : before) { updatedBefore.add(moveProfileSpecificChildren(contributor, updatedAfter)); } updatedAfter.addAll(children.getOrDefault(ImportPhase.AFTER_PROFILE_ACTIVATION, Collections.emptyList())); children.put(ImportPhase.BEFORE_PROFILE_ACTIVATION, updatedBefore); children.put(ImportPhase.AFTER_PROFILE_ACTIVATION, updatedAfter); }
Moves any profile-specific children from the {@code BEFORE_PROFILE_ACTIVATION} phase to the {@code AFTER_PROFILE_ACTIVATION} phase so that they are processed once profiles have been activated. @param children the children for each {@link ImportPhase}
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigDataEnvironmentContributor.java
283
[ "children" ]
void
true
2
7.76
spring-projects/spring-boot
79,428
javadoc
false
estimator_html_repr
def estimator_html_repr(estimator): """Build a HTML representation of an estimator. Read more in the :ref:`User Guide <visualizing_composite_estimators>`. Parameters ---------- estimator : estimator object The estimator to visualize. Returns ------- html: str HTML representation of estimator. Examples -------- >>> from sklearn.utils._repr_html.estimator import estimator_html_repr >>> from sklearn.linear_model import LogisticRegression >>> estimator_html_repr(LogisticRegression()) '<style>#sk-container-id...' """ from sklearn.exceptions import NotFittedError from sklearn.utils.validation import check_is_fitted if not hasattr(estimator, "fit"): status_label = "<span>Not fitted</span>" is_fitted_css_class = "" else: try: check_is_fitted(estimator) status_label = "<span>Fitted</span>" is_fitted_css_class = "fitted" except NotFittedError: status_label = "<span>Not fitted</span>" is_fitted_css_class = "" is_fitted_icon = ( f'<span class="sk-estimator-doc-link {is_fitted_css_class}">' f"i{status_label}</span>" ) with closing(StringIO()) as out: container_id = _CONTAINER_ID_COUNTER.get_id() style_template = Template(_CSS_STYLE) style_with_id = style_template.substitute(id=container_id) estimator_str = str(estimator) # The fallback message is shown by default and loading the CSS sets # div.sk-text-repr-fallback to display: none to hide the fallback message. # # If the notebook is trusted, the CSS is loaded which hides the fallback # message. If the notebook is not trusted, then the CSS is not loaded and the # fallback message is shown by default. # # The reverse logic applies to HTML repr div.sk-container. # div.sk-container is hidden by default and the loading the CSS displays it. fallback_msg = ( "In a Jupyter environment, please rerun this cell to show the HTML" " representation or trust the notebook. <br />On GitHub, the" " HTML representation is unable to render, please try loading this page" " with nbviewer.org." ) html_template = ( f"<style>{style_with_id}</style>" f"<body>" f'<div id="{container_id}" class="sk-top-container">' '<div class="sk-text-repr-fallback">' f"<pre>{html.escape(estimator_str)}</pre><b>{fallback_msg}</b>" "</div>" '<div class="sk-container" hidden>' ) out.write(html_template) _write_estimator_html( out, estimator, estimator.__class__.__name__, estimator_str, first_call=True, is_fitted_css_class=is_fitted_css_class, is_fitted_icon=is_fitted_icon, ) with open(str(Path(__file__).parent / "estimator.js"), "r") as f: script = f.read() html_end = ( f"</div></div><script>{script}" f"\nforceTheme('{container_id}');</script></body>" ) out.write(html_end) html_output = out.getvalue() return html_output
Build a HTML representation of an estimator. Read more in the :ref:`User Guide <visualizing_composite_estimators>`. Parameters ---------- estimator : estimator object The estimator to visualize. Returns ------- html: str HTML representation of estimator. Examples -------- >>> from sklearn.utils._repr_html.estimator import estimator_html_repr >>> from sklearn.linear_model import LogisticRegression >>> estimator_html_repr(LogisticRegression()) '<style>#sk-container-id...'
python
sklearn/utils/_repr_html/estimator.py
407
[ "estimator" ]
false
3
6.96
scikit-learn/scikit-learn
64,340
numpy
false
doStart
private void doStart(Map<String, ? extends Lifecycle> lifecycleBeans, String beanName, boolean autoStartupOnly, @Nullable List<CompletableFuture<?>> futures) { Lifecycle bean = lifecycleBeans.remove(beanName); if (bean != null && bean != this) { String[] dependenciesForBean = getBeanFactory().getDependenciesForBean(beanName); for (String dependency : dependenciesForBean) { doStart(lifecycleBeans, dependency, autoStartupOnly, futures); } if (!bean.isRunning() && (!autoStartupOnly || toBeStarted(beanName, bean))) { if (futures != null) { futures.add(CompletableFuture.runAsync(() -> doStart(beanName, bean), getBootstrapExecutor())); } else { doStart(beanName, bean); } } } }
Start the specified bean as part of the given set of Lifecycle beans, making sure that any beans that it depends on are started first. @param lifecycleBeans a Map with bean name as key and Lifecycle instance as value @param beanName the name of the bean to start @param autoStartupOnly whether to only consider beans that are marked for automatic startup @param futures the list collecting futures for beans started asynchronously, or {@code null} for synchronous startup
java
spring-context/src/main/java/org/springframework/context/support/DefaultLifecycleProcessor.java
395
[ "lifecycleBeans", "beanName", "autoStartupOnly", "futures" ]
void
true
7
6.72
spring-projects/spring-framework
59,386
javadoc
false
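The dependency-first recursion can be sketched in Python (synchronous branch only; the futures/async path is omitted):

```python
def start_order(dependencies, bean):
    """Return the order in which beans start, dependencies first.

    `dependencies` maps each bean name to the names it depends on.
    Removing a bean from `remaining` before recursing breaks dependency
    cycles, just as doStart removes the bean from lifecycleBeans up front.
    """
    remaining = set(dependencies)
    order = []

    def visit(name):
        if name not in remaining:
            return  # already started, or not a managed lifecycle bean
        remaining.discard(name)
        for dep in dependencies.get(name, ()):
            visit(dep)
        order.append(name)  # all dependencies started; start this bean

    visit(bean)
    return order
```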
reverse
public static void reverse(final float[] array) { if (array != null) { reverse(array, 0, array.length); } }
Reverses the order of the given array. <p> This method does nothing for a {@code null} input array. </p> @param array the array to reverse, may be {@code null}.
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
6,514
[ "array" ]
void
true
2
7.04
apache/commons-lang
2,896
javadoc
false
end
def end(self, heartbeat_interval=10): """ End execution. Poll until all outstanding tasks are marked as completed. This is a blocking call and async Lambda tasks can not be cancelled, so this will wait until all tasks are either completed or the timeout is reached. :param heartbeat_interval: The interval in seconds to wait between checks for task completion. """ self.log.info("Received signal to end, waiting for outstanding tasks to finish.") time_to_wait = int(conf.get(CONFIG_GROUP_NAME, AllLambdaConfigKeys.END_WAIT_TIMEOUT)) start_time = timezone.utcnow() while True: if time_to_wait: current_time = timezone.utcnow() elapsed_time = (current_time - start_time).total_seconds() if elapsed_time > time_to_wait: self.log.warning( "Timed out waiting for tasks to finish. Some tasks may not be handled gracefully" " as the executor is force ending due to timeout." ) break self.sync() if not self.running_tasks: self.log.info("All tasks completed; executor ending.") break self.log.info("Waiting for %d task(s) to complete.", len(self.running_tasks)) time.sleep(heartbeat_interval)
End execution. Poll until all outstanding tasks are marked as completed. This is a blocking call and async Lambda tasks can not be cancelled, so this will wait until all tasks are either completed or the timeout is reached. :param heartbeat_interval: The interval in seconds to wait between checks for task completion.
python
providers/amazon/src/airflow/providers/amazon/aws/executors/aws_lambda/lambda_executor.py
492
[ "self", "heartbeat_interval" ]
false
5
6.24
apache/airflow
43,597
sphinx
false
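The bounded poll-and-sleep shutdown loop can be sketched generically (the names here are illustrative, not the executor's API):

```python
import time

def wait_for_tasks(outstanding, heartbeat_interval=1.0, timeout=None):
    """Poll `outstanding()` until it reports zero tasks or `timeout` seconds pass.

    Returns True when all tasks finished, False when the wait timed out.
    A timeout of None (or 0) waits forever, mirroring the config-driven wait above.
    """
    start = time.monotonic()
    while True:
        if timeout and time.monotonic() - start > timeout:
            return False  # force-end: some tasks may not be handled gracefully
        if outstanding() == 0:
            return True
        time.sleep(heartbeat_interval)
```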
currentNode
function currentNode(parsingContext: ParsingContext, pos?: number): Node | undefined { // If we don't have a cursor or the parsing context isn't reusable, there's nothing to reuse. // // If there is an outstanding parse error that we've encountered, but not attached to // some node, then we cannot get a node from the old source tree. This is because we // want to mark the next node we encounter as being unusable. // // Note: This may be too conservative. Perhaps we could reuse the node and set the bit // on it (or its leftmost child) as having the error. For now though, being conservative // is nice and likely won't ever affect perf. if (!syntaxCursor || !isReusableParsingContext(parsingContext) || parseErrorBeforeNextFinishedNode) { return undefined; } const node = syntaxCursor.currentNode(pos ?? scanner.getTokenFullStart()); // Can't reuse a missing node. // Can't reuse a node that intersected the change range. // Can't reuse a node that contains a parse error. This is necessary so that we // produce the same set of errors again. if (nodeIsMissing(node) || intersectsIncrementalChange(node) || containsParseError(node)) { return undefined; } // We can only reuse a node if it was parsed under the same strict mode that we're // currently in. i.e. if we originally parsed a node in non-strict mode, but then // the user added 'use strict' at the top of the file, then we can't use that node // again as the presence of strict mode may cause us to parse the tokens in the file // differently. // // Note: we *can* reuse tokens when the strict mode changes. That's because tokens // are unaffected by strict mode. It's just the parser will decide what to do with it // differently depending on what mode it is in. // // This also applies to all our other context flags as well. const nodeContextFlags = node.flags & NodeFlags.ContextFlags; if (nodeContextFlags !== contextFlags) { return undefined; } // Ok, we have a node that looks like it could be reused. Now verify that it is valid // in the current list parsing context that we're currently at. if (!canReuseNode(node, parsingContext)) { return undefined; } if (canHaveJSDoc(node) && node.jsDoc?.jsDocCache) { // jsDocCache may include tags from parent nodes, which might have been modified. node.jsDoc.jsDocCache = undefined; } return node; }
Returns the node from the old syntax tree at the given position if it can be reused for incremental parsing, or undefined otherwise. A node is not reusable if there is no syntax cursor, the parsing context is not reusable, an unattached parse error is outstanding, the node is missing, intersects the change range, contains a parse error, was parsed under different context flags (e.g. strict mode), or is invalid in the current list parsing context. @param parsingContext - The list parsing context we are currently in. @param pos - Optional position to look up; defaults to the scanner's token full start.
typescript
src/compiler/parser.ts
3,125
[ "parsingContext", "pos?" ]
true
11
6.88
microsoft/TypeScript
107,154
jsdoc
false
getRunListeners
private SpringApplicationRunListeners getRunListeners(String[] args) { ArgumentResolver argumentResolver = ArgumentResolver.of(SpringApplication.class, this); argumentResolver = argumentResolver.and(String[].class, args); List<SpringApplicationRunListener> listeners = getSpringFactoriesInstances(SpringApplicationRunListener.class, argumentResolver); SpringApplicationHook hook = applicationHook.get(); SpringApplicationRunListener hookListener = (hook != null) ? hook.getRunListener(this) : null; if (hookListener != null) { listeners = new ArrayList<>(listeners); listeners.add(hookListener); } return new SpringApplicationRunListeners(logger, listeners, this.applicationStartup); }
Run the Spring application, creating and refreshing a new {@link ApplicationContext}. @param args the application arguments (usually passed from a Java main method) @return a running {@link ApplicationContext}
java
core/spring-boot/src/main/java/org/springframework/boot/SpringApplication.java
453
[ "args" ]
SpringApplicationRunListeners
true
3
7.28
spring-projects/spring-boot
79,428
javadoc
false
detectAndParse
public static Period detectAndParse(String value, @Nullable ChronoUnit unit) { return detect(value).parse(value, unit); }
Detect the style then parse the value to return a period. @param value the value to parse @param unit the period unit to use if the value doesn't specify one ({@code null} will default to ms) @return the parsed period @throws IllegalArgumentException if the value is not a known style or cannot be parsed
java
core/spring-boot/src/main/java/org/springframework/boot/convert/PeriodStyle.java
198
[ "value", "unit" ]
Period
true
1
6.48
spring-projects/spring-boot
79,428
javadoc
false
getNextRequestId
function getNextRequestId() { if (requestId === NumberMAX_SAFE_INTEGER) { requestId = 0; } return `node-network-event-${++requestId}`; }
Return a monotonically increasing time in seconds since an arbitrary point in the past. @returns {number}
javascript
lib/internal/inspector/network.js
47
[]
false
2
7.12
nodejs/node
114,839
jsdoc
false
bindJSDocTypeAlias
function bindJSDocTypeAlias(node: JSDocTypedefTag | JSDocCallbackTag | JSDocEnumTag) { bind(node.tagName); if (node.kind !== SyntaxKind.JSDocEnumTag && node.fullName) { // don't bind the type name yet; that's delayed until delayedBindJSDocTypedefTag setParent(node.fullName, node); setParentRecursive(node.fullName, /*incremental*/ false); } if (typeof node.comment !== "string") { bindEach(node.comment); } }
Declares a Symbol for the node and adds it to symbols. Reports errors for conflicting identifier names. @param symbolTable - The symbol table which node will be added to. @param parent - node's parent declaration. @param node - The declaration to be added to the symbol table @param includes - The SymbolFlags that node has in addition to its declaration type (eg: export, ambient, etc.) @param excludes - The flags which node cannot be declared alongside in a symbol table. Used to report forbidden declarations.
typescript
src/compiler/binder.ts
2,113
[ "node" ]
false
4
6.08
microsoft/TypeScript
107,154
jsdoc
false
unmute
private void unmute(KafkaChannel channel) { // Remove the channel from explicitlyMutedChannels only if the channel has been actually unmuted. if (channel.maybeUnmute()) { explicitlyMutedChannels.remove(channel); if (channel.hasBytesBuffered()) { keysWithBufferedRead.add(channel.selectionKey()); madeReadProgressLastPoll = true; } } }
handle any ready I/O on a set of selection keys @param selectionKeys set of keys to handle @param isImmediatelyConnected true if running over a set of keys for just-connected sockets @param currentTimeNanos time at which set of keys was determined
java
clients/src/main/java/org/apache/kafka/common/network/Selector.java
760
[ "channel" ]
void
true
3
6.08
apache/kafka
31,560
javadoc
false
write
<N, V> void write(@Nullable N name, @Nullable V value) { if (name != null) { writePair(name, value); } else { write(value); } }
Write a name value pair, or just a value if {@code name} is {@code null}. @param <N> the name type in the pair @param <V> the value type in the pair @param name the name of the pair or {@code null} if only the value should be written @param value the value
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonValueWriter.java
99
[ "name", "value" ]
void
true
2
6.88
spring-projects/spring-boot
79,428
javadoc
false
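The name/value dispatch in `write` above can be sketched in Python (hypothetical helper shape, not Spring Boot's actual API):

```python
# Sketch of JsonValueWriter.write: emit a name/value pair when a name is
# given, otherwise emit the bare value. `out` collects the written tokens.
def write(name, value, out: list) -> None:
    if name is not None:
        out.append(f'"{name}":{value!r}')  # writePair branch
    else:
        out.append(repr(value))            # value-only branch
```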
dotAll
public static Pattern dotAll(final String regex) { return Pattern.compile(regex, Pattern.DOTALL); }
Compiles the given regular expression into a pattern with the {@link Pattern#DOTALL} flag. @param regex The expression to be compiled. @return the given regular expression compiled into a pattern with the {@link Pattern#DOTALL} flag. @since 3.13.0
java
src/main/java/org/apache/commons/lang3/RegExUtils.java
43
[ "regex" ]
Pattern
true
1
6.96
apache/commons-lang
2,896
javadoc
false
min
public static long min(long a, final long b, final long c) { if (b < a) { a = b; } if (c < a) { a = c; } return a; }
Gets the minimum of three {@code long} values. @param a value 1. @param b value 2. @param c value 3. @return the smallest of the values.
java
src/main/java/org/apache/commons/lang3/math/NumberUtils.java
1,291
[ "a", "b", "c" ]
true
3
8.24
apache/commons-lang
2,896
javadoc
false
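The branch-based minimum in `min` above translates directly to Python (a hypothetical mirror of `NumberUtils.min`):

```python
def min3(a: int, b: int, c: int) -> int:
    # Mirror of NumberUtils.min: two comparisons select the smallest value.
    if b < a:
        a = b
    if c < a:
        a = c
    return a
```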
isMaskedArray
def isMaskedArray(x): """ Test whether input is an instance of MaskedArray. This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters ---------- x : object Object to test. Returns ------- result : bool True if `x` is a MaskedArray. See Also -------- isMA : Alias to isMaskedArray. isarray : Alias to isMaskedArray. Examples -------- >>> import numpy as np >>> import numpy.ma as ma >>> a = np.eye(3, 3) >>> a array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> m = ma.masked_values(a, 0) >>> m masked_array( data=[[1.0, --, --], [--, 1.0, --], [--, --, 1.0]], mask=[[False, True, True], [ True, False, True], [ True, True, False]], fill_value=0.0) >>> ma.isMaskedArray(a) False >>> ma.isMaskedArray(m) True >>> ma.isMaskedArray([0, 1, 2]) False """ return isinstance(x, MaskedArray)
Test whether input is an instance of MaskedArray. This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input. Parameters ---------- x : object Object to test. Returns ------- result : bool True if `x` is a MaskedArray. See Also -------- isMA : Alias to isMaskedArray. isarray : Alias to isMaskedArray. Examples -------- >>> import numpy as np >>> import numpy.ma as ma >>> a = np.eye(3, 3) >>> a array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) >>> m = ma.masked_values(a, 0) >>> m masked_array( data=[[1.0, --, --], [--, 1.0, --], [--, --, 1.0]], mask=[[False, True, True], [ True, False, True], [ True, True, False]], fill_value=0.0) >>> ma.isMaskedArray(a) False >>> ma.isMaskedArray(m) True >>> ma.isMaskedArray([0, 1, 2]) False
python
numpy/ma/core.py
6,669
[ "x" ]
false
1
6.48
numpy/numpy
31,054
numpy
false
whenInstanceOf
default ValueProcessor<T> whenInstanceOf(Class<?> type) { Predicate<@Nullable T> isInstance = type::isInstance; return when(isInstance); }
Return a new processor from this one that only applies to member with values of the given type. @param type the type that must match @return a new {@link ValueProcessor} that only applies when value is the given type.
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
1,022
[ "type" ]
true
1
6.64
spring-projects/spring-boot
79,428
javadoc
false
slice
FileDataBlock slice(long offset, long size) { if (offset == 0 && size == this.size) { return this; } if (offset < 0) { throw new IllegalArgumentException("Offset must not be negative"); } if (size < 0 || offset + size > this.size) { throw new IllegalArgumentException("Size must not be negative and must be within bounds"); } debug.log("Slicing %s at %s with size %s", this.fileAccess, offset, size); return new FileDataBlock(this.fileAccess, this.offset + offset, size); }
Return a new {@link FileDataBlock} slice providing access to a subset of the data. The caller is responsible for calling {@link #open()} and {@link #close()} on the returned block. @param offset the start offset for the slice relative to this block @param size the size of the new slice @return a new {@link FileDataBlock} instance
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/FileDataBlock.java
140
[ "offset", "size" ]
FileDataBlock
true
6
7.92
spring-projects/spring-boot
79,428
javadoc
false
hasReadyNodes
public boolean hasReadyNodes(long now) { lock.lock(); try { return client.hasReadyNodes(now); } finally { lock.unlock(); } }
Send a new request. Note that the request is not actually transmitted on the network until one of the {@link #poll(Timer)} variants is invoked. At this point the request will either be transmitted successfully or will fail. Use the returned future to obtain the result of the send. Note that there is no need to check for disconnects explicitly on the {@link ClientResponse} object; instead, the future will be failed with a {@link DisconnectException}. @param node The destination of the request @param requestBuilder A builder for the request payload @param requestTimeoutMs Maximum time in milliseconds to await a response before disconnecting the socket and cancelling the request. The request may be cancelled sooner if the socket disconnects for any reason. @return A future which indicates the result of the send.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java
149
[ "now" ]
true
1
6.72
apache/kafka
31,560
javadoc
false
load
@SuppressWarnings("resource") private static SecurityInfo load(ZipContent content) throws IOException { int size = content.size(); boolean hasSecurityInfo = false; Certificate[][] entryCertificates = new Certificate[size][]; CodeSigner[][] entryCodeSigners = new CodeSigner[size][]; try (JarEntriesStream entries = new JarEntriesStream(content.openRawZipData().asInputStream())) { JarEntry entry = entries.getNextEntry(); while (entry != null) { ZipContent.Entry relatedEntry = content.getEntry(entry.getName()); if (relatedEntry != null && entries.matches(relatedEntry.isDirectory(), relatedEntry.getUncompressedSize(), relatedEntry.getCompressionMethod(), () -> relatedEntry.openContent().asInputStream())) { Certificate[] certificates = entry.getCertificates(); CodeSigner[] codeSigners = entry.getCodeSigners(); if (certificates != null || codeSigners != null) { hasSecurityInfo = true; entryCertificates[relatedEntry.getLookupIndex()] = certificates; entryCodeSigners[relatedEntry.getLookupIndex()] = codeSigners; } } entry = entries.getNextEntry(); } } return (!hasSecurityInfo) ? NONE : new SecurityInfo(entryCertificates, entryCodeSigners); }
Load security info from the jar file. We need to use {@link JarInputStream} to obtain the security info since we don't have an actual real file to read. This isn't that fast, but hopefully doesn't happen too often and the result is cached. @param content the zip content @return the security info @throws IOException on I/O error
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/jar/SecurityInfo.java
84
[ "content" ]
SecurityInfo
true
7
8.08
spring-projects/spring-boot
79,428
javadoc
false
get
@SuppressWarnings("unchecked") @Override public T get(RegisteredBean registeredBean) { Assert.notNull(registeredBean, "'registeredBean' must not be null"); if (this.generatorWithoutArguments != null) { Executable executable = getFactoryMethodForGenerator(); return invokeBeanSupplier(executable, () -> this.generatorWithoutArguments.apply(registeredBean)); } else if (this.generatorWithArguments != null) { Executable executable = getFactoryMethodForGenerator(); AutowiredArguments arguments = resolveArguments(registeredBean, executable != null ? executable : this.lookup.get(registeredBean)); return invokeBeanSupplier(executable, () -> this.generatorWithArguments.apply(registeredBean, arguments)); } else { Executable executable = this.lookup.get(registeredBean); @Nullable Object[] arguments = resolveArguments(registeredBean, executable).toArray(); return invokeBeanSupplier(executable, () -> (T) instantiate(registeredBean, executable, arguments)); } }
Return a new {@link BeanInstanceSupplier} instance that uses direct bean name injection shortcuts for specific parameters. @param beanNames the bean names to use as shortcut (aligned with the constructor or factory method parameters) @return a new {@link BeanInstanceSupplier} instance that uses the given shortcut bean names @since 6.2
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/BeanInstanceSupplier.java
188
[ "registeredBean" ]
T
true
4
7.28
spring-projects/spring-framework
59,386
javadoc
false
rolloverDataStream
private void rolloverDataStream( ProjectId projectId, String writeIndexName, RolloverRequest rolloverRequest, ActionListener<Void> listener ) { // "saving" the rollover target name here so we don't capture the entire request ResolvedExpression resolvedRolloverTarget = SelectorResolver.parseExpression( rolloverRequest.getRolloverTarget(), rolloverRequest.indicesOptions() ); logger.trace("Data stream lifecycle issues rollover request for data stream [{}]", rolloverRequest.getRolloverTarget()); client.projectClient(projectId).admin().indices().rolloverIndex(rolloverRequest, new ActionListener<>() { @Override public void onResponse(RolloverResponse rolloverResponse) { // Log only when the conditions were met and the index was rolled over. if (rolloverResponse.isRolledOver()) { List<String> metConditions = rolloverResponse.getConditionStatus() .entrySet() .stream() .filter(Map.Entry::getValue) .map(Map.Entry::getKey) .toList(); logger.info( "Data stream lifecycle successfully rolled over datastream [{}] due to the following met rollover " + "conditions {}. The new index is [{}]", rolloverRequest.getRolloverTarget(), metConditions, rolloverResponse.getNewIndex() ); } listener.onResponse(null); } @Override public void onFailure(Exception e) { ProjectMetadata latestProject = clusterService.state().metadata().projects().get(projectId); DataStream dataStream = latestProject == null ? 
null : latestProject.dataStreams().get(resolvedRolloverTarget.resource()); boolean targetsFailureStore = IndexComponentSelector.FAILURES == resolvedRolloverTarget.selector(); if (dataStream == null || Objects.equals(getWriteIndexName(dataStream, targetsFailureStore), writeIndexName) == false) { // the data stream has another write index so no point in recording an error for the previous write index we were // attempting to roll over // if there are persistent issues with rolling over this data stream, the next data stream lifecycle run will attempt to // rollover the _current_ write index and the error problem should surface then listener.onResponse(null); } else { // the data stream has NOT been rolled over since we issued our rollover request, so let's record the // error against the data stream's write index. listener.onFailure(e); } } }); }
This method sends requests to delete any indices in the datastream that exceed its retention policy. It returns the set of indices it has sent delete requests for. @param project The project metadata from which to get index metadata @param dataStream The data stream @param indicesToExcludeForRemainingRun Indices to exclude from retention even if it would be time for them to be deleted @return The set of indices that delete requests have been sent for
java
modules/data-streams/src/main/java/org/elasticsearch/datastreams/lifecycle/DataStreamLifecycleService.java
1,081
[ "projectId", "writeIndexName", "rolloverRequest", "listener" ]
void
true
5
7.84
elastic/elasticsearch
75,680
javadoc
false
constrainToRange
public static char constrainToRange(char value, char min, char max) { checkArgument(min <= max, "min (%s) must be less than or equal to max (%s)", min, max); return value < min ? min : value < max ? value : max; }
Returns the value nearest to {@code value} which is within the closed range {@code [min..max]}. <p>If {@code value} is within the range {@code [min..max]}, {@code value} is returned unchanged. If {@code value} is less than {@code min}, {@code min} is returned, and if {@code value} is greater than {@code max}, {@code max} is returned. @param value the {@code char} value to constrain @param min the lower bound (inclusive) of the range to constrain {@code value} to @param max the upper bound (inclusive) of the range to constrain {@code value} to @throws IllegalArgumentException if {@code min > max} @since 21.0
java
android/guava/src/com/google/common/primitives/Chars.java
266
[ "value", "min", "max" ]
true
3
6.8
google/guava
51,352
javadoc
false
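The clamp in `constrainToRange` can be sketched in Python (hypothetical mirror; the original is Guava's `char` overload):

```python
def constrain_to_range(value: int, lo: int, hi: int) -> int:
    # Clamp value to the closed range [lo, hi], validating the bounds first.
    if lo > hi:
        raise ValueError(f"min ({lo}) must be less than or equal to max ({hi})")
    return lo if value < lo else (value if value < hi else hi)
```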
wait_for_job
def wait_for_job( self, job_id: str, delay: int | float | None = None, get_batch_log_fetcher: Callable[[str], AwsTaskLogFetcher | None] | None = None, ) -> None: """ Wait for Batch job to complete. :param job_id: a Batch job ID :param delay: a delay before polling for job status :param get_batch_log_fetcher : a method that returns batch_log_fetcher :raises: AirflowException """ self.delay(delay) self.poll_for_job_running(job_id, delay) batch_log_fetcher = None try: if get_batch_log_fetcher: batch_log_fetcher = get_batch_log_fetcher(job_id) if batch_log_fetcher: batch_log_fetcher.start() self.poll_for_job_complete(job_id, delay) finally: if batch_log_fetcher: batch_log_fetcher.stop() batch_log_fetcher.join() self.log.info("AWS Batch job (%s) has completed", job_id)
Wait for Batch job to complete. :param job_id: a Batch job ID :param delay: a delay before polling for job status :param get_batch_log_fetcher : a method that returns batch_log_fetcher :raises: AirflowException
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/batch_client.py
281
[ "self", "job_id", "delay", "get_batch_log_fetcher" ]
None
true
4
6.72
apache/airflow
43,597
sphinx
false
vecdot
def vecdot(x1, x2, /, *, axis=-1): """ Computes the vector dot product. This function is restricted to arguments compatible with the Array API, contrary to :func:`numpy.vecdot`. Let :math:`\\mathbf{a}` be a vector in ``x1`` and :math:`\\mathbf{b}` be a corresponding vector in ``x2``. The dot product is defined as: .. math:: \\mathbf{a} \\cdot \\mathbf{b} = \\sum_{i=0}^{n-1} \\overline{a_i}b_i over the dimension specified by ``axis`` and where :math:`\\overline{a_i}` denotes the complex conjugate if :math:`a_i` is complex and the identity otherwise. Parameters ---------- x1 : array_like First input array. x2 : array_like Second input array. axis : int, optional Axis over which to compute the dot product. Default: ``-1``. Returns ------- output : ndarray The vector dot product of the input. See Also -------- numpy.vecdot Examples -------- Get the projected size along a given normal for an array of vectors. >>> v = np.array([[0., 5., 0.], [0., 0., 10.], [0., 6., 8.]]) >>> n = np.array([0., 0.6, 0.8]) >>> np.linalg.vecdot(v, n) array([ 3., 8., 10.]) """ return _core_vecdot(x1, x2, axis=axis)
Computes the vector dot product. This function is restricted to arguments compatible with the Array API, contrary to :func:`numpy.vecdot`. Let :math:`\\mathbf{a}` be a vector in ``x1`` and :math:`\\mathbf{b}` be a corresponding vector in ``x2``. The dot product is defined as: .. math:: \\mathbf{a} \\cdot \\mathbf{b} = \\sum_{i=0}^{n-1} \\overline{a_i}b_i over the dimension specified by ``axis`` and where :math:`\\overline{a_i}` denotes the complex conjugate if :math:`a_i` is complex and the identity otherwise. Parameters ---------- x1 : array_like First input array. x2 : array_like Second input array. axis : int, optional Axis over which to compute the dot product. Default: ``-1``. Returns ------- output : ndarray The vector dot product of the input. See Also -------- numpy.vecdot Examples -------- Get the projected size along a given normal for an array of vectors. >>> v = np.array([[0., 5., 0.], [0., 0., 10.], [0., 6., 8.]]) >>> n = np.array([0., 0.6, 0.8]) >>> np.linalg.vecdot(v, n) array([ 3., 8., 10.])
python
numpy/linalg/_linalg.py
3,605
[ "x1", "x2", "axis" ]
false
1
6.32
numpy/numpy
31,054
numpy
false
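The conjugating dot product that `vecdot` computes can be written in pure Python (a simplified sketch over 1-D sequences, not the broadcasting NumPy implementation):

```python
def vecdot(x1, x2):
    # Sum of conj(a_i) * b_i, matching the formula in the docstring above.
    return sum(complex(a).conjugate() * b for a, b in zip(x1, x2))
```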
evaluate
protected @Nullable Object evaluate(@Nullable Object value) { if (value instanceof String str) { return doEvaluate(str); } else if (value instanceof String[] values) { boolean actuallyResolved = false; @Nullable Object[] resolvedValues = new Object[values.length]; for (int i = 0; i < values.length; i++) { String originalValue = values[i]; Object resolvedValue = doEvaluate(originalValue); if (resolvedValue != originalValue) { actuallyResolved = true; } resolvedValues[i] = resolvedValue; } return (actuallyResolved ? resolvedValues : values); } else { return value; } }
Evaluate the given value as an expression, if necessary. @param value the original value (may be an expression) @return the resolved value if necessary, or the original value
java
spring-beans/src/main/java/org/springframework/beans/factory/support/BeanDefinitionValueResolver.java
284
[ "value" ]
Object
true
6
7.92
spring-projects/spring-framework
59,386
javadoc
false
textLength
@Override public int textLength() throws IOException { try { return parser.getTextLength(); } catch (IOException e) { throw handleParserException(e); } }
Handle parser exception depending on type. This converts known exceptions to XContentParseException and rethrows them.
java
libs/x-content/impl/src/main/java/org/elasticsearch/xcontent/provider/json/JsonXContentParser.java
227
[]
true
2
6.08
elastic/elasticsearch
75,680
javadoc
false
castArrayLikeObject
function castArrayLikeObject(value) { return isArrayLikeObject(value) ? value : []; }
Casts `value` to an empty array if it's not an array like object. @private @param {*} value The value to inspect. @returns {Array|Object} Returns the cast array-like object.
javascript
lodash.js
4,533
[ "value" ]
false
2
6.16
lodash/lodash
61,490
jsdoc
false
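In Python terms, the guard in `castArrayLikeObject` amounts to the following (a loose sketch; lodash's "array-like object" test is broader than this `isinstance` check):

```python
def cast_array_like(value):
    # Return the value unchanged if it is list-like, else an empty list.
    return value if isinstance(value, (list, tuple)) else []
```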
listClientMetricsResources
@Deprecated(since = "4.1", forRemoval = true) ListClientMetricsResourcesResult listClientMetricsResources(ListClientMetricsResourcesOptions options);
List the client metrics configuration resources available in the cluster. @param options The options to use when listing the client metrics resources. @return The ListClientMetricsResourcesResult. @deprecated Since 4.1. Use {@link #listConfigResources(Set, ListConfigResourcesOptions)} instead.
java
clients/src/main/java/org/apache/kafka/clients/admin/Admin.java
1,808
[ "options" ]
ListClientMetricsResourcesResult
true
1
6
apache/kafka
31,560
javadoc
false
indexOf
static int indexOf(final CharSequence cs, final int searchChar, int start) { if (cs instanceof String) { return ((String) cs).indexOf(searchChar, start); } final int sz = cs.length(); if (start < 0) { start = 0; } if (searchChar < Character.MIN_SUPPLEMENTARY_CODE_POINT) { for (int i = start; i < sz; i++) { if (cs.charAt(i) == searchChar) { return i; } } return NOT_FOUND; } //supplementary characters (LANG1300) if (searchChar <= Character.MAX_CODE_POINT) { final char[] chars = Character.toChars(searchChar); for (int i = start; i < sz - 1; i++) { final char high = cs.charAt(i); final char low = cs.charAt(i + 1); if (high == chars[0] && low == chars[1]) { return i; } } } return NOT_FOUND; }
Returns the index within {@code cs} of the first occurrence of the specified character, starting the search at the specified index. <p> If a character with value {@code searchChar} occurs in the character sequence represented by the {@code cs} object at an index no smaller than {@code start}, then the index of the first such occurrence is returned. For values of {@code searchChar} in the range from 0 to 0xFFFF (inclusive), this is the smallest value <em>k</em> such that: </p> <pre> (this.charAt(<em>k</em>) == searchChar) &amp;&amp; (<em>k</em> &gt;= start) </pre> <p> is true. For other values of {@code searchChar}, it is the smallest value <em>k</em> such that: </p> <pre> (this.codePointAt(<em>k</em>) == searchChar) &amp;&amp; (<em>k</em> &gt;= start) </pre> <p> is true. In either case, if no such character occurs in {@code cs} at or after position {@code start}, then {@code -1} is returned. </p> <p> There is no restriction on the value of {@code start}. If it is negative, it has the same effect as if it were zero: the entire {@link CharSequence} may be searched. If it is greater than the length of {@code cs}, it has the same effect as if it were equal to the length of {@code cs}: {@code -1} is returned. </p> <p> All indices are specified in {@code char} values (Unicode code units). </p> @param cs the {@link CharSequence} to be processed, not null. @param searchChar the char to be searched for. @param start the start index, negative starts at the string start. @return the index where the search char was found, -1 if not found. @since 3.6 updated to behave more like {@link String}.
java
src/main/java/org/apache/commons/lang3/CharSequenceUtils.java
110
[ "cs", "searchChar", "start" ]
true
10
8.24
apache/commons-lang
2,896
javadoc
false
sort
public static float[] sort(final float[] array) { if (array != null) { Arrays.sort(array); } return array; }
Sorts the given array into ascending order and returns it. @param array the array to sort (may be null). @return the given array. @see Arrays#sort(float[])
java
src/main/java/org/apache/commons/lang3/ArraySorter.java
79
[ "array" ]
true
2
8.24
apache/commons-lang
2,896
javadoc
false
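`ArraySorter.sort`'s null-tolerant sort-and-return contract looks like this in Python (hypothetical mirror):

```python
def sort_array(arr):
    # Sort in place and return the same object; None passes through untouched.
    if arr is not None:
        arr.sort()
    return arr
```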
of
public static <L, R> ImmutablePair<L, R> of(final Map.Entry<L, R> pair) { return pair != null ? new ImmutablePair<>(pair.getKey(), pair.getValue()) : nullPair(); }
Creates an immutable pair from a map entry. @param <L> the left element type. @param <R> the right element type. @param pair the existing map entry. @return an immutable formed from the map entry. @since 3.10
java
src/main/java/org/apache/commons/lang3/tuple/ImmutablePair.java
118
[ "pair" ]
true
2
8.16
apache/commons-lang
2,896
javadoc
false
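`ImmutablePair.of(Map.Entry)`'s null-safe conversion can be sketched in Python with a tuple standing in for the pair (hypothetical mirror; `(None, None)` plays the role of `nullPair()`):

```python
def pair_of(entry):
    # entry is a (key, value) tuple or None; None maps to the null pair.
    return (entry[0], entry[1]) if entry is not None else (None, None)
```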
version_prefix
def version_prefix(self: Self) -> bytes: """ Get the version prefix for the cache. Returns: bytes: The version prefix as bytes, derived from the cache version string. """ return sha256(str(OnDiskCache.version).encode()).digest()[:4]
Get the version prefix for the cache. Returns: bytes: The version prefix as bytes, derived from the cache version string.
python
torch/_inductor/cache.py
314
[ "self" ]
bytes
true
1
6.56
pytorch/pytorch
96,034
unknown
false
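The digest-prefix trick in `version_prefix` is easy to reproduce with the standard library (a standalone sketch of the same computation, taking the version string as a parameter):

```python
import hashlib

def version_prefix(version: str) -> bytes:
    # First four bytes of the SHA-256 digest of the version string.
    return hashlib.sha256(version.encode()).digest()[:4]
```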
toUtf16Escape
protected String toUtf16Escape(final int codePoint) { return "\\u" + hex(codePoint); }
Converts the given code point to a hexadecimal string of the form {@code "\\uXXXX"} @param codePoint a Unicode code point. @return the hexadecimal string for the given code point. @since 3.2
java
src/main/java/org/apache/commons/lang3/text/translate/UnicodeEscaper.java
110
[ "codePoint" ]
String
true
1
6.8
apache/commons-lang
2,896
javadoc
false
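A Python sketch of the escape format `toUtf16Escape` produces (hedged: padding to four uppercase hex digits is an assumption here; the exact width comes from commons-lang's `hex` helper):

```python
def to_utf16_escape(code_point: int) -> str:
    # Render a code point as a \uXXXX escape with uppercase hex digits,
    # zero-padded to at least four digits (assumed formatting).
    return "\\u" + format(code_point, "04X")
```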
append_fields
def append_fields(base, names, data, dtypes=None, fill_value=-1, usemask=True, asrecarray=False): """ Add new fields to an existing array. The names of the fields are given with the `names` arguments, the corresponding values with the `data` arguments. If a single field is appended, `names`, `data` and `dtypes` do not have to be lists but just values. Parameters ---------- base : array Input array to extend. names : string, sequence String or sequence of strings corresponding to the names of the new fields. data : array or sequence of arrays Array or sequence of arrays storing the fields to add to the base. dtypes : sequence of datatypes, optional Datatype or sequence of datatypes. If None, the datatypes are estimated from the `data`. fill_value : {float}, optional Filling value used to pad missing data on the shorter arrays. usemask : {False, True}, optional Whether to return a masked array or not. asrecarray : {False, True}, optional Whether to return a recarray (MaskedRecords) or not. """ # Check the names if isinstance(names, (tuple, list)): if len(names) != len(data): msg = "The number of arrays does not match the number of names" raise ValueError(msg) elif isinstance(names, str): names = [names, ] data = [data, ] # if dtypes is None: data = [np.array(a, copy=None, subok=True) for a in data] data = [a.view([(name, a.dtype)]) for (name, a) in zip(names, data)] else: if not isinstance(dtypes, (tuple, list)): dtypes = [dtypes, ] if len(data) != len(dtypes): if len(dtypes) == 1: dtypes = dtypes * len(data) else: msg = "The dtypes argument must be None, a dtype, or a list." 
raise ValueError(msg) data = [np.array(a, copy=None, subok=True, dtype=d).view([(n, d)]) for (a, n, d) in zip(data, names, dtypes)] # base = merge_arrays(base, usemask=usemask, fill_value=fill_value) if len(data) > 1: data = merge_arrays(data, flatten=True, usemask=usemask, fill_value=fill_value) else: data = data.pop() # output = ma.masked_all( max(len(base), len(data)), dtype=_get_fieldspec(base.dtype) + _get_fieldspec(data.dtype)) output = recursive_fill_fields(base, output) output = recursive_fill_fields(data, output) # return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
Add new fields to an existing array. The names of the fields are given with the `names` arguments, the corresponding values with the `data` arguments. If a single field is appended, `names`, `data` and `dtypes` do not have to be lists but just values. Parameters ---------- base : array Input array to extend. names : string, sequence String or sequence of strings corresponding to the names of the new fields. data : array or sequence of arrays Array or sequence of arrays storing the fields to add to the base. dtypes : sequence of datatypes, optional Datatype or sequence of datatypes. If None, the datatypes are estimated from the `data`. fill_value : {float}, optional Filling value used to pad missing data on the shorter arrays. usemask : {False, True}, optional Whether to return a masked array or not. asrecarray : {False, True}, optional Whether to return a recarray (MaskedRecords) or not.
python
numpy/lib/recfunctions.py
655
[ "base", "names", "data", "dtypes", "fill_value", "usemask", "asrecarray" ]
false
12
6.16
numpy/numpy
31,054
numpy
false
bulk_write_to_db
def bulk_write_to_db( cls, bundle_name: str, bundle_version: str | None, dags: Collection[DAG | LazyDeserializedDAG], parse_duration: float | None = None, session: Session = NEW_SESSION, ) -> None: """ Ensure the DagModel rows for the given dags are up-to-date in the dag table in the DB. :param dags: the DAG objects to save to the DB :return: None """ if not dags: return from airflow.dag_processing.collection import AssetModelOperation, DagModelOperation log.info("Sync %s DAGs", len(dags)) dag_op = DagModelOperation( bundle_name=bundle_name, bundle_version=bundle_version, dags={d.dag_id: LazyDeserializedDAG.from_dag(d) for d in dags}, ) orm_dags = dag_op.add_dags(session=session) dag_op.update_dags(orm_dags, parse_duration, session=session) asset_op = AssetModelOperation.collect(dag_op.dags) orm_assets = asset_op.sync_assets(session=session) orm_asset_aliases = asset_op.sync_asset_aliases(session=session) session.flush() # This populates id so we can create fks in later calls. orm_dags = dag_op.find_orm_dags(session=session) # Refetch so relationship is up to date. asset_op.add_dag_asset_references(orm_dags, orm_assets, session=session) asset_op.add_dag_asset_alias_references(orm_dags, orm_asset_aliases, session=session) asset_op.add_dag_asset_name_uri_references(session=session) asset_op.add_task_asset_references(orm_dags, orm_assets, session=session) asset_op.activate_assets_if_possible(orm_assets.values(), session=session) session.flush() # Activation is needed when we add trigger references. asset_op.add_asset_trigger_references(orm_assets, session=session) dag_op.update_dag_asset_expression(orm_dags=orm_dags, orm_assets=orm_assets) session.flush()
Ensure the DagModel rows for the given dags are up-to-date in the dag table in the DB. :param dags: the DAG objects to save to the DB :return: None
python
airflow-core/src/airflow/serialization/serialized_objects.py
2,767
[ "cls", "bundle_name", "bundle_version", "dags", "parse_duration", "session" ]
None
true
2
7.76
apache/airflow
43,597
sphinx
false
getPropertySourceProperty
protected final @Nullable Object getPropertySourceProperty(String name) { // Save calls to SystemEnvironmentPropertySource.resolvePropertyName(...) // since we've already done the mapping PropertySource<?> propertySource = getPropertySource(); return (!this.systemEnvironmentSource) ? propertySource.getProperty(name) : getSystemEnvironmentProperty(((SystemEnvironmentPropertySource) propertySource).getSource(), name); }
Create a new {@link SpringConfigurationPropertySource} implementation. @param propertySource the source property source @param systemEnvironmentSource if the source is from the system environment @param mappers the property mappers
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/SpringConfigurationPropertySource.java
106
[ "name" ]
Object
true
2
6.24
spring-projects/spring-boot
79,428
javadoc
false
clearTask
public void clearTask() { pendingTask.getAndUpdate(task -> { if (task == null) { return null; } else if (task instanceof ActiveFuture || task instanceof FetchAction || task instanceof ShareFetchAction) { return null; } return task; }); }
Clear the pending task if it is an active task ({@code ActiveFuture}, {@code FetchAction}, or {@code ShareFetchAction}); a pending {@code WakeupFuture} is left in place so that the next blocking operation still completes exceptionally with {@code WakeupException}.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/WakeupTrigger.java
134
[]
void
true
5
8.08
apache/kafka
31,560
javadoc
false
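The `clearTask` record above atomically clears only "active"-style pending tasks via `getAndUpdate`, leaving a pending wakeup in place. A minimal Python sketch of that compare-and-clear behaviour (the class and kind names here are illustrative, not the Kafka API):

```python
import threading


class WakeupTriggerSketch:
    """Toy model of the clear-task logic: holds one pending task and clears
    it only when it is an 'active'-style task."""

    ACTIVE_KINDS = {"active_future", "fetch_action", "share_fetch_action"}

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None  # e.g. ("fetch_action", payload) or ("wakeup", None)

    def set_task(self, kind, payload=None):
        with self._lock:
            self._pending = (kind, payload)

    def clear_task(self):
        # Clear only active-style tasks; a pending wakeup survives so the
        # next operation still observes it.
        with self._lock:
            if self._pending is not None and self._pending[0] in self.ACTIVE_KINDS:
                self._pending = None

    def pending_kind(self):
        with self._lock:
            return None if self._pending is None else self._pending[0]
```

A lock stands in for the Java `AtomicReference#getAndUpdate`; the invariant (only active tasks are cleared) is the same.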
download_patch
def download_patch(pr_number: int, repo_url: str, download_dir: str) -> str: """ Downloads the patch file for a given PR from the specified GitHub repository. Args: pr_number (int): The pull request number. repo_url (str): The URL of the repository where the PR is hosted. download_dir (str): The directory to store the downloaded patch. Returns: str: The path to the downloaded patch file. Exits: If the download fails, the script will exit. """ patch_url = f"{repo_url}/pull/{pr_number}.diff" patch_file = os.path.join(download_dir, f"pr-{pr_number}.patch") print(f"Downloading PR #{pr_number} patch from {patch_url}...") try: with ( urllib.request.urlopen(patch_url) as response, open(patch_file, "wb") as out_file, ): # pyrefly: ignore [bad-specialization] shutil.copyfileobj(response, out_file) if not os.path.isfile(patch_file): print(f"Failed to download patch for PR #{pr_number}") sys.exit(1) print(f"Patch downloaded to {patch_file}") return patch_file except urllib.error.HTTPError as e: print(f"HTTP Error: {e.code} when downloading patch for PR #{pr_number}") sys.exit(1) except Exception as e: print(f"An error occurred while downloading the patch: {e}") sys.exit(1)
Downloads the patch file for a given PR from the specified GitHub repository. Args: pr_number (int): The pull request number. repo_url (str): The URL of the repository where the PR is hosted. download_dir (str): The directory to store the downloaded patch. Returns: str: The path to the downloaded patch file. Exits: If the download fails, the script will exit.
python
tools/nightly_hotpatch.py
98
[ "pr_number", "repo_url", "download_dir" ]
str
true
2
8.24
pytorch/pytorch
96,034
google
false
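The `download_patch` record above builds the patch URL and local file path before fetching. The pure URL/path construction can be isolated and tested without network access (function name is illustrative; the format strings match the record):

```python
import os


def build_patch_paths(pr_number, repo_url, download_dir):
    """Build the GitHub diff URL and local patch path, mirroring the
    scheme used by download_patch."""
    patch_url = f"{repo_url}/pull/{pr_number}.diff"
    patch_file = os.path.join(download_dir, f"pr-{pr_number}.patch")
    return patch_url, patch_file
```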
getApplicationLog
protected Log getApplicationLog() { if (this.mainApplicationClass == null) { return logger; } return LogFactory.getLog(this.mainApplicationClass); }
Returns the {@link Log} for the application. By default will be deduced. @return the application log
java
core/spring-boot/src/main/java/org/springframework/boot/SpringApplication.java
671
[]
Log
true
2
8.24
spring-projects/spring-boot
79,428
javadoc
false
resolveId
protected String resolveId(Element element, AbstractBeanDefinition definition, ParserContext parserContext) throws BeanDefinitionStoreException { if (shouldGenerateId()) { return parserContext.getReaderContext().generateBeanName(definition); } else { String id = element.getAttribute(ID_ATTRIBUTE); if (!StringUtils.hasText(id) && shouldGenerateIdAsFallback()) { id = parserContext.getReaderContext().generateBeanName(definition); } return id; } }
Resolve the ID for the supplied {@link BeanDefinition}. <p>When using {@link #shouldGenerateId generation}, a name is generated automatically. Otherwise, the ID is extracted from the "id" attribute, potentially with a {@link #shouldGenerateIdAsFallback() fallback} to a generated id. @param element the element that the bean definition has been built from @param definition the bean definition to be registered @param parserContext the object encapsulating the current state of the parsing process; provides access to a {@link org.springframework.beans.factory.support.BeanDefinitionRegistry} @return the resolved id @throws BeanDefinitionStoreException if no unique name could be generated for the given bean definition
java
spring-beans/src/main/java/org/springframework/beans/factory/xml/AbstractBeanDefinitionParser.java
109
[ "element", "definition", "parserContext" ]
String
true
4
7.44
spring-projects/spring-framework
59,386
javadoc
false
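The `resolveId` record above resolves a bean id in three steps: always generate, use the explicit `id` attribute, or generate as a fallback when the attribute is blank. A sketch of that decision flow (the name generator here is a stand-in for `parserContext.getReaderContext().generateBeanName(...)`):

```python
import itertools

_counter = itertools.count(1)


def resolve_id(explicit_id, should_generate, generate_as_fallback=True):
    """Resolve an id: generate when asked, else use the explicit id,
    else fall back to generation when the explicit id is blank."""
    def generate_name():
        return f"bean#{next(_counter)}"

    if should_generate:
        return generate_name()
    if not (explicit_id and explicit_id.strip()) and generate_as_fallback:
        return generate_name()
    return explicit_id
```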
executeUserEntryPoint
function executeUserEntryPoint(main = process.argv[1]) { let useESMLoader; let resolvedMain; if (getOptionValue('--entry-url')) { useESMLoader = true; } else { resolvedMain = resolveMainPath(main); useESMLoader = shouldUseESMLoader(resolvedMain); } // Unless we know we should use the ESM loader to handle the entry point per the checks in `shouldUseESMLoader`, first // try to run the entry point via the CommonJS loader; and if that fails under certain conditions, retry as ESM. if (!useESMLoader) { const cjsLoader = require('internal/modules/cjs/loader'); const { wrapModuleLoad } = cjsLoader; wrapModuleLoad(main, null, true); } else { const mainPath = resolvedMain || main; const mainURL = getOptionValue('--entry-url') ? new URL(mainPath, getCWDURL()) : pathToFileURL(mainPath); runEntryPointWithESMLoader((cascadedLoader) => { // Note that if the graph contains unsettled TLA, this may never resolve // even after the event loop stops running. return cascadedLoader.import(mainURL, undefined, { __proto__: null }, undefined, true); }); } }
Parse the CLI main entry point string and run it. For backwards compatibility, we have to run a bunch of monkey-patchable code that belongs to the CJS loader (exposed by `require('module')`) even when the entry point is ESM. Because of backwards compatibility, this function is exposed publicly via `import { runMain } from 'node:module'`. Because of module detection, this function will attempt to run ambiguous (no explicit extension, no `package.json` type field) entry points as CommonJS first; under certain conditions, it will retry running as ESM. @param {string} main - First positional CLI argument, such as `'entry.js'` from `node entry.js`
javascript
lib/internal/modules/run_main.js
140
[]
false
7
6.08
nodejs/node
114,839
jsdoc
false
get_versions_from_toml
def get_versions_from_toml() -> dict[str, str]: """Min versions in pyproject.toml for pip install pandas[extra].""" install_map = _optional.INSTALL_MAPPING optional_dependencies = {} with open(SETUP_PATH, "rb") as pyproject_f: pyproject_toml = tomllib.load(pyproject_f) opt_deps = pyproject_toml["project"]["optional-dependencies"] dependencies = set(opt_deps["all"]) # remove pytest plugin dependencies pytest_plugins = {dep for dep in opt_deps["test"] if dep.startswith("pytest-")} dependencies = dependencies.difference(pytest_plugins) for dependency in dependencies: package, version = dependency.strip().split(">=") optional_dependencies[install_map.get(package, package).casefold()] = version for item in EXCLUDE_DEPS: optional_dependencies.pop(item, None) return optional_dependencies
Min versions in pyproject.toml for pip install pandas[extra].
python
scripts/validate_min_versions_in_sync.py
230
[]
dict[str, str]
true
3
6.72
pandas-dev/pandas
47,362
unknown
false
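The `get_versions_from_toml` record above splits each `name>=x.y` spec and remaps package names through an install map. The core parsing step can be sketched on its own (names are illustrative; the split/mapping logic follows the record):

```python
def min_versions(dependencies, install_map=None, exclude=()):
    """Parse 'name>=version' pins into {mapped_name: version}, dropping
    excluded packages, as get_versions_from_toml does."""
    install_map = install_map or {}
    out = {}
    for dep in dependencies:
        package, version = dep.strip().split(">=")
        out[install_map.get(package, package).casefold()] = version
    for item in exclude:
        out.pop(item, None)
    return out
```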
setSubject
public void setSubject(String subject) throws MessagingException { Assert.notNull(subject, "Subject must not be null"); if (getEncoding() != null) { this.mimeMessage.setSubject(subject, getEncoding()); } else { this.mimeMessage.setSubject(subject); } }
Set the subject of the message, using the correct encoding. @param subject the subject text @throws MessagingException in case of errors
java
spring-context-support/src/main/java/org/springframework/mail/javamail/MimeMessageHelper.java
771
[ "subject" ]
void
true
2
6.56
spring-projects/spring-framework
59,386
javadoc
false
evaluate
private @Nullable Object evaluate(@Nullable Object cacheHit, CacheOperationInvoker invoker, Method method, CacheOperationContexts contexts) { // Re-invocation in reactive pipeline after late cache hit determination? if (contexts.processed) { return cacheHit; } Object cacheValue; Object returnValue; if (cacheHit != null && !hasCachePut(contexts)) { // If there are no put requests, just use the cache hit cacheValue = unwrapCacheValue(cacheHit); returnValue = wrapCacheValue(method, cacheValue); } else { // Invoke the method if we don't have a cache hit returnValue = invokeOperation(invoker); cacheValue = unwrapReturnValue(returnValue); } // Collect puts from any @Cacheable miss, if no cached value is found List<CachePutRequest> cachePutRequests = new ArrayList<>(1); if (cacheHit == null) { collectPutRequests(contexts.get(CacheableOperation.class), cacheValue, cachePutRequests); } // Collect any explicit @CachePuts collectPutRequests(contexts.get(CachePutOperation.class), cacheValue, cachePutRequests); // Process any collected put requests, either from @CachePut or a @Cacheable miss for (CachePutRequest cachePutRequest : cachePutRequests) { Object returnOverride = cachePutRequest.apply(cacheValue); if (returnOverride != null) { returnValue = returnOverride; } } // Process any late evictions Object returnOverride = processCacheEvicts( contexts.get(CacheEvictOperation.class), false, returnValue); if (returnOverride != null) { returnValue = returnOverride; } // Mark as processed for re-invocation after late cache hit determination contexts.processed = true; return returnValue; }
Evaluate the cache operations for the given invocation: if a cache hit is present and no {@code @CachePut} applies, use the cached value; otherwise invoke the method. Put requests collected from a {@code @Cacheable} miss and from explicit {@code @CachePut} operations are then applied, followed by late {@code @CacheEvict} processing. @param cacheHit the cached value found for the invocation, if any @param invoker the callback used to invoke the target method @param method the method being invoked @param contexts the cache operation contexts @return the value to return to the caller, possibly overridden by a put or evict
java
spring-context/src/main/java/org/springframework/cache/interceptor/CacheAspectSupport.java
571
[ "cacheHit", "invoker", "method", "contexts" ]
Object
true
7
8.08
spring-projects/spring-framework
59,386
javadoc
false
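The `evaluate` record above implements the hit-or-invoke-then-put flow at the heart of Spring's caching aspect. A heavily simplified Python sketch of just that flow (no put/evict operations, conditions, or reactive handling):

```python
def cached_call(cache, key, compute, cache_hit_allowed=True):
    """Return (value, invoked): use the cached value when present,
    otherwise invoke compute() and record the result, mirroring the
    @Cacheable miss-then-put path."""
    if cache_hit_allowed and key in cache:
        return cache[key], False   # hit: no method invocation
    value = compute()               # miss: invoke the operation
    cache[key] = value              # put request from the @Cacheable miss
    return value, True
```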
appendFixedWidthPadRight
public StrBuilder appendFixedWidthPadRight(final Object obj, final int width, final char padChar) { if (width > 0) { ensureCapacity(size + width); String str = ObjectUtils.toString(obj, this::getNullText); if (str == null) { str = StringUtils.EMPTY; } final int strLen = str.length(); if (strLen >= width) { str.getChars(0, width, buffer, size); } else { str.getChars(0, strLen, buffer, size); final int fromIndex = size + strLen; Arrays.fill(buffer, fromIndex, fromIndex + width - strLen, padChar); } size += width; } return this; }
Appends an object to the builder padding on the right to a fixed length. The {@code toString} of the object is used. If the object is larger than the length, the right-hand side is lost. If the object is null, the null text value is used. @param obj the object to append, null uses null text @param width the fixed field width, zero or negative has no effect @param padChar the pad character to use @return {@code this} instance.
java
src/main/java/org/apache/commons/lang3/text/StrBuilder.java
907
[ "obj", "width", "padChar" ]
StrBuilder
true
4
7.92
apache/commons-lang
2,896
javadoc
false
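The `appendFixedWidthPadRight` record above truncates or right-pads a rendered value to a fixed width. The same behaviour in a few lines of Python (here `None` renders as the empty string, a simplification of the configurable null text):

```python
def fixed_width_pad_right(obj, width, pad_char=" "):
    """Render obj, then truncate to width or pad on the right with
    pad_char; non-positive width yields an empty contribution."""
    if width <= 0:
        return ""
    s = "" if obj is None else str(obj)
    return s[:width] if len(s) >= width else s + pad_char * (width - len(s))
```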
uniqWith
function uniqWith(array, comparator) { comparator = typeof comparator == 'function' ? comparator : undefined; return (array && array.length) ? baseUniq(array, undefined, comparator) : []; }
This method is like `_.uniq` except that it accepts `comparator` which is invoked to compare elements of `array`. The order of result values is determined by the order they occur in the array. The comparator is invoked with two arguments: (arrVal, othVal). @static @memberOf _ @since 4.0.0 @category Array @param {Array} array The array to inspect. @param {Function} [comparator] The comparator invoked per element. @returns {Array} Returns the new duplicate free array. @example var objects = [{ 'x': 1, 'y': 2 }, { 'x': 2, 'y': 1 }, { 'x': 1, 'y': 2 }]; _.uniqWith(objects, _.isEqual); // => [{ 'x': 1, 'y': 2 }, { 'x': 2, 'y': 1 }]
javascript
lodash.js
8,544
[ "array", "comparator" ]
false
4
7.44
lodash/lodash
61,490
jsdoc
false
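The `uniqWith` record above deduplicates with a user-supplied comparator, keeping the first occurrence. A quadratic but faithful Python sketch of the same contract:

```python
def uniq_with(array, comparator=None):
    """Keep the first occurrence of each element, where equality is
    decided by comparator(a, b); defaults to ==, like lodash falling
    back when no function is given."""
    if not array:
        return []
    if comparator is None:
        comparator = lambda a, b: a == b
    result = []
    for item in array:
        if not any(comparator(item, seen) for seen in result):
            result.append(item)
    return result
```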
generateHash
private byte[] generateHash(@Nullable Class<?> sourceClass) { ApplicationHome home = new ApplicationHome(sourceClass); MessageDigest digest; try { digest = MessageDigest.getInstance("SHA-1"); update(digest, home.getSource()); update(digest, home.getDir()); update(digest, System.getProperty("user.dir")); if (!NativeDetector.inNativeImage()) { update(digest, System.getProperty("java.home")); } update(digest, System.getProperty("java.class.path")); update(digest, System.getProperty("sun.java.command")); update(digest, System.getProperty("sun.boot.class.path")); return digest.digest(); } catch (Exception ex) { throw new IllegalStateException(ex); } }
Generate a SHA-1 hash identifying this application, computed from the application home source and directory, the working directory, and relevant JVM system properties. @param sourceClass the source class of the application, if any @return the hash bytes
java
core/spring-boot/src/main/java/org/springframework/boot/system/ApplicationTemp.java
145
[ "sourceClass" ]
true
3
7.28
spring-projects/spring-boot
79,428
javadoc
false
rootLayer
function rootLayer(client: Client): CompositeProxyLayer { const prototype = Object.getPrototypeOf(client._originalClient) const allKeys = [...new Set(Object.getOwnPropertyNames(prototype))] return { getKeys() { return allKeys }, getPropertyValue(prop) { return client[prop] }, } }
Creates the root proxy layer for the client: exposes every own property name of the original client's prototype and resolves property access against the client itself. @param client the client to create the proxy layer around @returns a {@link CompositeProxyLayer} over the client's prototype members
typescript
packages/client/src/runtime/core/model/applyModelsAndClientExtensions.ts
39
[ "client" ]
true
1
6.88
prisma/prisma
44,834
jsdoc
false
doInvoke
protected @Nullable Object doInvoke(@Nullable Object... args) { Object bean = getTargetBean(); // Detect package-protected NullBean instance through equals(null) check if (bean.equals(null)) { return null; } try { ReflectionUtils.makeAccessible(this.method); if (KotlinDetector.isSuspendingFunction(this.method)) { return CoroutinesUtils.invokeSuspendingFunction(this.method, bean, args); } return this.method.invoke(bean, args); } catch (IllegalArgumentException ex) { assertTargetBean(this.method, bean, args); throw new IllegalStateException(getInvocationErrorMessage(bean, ex.getMessage(), args), ex); } catch (IllegalAccessException | InaccessibleObjectException ex) { throw new IllegalStateException(getInvocationErrorMessage(bean, ex.getMessage(), args), ex); } catch (InvocationTargetException ex) { // Throw underlying exception Throwable targetException = ex.getTargetException(); if (targetException instanceof RuntimeException runtimeException) { throw runtimeException; } else { String msg = getInvocationErrorMessage(bean, "Failed to invoke event listener method", args); throw new UndeclaredThrowableException(targetException, msg); } } }
Invoke the event listener method with the given argument values.
java
spring-context/src/main/java/org/springframework/context/event/ApplicationListenerMethodAdapter.java
363
[]
Object
true
7
6.88
spring-projects/spring-framework
59,386
javadoc
false
parse
@Override public Date parse(String text, Locale locale) throws ParseException { try { return getDateFormat(locale).parse(text); } catch (ParseException ex) { Set<String> fallbackPatterns = new LinkedHashSet<>(); String isoPattern = ISO_FALLBACK_PATTERNS.get(this.iso); if (isoPattern != null) { fallbackPatterns.add(isoPattern); } if (!ObjectUtils.isEmpty(this.fallbackPatterns)) { Collections.addAll(fallbackPatterns, this.fallbackPatterns); } if (!fallbackPatterns.isEmpty()) { for (String pattern : fallbackPatterns) { try { DateFormat dateFormat = configureDateFormat(new SimpleDateFormat(pattern, locale)); // Align timezone for parsing format with printing format if ISO is set. if (this.iso != null && this.iso != ISO.NONE) { dateFormat.setTimeZone(UTC); } return dateFormat.parse(text); } catch (ParseException ignoredException) { // Ignore fallback parsing exceptions since the exception thrown below // will include information from the "source" if available -- for example, // the toString() of a @DateTimeFormat annotation. } } } if (this.source != null) { ParseException parseException = new ParseException( String.format("Unable to parse date time value \"%s\" using configuration from %s", text, this.source), ex.getErrorOffset()); parseException.initCause(ex); throw parseException; } // else rethrow original exception throw ex; } }
Parse the given text into a Date using the locale's configured format. If the primary parse fails, the ISO fallback pattern (if any) and any configured fallback patterns are tried in turn; if all fail, the original exception is rethrown, enriched with information about the configuration source when available. @param text the text to parse @param locale the locale to use @return the parsed Date @throws ParseException if the value cannot be parsed with any pattern
java
spring-context/src/main/java/org/springframework/format/datetime/DateFormatter.java
208
[ "text", "locale" ]
Date
true
9
6
spring-projects/spring-framework
59,386
javadoc
false
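The `parse` record above tries a primary date format, then a list of fallback patterns, and re-raises the primary error if everything fails. The same strategy in Python with `datetime.strptime` (function and parameter names are illustrative):

```python
from datetime import datetime


def parse_with_fallbacks(text, primary, fallback_patterns=()):
    """Try the primary strptime pattern first, then each fallback;
    re-raise the original error if no pattern matches, mirroring
    DateFormatter.parse."""
    try:
        return datetime.strptime(text, primary)
    except ValueError as primary_error:
        for pattern in fallback_patterns:
            try:
                return datetime.strptime(text, pattern)
            except ValueError:
                continue  # ignore fallback failures, keep the primary error
        raise primary_error
```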
translate
def translate(a, table, deletechars=None): """ For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. Calls :meth:`str.translate` element-wise. Parameters ---------- a : array-like, with `np.bytes_` or `np.str_` dtype table : str of length 256 deletechars : str Returns ------- out : ndarray Output array of str or unicode, depending on input type See Also -------- str.translate Examples -------- >>> import numpy as np >>> a = np.array(['a1b c', '1bca', 'bca1']) >>> table = a[0].maketrans('abc', '123') >>> deletechars = ' ' >>> np.char.translate(a, table, deletechars) array(['112 3', '1231', '2311'], dtype='<U5') """ a_arr = np.asarray(a) if issubclass(a_arr.dtype.type, np.str_): return _vec_string( a_arr, a_arr.dtype, 'translate', (table,)) else: return _vec_string( a_arr, a_arr.dtype, 'translate', [table] + _clean_args(deletechars) )
For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. Calls :meth:`str.translate` element-wise. Parameters ---------- a : array-like, with `np.bytes_` or `np.str_` dtype table : str of length 256 deletechars : str Returns ------- out : ndarray Output array of str or unicode, depending on input type See Also -------- str.translate Examples -------- >>> import numpy as np >>> a = np.array(['a1b c', '1bca', 'bca1']) >>> table = a[0].maketrans('abc', '123') >>> deletechars = ' ' >>> np.char.translate(a, table, deletechars) array(['112 3', '1231', '2311'], dtype='<U5')
python
numpy/_core/strings.py
1,681
[ "a", "table", "deletechars" ]
false
3
7.52
numpy/numpy
31,054
numpy
false
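The `translate` record above applies `str.translate` element-wise with optional character deletion. A pure-Python sketch of the element-wise behaviour (deletion is applied before the table mapping; the real NumPy bytes/str branching is omitted):

```python
def translate_all(strings, table, deletechars=""):
    """For each string: remove deletechars, then map remaining characters
    through the translation table, like np.char.translate element-wise."""
    delete_map = {ord(ch): None for ch in deletechars}
    return [s.translate(delete_map).translate(table) for s in strings]
```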
customizers
public SimpleAsyncTaskExecutorBuilder customizers( Iterable<? extends SimpleAsyncTaskExecutorCustomizer> customizers) { Assert.notNull(customizers, "'customizers' must not be null"); return new SimpleAsyncTaskExecutorBuilder(this.virtualThreads, this.threadNamePrefix, this.cancelRemainingTasksOnClose, this.rejectTasksWhenLimitReached, this.concurrencyLimit, this.taskDecorator, append(null, customizers), this.taskTerminationTimeout); }
Set the {@link SimpleAsyncTaskExecutorCustomizer customizers} that should be applied to the {@link SimpleAsyncTaskExecutor}. Customizers are applied in the order that they were added after builder configuration has been applied. Setting this value will replace any previously configured customizers. @param customizers the customizers to set @return a new builder instance @see #additionalCustomizers(Iterable)
java
core/spring-boot/src/main/java/org/springframework/boot/task/SimpleAsyncTaskExecutorBuilder.java
197
[ "customizers" ]
SimpleAsyncTaskExecutorBuilder
true
1
6.24
spring-projects/spring-boot
79,428
javadoc
false
_standardize_out_kwarg
def _standardize_out_kwarg(**kwargs) -> dict: """ If kwargs contain "out1" and "out2", replace that with a tuple "out" np.divmod, np.modf, np.frexp can have either `out=(out1, out2)` or `out1=out1, out2=out2` """ if "out" not in kwargs and "out1" in kwargs and "out2" in kwargs: out1 = kwargs.pop("out1") out2 = kwargs.pop("out2") out = (out1, out2) kwargs["out"] = out return kwargs
If kwargs contain "out1" and "out2", replace that with a tuple "out" np.divmod, np.modf, np.frexp can have either `out=(out1, out2)` or `out1=out1, out2=out2`
python
pandas/core/arraylike.py
421
[]
dict
true
4
6.88
pandas-dev/pandas
47,362
unknown
false
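The `_standardize_out_kwarg` record above folds `out1`/`out2` keyword arguments into a single `out` tuple. A compact restatement of the same normalization:

```python
def standardize_out_kwarg(**kwargs):
    """Fold out1/out2 into out=(out1, out2) when no explicit 'out' is
    given, as pandas does for np.divmod, np.modf and np.frexp."""
    if "out" not in kwargs and "out1" in kwargs and "out2" in kwargs:
        kwargs["out"] = (kwargs.pop("out1"), kwargs.pop("out2"))
    return kwargs
```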
maybeExecuteRetention
Set<Index> maybeExecuteRetention( ProjectMetadata project, DataStream dataStream, TimeValue dataRetention, TimeValue failureRetention, Set<Index> indicesToExcludeForRemainingRun ) { if (dataRetention == null && failureRetention == null) { return Set.of(); } List<Index> backingIndicesOlderThanRetention = dataStream.getIndicesPastRetention( project::index, nowSupplier, dataRetention, false ); List<Index> failureIndicesOlderThanRetention = dataStream.getIndicesPastRetention( project::index, nowSupplier, failureRetention, true ); if (backingIndicesOlderThanRetention.isEmpty() && failureIndicesOlderThanRetention.isEmpty()) { return Set.of(); } Set<Index> indicesToBeRemoved = new HashSet<>(); if (backingIndicesOlderThanRetention.isEmpty() == false) { assert dataStream.getDataLifecycle() != null : "data stream should have data lifecycle if we have 'old' indices"; for (Index index : backingIndicesOlderThanRetention) { if (indicesToExcludeForRemainingRun.contains(index) == false) { IndexMetadata backingIndex = project.index(index); assert backingIndex != null : "the data stream backing indices must exist"; IndexMetadata.DownsampleTaskStatus downsampleStatus = INDEX_DOWNSAMPLE_STATUS.get(backingIndex.getSettings()); // we don't want to delete the source index if they have an in-progress downsampling operation because the // target downsample index will remain in the system as a standalone index if (downsampleStatus == STARTED) { // there's an opportunity here to cancel downsampling and delete the source index now logger.trace( "Data stream lifecycle skips deleting index [{}] even though its retention period [{}] has lapsed " + "because there's a downsampling operation currently in progress for this index. Current downsampling " + "status is [{}]. When downsampling completes, DSL will delete this index.", index.getName(), dataRetention, downsampleStatus ); } else { // UNKNOWN is the default value, and has no real use. 
So index should be deleted // SUCCESS meaning downsampling completed successfully and there is nothing in progress, so we can also delete indicesToBeRemoved.add(index); // there's an opportunity here to batch the delete requests (i.e. delete 100 indices / request) // let's start simple and reevaluate String indexName = backingIndex.getIndex().getName(); deleteIndexOnce(project.id(), indexName, "the lapsed [" + dataRetention + "] retention period"); } } } } if (failureIndicesOlderThanRetention.isEmpty() == false) { assert dataStream.getFailuresLifecycle() != null : "data stream should have failures lifecycle if we have 'old' indices"; for (Index index : failureIndicesOlderThanRetention) { if (indicesToExcludeForRemainingRun.contains(index) == false) { IndexMetadata failureIndex = project.index(index); assert failureIndex != null : "the data stream failure indices must exist"; indicesToBeRemoved.add(index); // there's an opportunity here to batch the delete requests (i.e. delete 100 indices / request) // let's start simple and reevaluate String indexName = failureIndex.getIndex().getName(); deleteIndexOnce(project.id(), indexName, "the lapsed [" + failureRetention + "] retention period"); } } } return indicesToBeRemoved; }
This method sends requests to delete any indices in the datastream that exceed its retention policy. It returns the set of indices it has sent delete requests for. @param project The project metadata from which to get index metadata @param dataStream The data stream @param dataRetention The retention period for the data stream's backing indices, if any @param failureRetention The retention period for the data stream's failure indices, if any @param indicesToExcludeForRemainingRun Indices to exclude from retention even if it would be time for them to be deleted @return The set of indices that delete requests have been sent for
java
modules/data-streams/src/main/java/org/elasticsearch/datastreams/lifecycle/DataStreamLifecycleService.java
933
[ "project", "dataStream", "dataRetention", "failureRetention", "indicesToExcludeForRemainingRun" ]
true
10
7.84
elastic/elasticsearch
75,680
javadoc
false
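The `maybeExecuteRetention` record above deletes indices that are past their retention period, unless they are explicitly excluded or have a downsampling operation in progress. A toy Python model of that filtering step (it captures the selection logic only, not the delete requests or lifecycle assertions):

```python
from datetime import datetime, timedelta, timezone


def indices_past_retention(index_ages, retention, excluded=frozenset(), in_progress=frozenset()):
    """Select indices older than the retention period that are neither
    excluded for this run nor mid-downsampling."""
    now = datetime.now(timezone.utc)
    return {
        name
        for name, created in index_ages.items()
        if retention is not None
        and now - created > retention
        and name not in excluded
        and name not in in_progress
    }
```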
aggregate
def aggregate(self, func=None, *args, **kwargs): """ Aggregate using one or more operations over the specified axis. Parameters ---------- func : function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series/DataFrame or when passed to Series/DataFrame.apply. Accepted combinations are: - function - string function name - list of functions and/or function names, e.g. ``[np.sum, 'mean']`` - dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to `func`. **kwargs Keyword arguments to pass to `func`. Returns ------- scalar, Series or DataFrame The return can be: * scalar : when Series.agg is called with single function * Series : when DataFrame.agg is called with a single function * DataFrame : when DataFrame.agg is called with several functions See Also -------- DataFrame.aggregate : Similar DataFrame method. Series.aggregate : Similar Series method. Notes ----- The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`, `var`), where the default is to compute the aggregation of the flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d, axis=0)``. `agg` is an alias for `aggregate`. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` for more details. A passed user-defined-function will be passed a Series for evaluation. If ``func`` defines an index relabeling, ``axis`` must be ``0`` or ``index``. 
Examples -------- >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) >>> df A B C 0 1 4 7 1 2 5 8 2 3 6 9 >>> df.rolling(2, win_type="boxcar").agg("mean") A B C 0 NaN NaN NaN 1 1.5 4.5 7.5 2 2.5 5.5 8.5 """ result = ResamplerWindowApply(self, func, args=args, kwargs=kwargs).agg() if result is None: # these must apply directly result = func(self) return result
Aggregate using one or more operations over the specified axis. Parameters ---------- func : function, str, list or dict Function to use for aggregating the data. If a function, must either work when passed a Series/DataFrame or when passed to Series/DataFrame.apply. Accepted combinations are: - function - string function name - list of functions and/or function names, e.g. ``[np.sum, 'mean']`` - dict of axis labels -> functions, function names or list of such. *args Positional arguments to pass to `func`. **kwargs Keyword arguments to pass to `func`. Returns ------- scalar, Series or DataFrame The return can be: * scalar : when Series.agg is called with single function * Series : when DataFrame.agg is called with a single function * DataFrame : when DataFrame.agg is called with several functions See Also -------- DataFrame.aggregate : Similar DataFrame method. Series.aggregate : Similar Series method. Notes ----- The aggregation operations are always performed over an axis, either the index (default) or the column axis. This behavior is different from `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`, `var`), where the default is to compute the aggregation of the flattened array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d, axis=0)``. `agg` is an alias for `aggregate`. Use the alias. Functions that mutate the passed object can produce unexpected behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` for more details. A passed user-defined-function will be passed a Series for evaluation. If ``func`` defines an index relabeling, ``axis`` must be ``0`` or ``index``. Examples -------- >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) >>> df A B C 0 1 4 7 1 2 5 8 2 3 6 9 >>> df.rolling(2, win_type="boxcar").agg("mean") A B C 0 NaN NaN NaN 1 1.5 4.5 7.5 2 2.5 5.5 8.5
python
pandas/core/window/rolling.py
1,218
[ "self", "func" ]
false
2
7.76
pandas-dev/pandas
47,362
numpy
false
_should_retry_on_error
def _should_retry_on_error(self, exception: BaseException) -> bool: """ Determine if an exception should trigger a retry. :param exception: The exception that occurred :return: True if the exception should trigger a retry, False otherwise """ if isinstance(exception, ClientError): error_code = exception.response.get("Error", {}).get("Code", "") retryable_errors = { "ThrottlingException", "RequestLimitExceeded", "ServiceUnavailable", "InternalFailure", "InternalServerError", "TooManyRequestsException", "RequestTimeout", "RequestTimeoutException", "HttpTimeoutException", } return error_code in retryable_errors return False
Determine if an exception should trigger a retry. :param exception: The exception that occurred :return: True if the exception should trigger a retry, False otherwise
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/glue.py
140
[ "self", "exception" ]
bool
true
2
8.08
apache/airflow
43,597
sphinx
false
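The `_should_retry_on_error` record above checks a botocore error code against a fixed set of retryable codes. The decision can be sketched on the boto3-style error payload directly, without constructing a `ClientError` (the code set is copied from the record):

```python
RETRYABLE_ERROR_CODES = {
    "ThrottlingException", "RequestLimitExceeded", "ServiceUnavailable",
    "InternalFailure", "InternalServerError", "TooManyRequestsException",
    "RequestTimeout", "RequestTimeoutException", "HttpTimeoutException",
}


def should_retry(error_response):
    """Return True when the {'Error': {'Code': ...}} payload names a
    retryable AWS error code."""
    code = error_response.get("Error", {}).get("Code", "")
    return code in RETRYABLE_ERROR_CODES
```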
write
@Override public long write(ByteBuffer[] srcs, int offset, int length) throws IOException { return socketChannel.write(srcs, offset, length); }
Writes a sequence of bytes to this channel from the subsequence of the given buffers. @param srcs The buffers from which bytes are to be retrieved @param offset The offset within the buffer array of the first buffer from which bytes are to be retrieved; must be non-negative and no larger than srcs.length. @param length The maximum number of buffers to be accessed; must be non-negative and no larger than srcs.length - offset. @return the number of bytes written, possibly zero. @throws IOException If some other I/O error occurs
java
clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java
163
[ "srcs", "offset", "length" ]
true
1
6.96
apache/kafka
31,560
javadoc
false
toDashedCase
public static String toDashedCase(String value) { StringBuilder dashed = new StringBuilder(); Character previous = null; for (int i = 0; i < value.length(); i++) { char current = value.charAt(i); if (SEPARATORS.contains(current)) { dashed.append("-"); } else if (Character.isUpperCase(current) && previous != null && !SEPARATORS.contains(previous)) { dashed.append("-").append(current); } else { dashed.append(current); } previous = current; } return dashed.toString().toLowerCase(Locale.ENGLISH); }
Return the idiomatic metadata format for the given {@code value}. @param value a value @return the idiomatic format for the value, or the value itself if it already complies with the idiomatic metadata format.
java
configuration-metadata/spring-boot-configuration-processor/src/main/java/org/springframework/boot/configurationprocessor/support/ConventionUtils.java
47
[ "value" ]
String
true
6
7.92
spring-projects/spring-boot
79,428
javadoc
false
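The `toDashedCase` record above maps separators to dashes and inserts a dash before an upper-case letter that follows a non-separator. A line-for-line Python sketch (the separator set `_ - .` is assumed here; the record does not show the `SEPARATORS` constant):

```python
SEPARATORS = {"_", "-", "."}  # assumed separator set, not shown in the record


def to_dashed_case(value):
    """Convert a property name to dashed form: separators become '-',
    and an upper-case letter after a non-separator gains a leading '-'."""
    dashed = []
    previous = None
    for ch in value:
        if ch in SEPARATORS:
            dashed.append("-")
        elif ch.isupper() and previous is not None and previous not in SEPARATORS:
            dashed.append("-" + ch)
        else:
            dashed.append(ch)
        previous = ch
    return "".join(dashed).lower()
```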
charsStartIndex
function charsStartIndex(strSymbols, chrSymbols) { var index = -1, length = strSymbols.length; while (++index < length && baseIndexOf(chrSymbols, strSymbols[index], 0) > -1) {} return index; }
Used by `_.trim` and `_.trimStart` to get the index of the first string symbol that is not found in the character symbols. @private @param {Array} strSymbols The string symbols to inspect. @param {Array} chrSymbols The character symbols to find. @returns {number} Returns the index of the first unmatched string symbol.
javascript
lodash.js
1,073
[ "strSymbols", "chrSymbols" ]
false
3
6.08
lodash/lodash
61,490
jsdoc
false
transformAccessorsToExpression
function transformAccessorsToExpression(receiver: LeftHandSideExpression, { firstAccessor, getAccessor, setAccessor }: AllAccessorDeclarations, container: Node, startsOnNewLine: boolean): Expression {
    // To align with source maps in the old emitter, the receiver and property name
    // arguments are both mapped contiguously to the accessor name.

    // TODO(rbuckton): Does this need to be parented?
    const target = setParent(setTextRange(factory.cloneNode(receiver), receiver), receiver.parent);
    setEmitFlags(target, EmitFlags.NoComments | EmitFlags.NoTrailingSourceMap);
    setSourceMapRange(target, firstAccessor.name);

    const visitedAccessorName = visitNode(firstAccessor.name, visitor, isPropertyName);
    Debug.assert(visitedAccessorName);
    if (isPrivateIdentifier(visitedAccessorName)) {
        return Debug.failBadSyntaxKind(visitedAccessorName, "Encountered unhandled private identifier while transforming ES2015.");
    }
    const propertyName = createExpressionForPropertyName(factory, visitedAccessorName);
    setEmitFlags(propertyName, EmitFlags.NoComments | EmitFlags.NoLeadingSourceMap);
    setSourceMapRange(propertyName, firstAccessor.name);

    const properties: ObjectLiteralElementLike[] = [];
    if (getAccessor) {
        const getterFunction = transformFunctionLikeToExpression(getAccessor, /*location*/ undefined, /*name*/ undefined, container);
        setSourceMapRange(getterFunction, getSourceMapRange(getAccessor));
        setEmitFlags(getterFunction, EmitFlags.NoLeadingComments);
        const getter = factory.createPropertyAssignment("get", getterFunction);
        setCommentRange(getter, getCommentRange(getAccessor));
        properties.push(getter);
    }

    if (setAccessor) {
        const setterFunction = transformFunctionLikeToExpression(setAccessor, /*location*/ undefined, /*name*/ undefined, container);
        setSourceMapRange(setterFunction, getSourceMapRange(setAccessor));
        setEmitFlags(setterFunction, EmitFlags.NoLeadingComments);
        const setter = factory.createPropertyAssignment("set", setterFunction);
        setCommentRange(setter, getCommentRange(setAccessor));
        properties.push(setter);
    }

    properties.push(
        factory.createPropertyAssignment("enumerable", getAccessor || setAccessor ? factory.createFalse() : factory.createTrue()),
        factory.createPropertyAssignment("configurable", factory.createTrue()),
    );

    const call = factory.createCallExpression(
        factory.createPropertyAccessExpression(factory.createIdentifier("Object"), "defineProperty"),
        /*typeArguments*/ undefined,
        [
            target,
            propertyName,
            factory.createObjectLiteralExpression(properties, /*multiLine*/ true),
        ],
    );
    if (startsOnNewLine) {
        startOnNewLine(call);
    }

    return call;
}
Transforms a set of related get/set accessors into an expression for either a class body function or an ObjectLiteralExpression with computed properties. @param receiver The receiver for the member.
typescript
src/compiler/transformers/es2015.ts
2,354
[ "receiver", "{ firstAccessor, getAccessor, setAccessor }", "container", "startsOnNewLine" ]
true
7
6.56
microsoft/TypeScript
107,154
jsdoc
false
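The TypeScript record above groups a property's get and set accessors into one descriptor object passed to a single `Object.defineProperty` call. A minimal Python sketch of that grouping idea (not the compiler's code; `merge_accessors` and its `(kind, name, fn)` triples are hypothetical names chosen for illustration) is:

```python
# Sketch of the idea behind transformAccessorsToExpression: paired get/set
# accessors for one property name are merged into a single descriptor,
# mirroring the one Object.defineProperty call the transformer emits.
# The names below (merge_accessors, the triple shape) are illustrative only.

def merge_accessors(accessors):
    """Group (kind, name, fn) accessor triples into one descriptor per name."""
    descriptors = {}
    for kind, name, fn in accessors:  # kind is "get" or "set"
        # enumerable: False matches the transformer emitting createFalse()
        # whenever a get or set accessor is present.
        desc = descriptors.setdefault(name, {"enumerable": False, "configurable": True})
        desc[kind] = fn
    return descriptors

accessors = [
    ("get", "x", lambda self: self._x),
    ("set", "x", lambda self, v: setattr(self, "_x", v)),
]
descs = merge_accessors(accessors)
# descs["x"] now carries both "get" and "set", like the emitted
# Object.defineProperty(target, "x", { get: ..., set: ..., ... }) call.
```

The point of the grouping is that the emitter visits `AllAccessorDeclarations` once per property name, not once per accessor, so a getter/setter pair costs one `defineProperty` call rather than two.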
exponential_backoff_retry
def exponential_backoff_retry(
    last_attempt_time: datetime,
    attempts_since_last_successful: int,
    callable_function: Callable,
    max_delay: int = 60 * 2,
    max_attempts: int = -1,
    exponent_base: int = 4,
) -> None:
    """
    Retry a callable function with exponential backoff between attempts if it raises an exception.

    :param last_attempt_time: Timestamp of last attempt call.
    :param attempts_since_last_successful: Number of attempts since last success.
    :param callable_function: Callable function that will be called if enough time has passed.
    :param max_delay: Maximum delay in seconds between retries. Default 120.
    :param max_attempts: Maximum number of attempts before giving up. Default -1 (no limit).
    :param exponent_base: Exponent base to calculate delay. Default 4.
    """
    if max_attempts != -1 and attempts_since_last_successful >= max_attempts:
        log.error("Max attempts reached. Exiting.")
        return

    next_retry_time = last_attempt_time + calculate_next_attempt_delay(
        attempt_number=attempts_since_last_successful, max_delay=max_delay, exponent_base=exponent_base
    )

    current_time = timezone.utcnow()

    if current_time >= next_retry_time:
        try:
            callable_function()
        except Exception:
            log.exception("Error calling %r", callable_function.__name__)
            next_delay = calculate_next_attempt_delay(
                attempts_since_last_successful + 1, max_delay, exponent_base
            )
            log.info("Waiting for %s seconds before retrying.", next_delay)
Retry a callable function with exponential backoff between attempts if it raises an exception. :param last_attempt_time: Timestamp of last attempt call. :param attempts_since_last_successful: Number of attempts since last success. :param callable_function: Callable function that will be called if enough time has passed. :param max_delay: Maximum delay in seconds between retries. Default 120. :param max_attempts: Maximum number of attempts before giving up. Default -1 (no limit). :param exponent_base: Exponent base to calculate delay. Default 4.
python
providers/amazon/src/airflow/providers/amazon/aws/executors/utils/exponential_backoff_retry.py
46
[ "last_attempt_time", "attempts_since_last_successful", "callable_function", "max_delay", "max_attempts", "exponent_base" ]
None
true
4
6.72
apache/airflow
43,597
sphinx
false
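The Airflow function above relies on a `calculate_next_attempt_delay` helper that is referenced but not shown in this record. A plausible minimal sketch (an assumption for illustration, not Airflow's actual implementation) caps `exponent_base ** attempt_number` at `max_delay` and returns a `timedelta`, which is what the `last_attempt_time + ...` addition requires:

```python
# Hypothetical sketch of calculate_next_attempt_delay (the helper referenced
# above); NOT Airflow's implementation. It raises exponent_base to the attempt
# number, clamps the result at max_delay seconds, and returns a timedelta so
# it can be added to a datetime.
from datetime import timedelta

def calculate_next_attempt_delay(
    attempt_number: int, max_delay: int = 120, exponent_base: int = 4
) -> timedelta:
    return timedelta(seconds=min(exponent_base**attempt_number, max_delay))

# With the defaults, delays grow 4, 16, 64, then clamp at the 120-second cap:
print([calculate_next_attempt_delay(n).total_seconds() for n in range(1, 5)])
# → [4.0, 16.0, 64.0, 120.0]
```

The clamp is why `max_delay` defaults to 120 in the record above: after the third failed attempt, every subsequent retry waits the same capped interval rather than growing without bound.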
createCollection
abstract Collection<V> createCollection();
Creates the collection of values for a single key. <p>Collections with weak, soft, or phantom references are not supported. Each call to {@code createCollection} should create a new instance. <p>The returned collection class determines whether duplicate key-value pairs are allowed. @return an empty collection of values
java
android/guava/src/com/google/common/collect/AbstractMapBasedMultimap.java
155
[]
true
1
6.48
google/guava
51,352
javadoc
false
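The Guava record above documents the factory-method pattern behind `AbstractMapBasedMultimap`: a subclass's `createCollection()` supplies a fresh, empty per-key value collection, and the collection type chosen determines whether duplicate key-value pairs survive. A minimal Python sketch of the same pattern (the `SimpleMultimap` class is an illustrative stand-in, not Guava's code):

```python
# Sketch of the createCollection pattern: the multimap asks a factory for a
# NEW empty collection per key, and that collection's semantics decide
# whether duplicate (key, value) pairs are kept. SimpleMultimap is an
# illustrative name, not part of any real library.

class SimpleMultimap:
    def __init__(self, create_collection):
        self._map = {}
        # Like createCollection(), this must return a new instance each call;
        # sharing one collection across keys would corrupt the multimap.
        self._create = create_collection

    def put(self, key, value):
        bucket = self._map.setdefault(key, self._create())
        if isinstance(bucket, set):
            bucket.add(value)
        else:
            bucket.append(value)

    def get(self, key):
        return self._map.get(key, self._create())

list_mm = SimpleMultimap(list)  # duplicates kept, like ArrayListMultimap
set_mm = SimpleMultimap(set)    # duplicates collapsed, like HashMultimap
for mm in (list_mm, set_mm):
    mm.put("k", 1)
    mm.put("k", 1)
# list_mm.get("k") == [1, 1]; set_mm.get("k") == {1}
```

This mirrors the javadoc's two requirements: each call returns a new empty collection, and the returned collection class, not the multimap itself, decides the duplicate-pair policy.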