Dataset schema (column, type, range):

function_name     string   lengths 1–57
function_code     string   lengths 20–4.99k
documentation     string   lengths 50–2k
language          string   5 values
file_path         string   lengths 8–166
line_number       int32    4–16.7k
parameters        list     lengths 0–20
return_type       string   lengths 0–131
has_type_hints    bool     2 classes
complexity        int32    1–51
quality_score     float32  6–9.68
repo_name         string   34 values
repo_stars        int32    2.9k–242k
docstring_style   string   7 values
is_async          bool     2 classes
yield
inline void yield() {
  auto fm = FiberManager::getFiberManagerUnsafe();
  if (fm) {
    fm->yield();
  } else {
    std::this_thread::yield();
  }
}
Returns a reference to a fiber-local context for the given Fiber. Should always be called with the same T for each fiber. Fiber-local context is lazily default-constructed on first request. When a new task is scheduled via addTask / addTaskRemote from a fiber, its fiber-local context is copied into the new fiber.
cpp
folly/fibers/FiberManagerInternal.h
720
[]
true
3
6.72
facebook/folly
30,157
doxygen
false
fsync
public static void fsync(final Path fileToSync, final boolean isDir, final boolean metaData) throws IOException {
    if (isDir && WINDOWS) {
        // opening a directory on Windows fails, directories can not be fsynced there
        if (Files.exists(fileToSync) == false) {
            // yet do not suppress trying to fsync directories that do not exist
            throw new NoSuchFileException(fileToSync.toString());
        }
        return;
    }
    try (FileChannel file = FileChannel.open(fileToSync, isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE)) {
        try {
            file.force(metaData);
        } catch (final IOException e) {
            if (isDir) {
                assert (LINUX || MAC_OS_X) == false
                    : "on Linux and MacOSX fsyncing a directory should not throw IOException, "
                        + "we just don't want to rely on that in production (undocumented); got: " + e;
                // ignore exception if it is a directory
                return;
            }
            // throw original exception
            throw e;
        }
    }
}
Ensure that any writes to the given file are written to the storage device that contains it. The {@code isDir} parameter specifies whether or not the path to sync is a directory. This is needed because we open for read and ignore an {@link IOException} since not all filesystems and operating systems support fsyncing on a directory. For regular files we must open for write for the fsync to have an effect. @param fileToSync the file to fsync @param isDir if true, the given file is a directory (we open for read and ignore {@link IOException}s, because not all file systems and operating systems allow fsyncing a directory) @param metaData if {@code true} both the file's content and metadata will be synced, otherwise only the file's content will be synced
java
libs/core/src/main/java/org/elasticsearch/core/IOUtils.java
288
[ "fileToSync", "isDir", "metaData" ]
void
true
8
6.88
elastic/elasticsearch
75,680
javadoc
false
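The open-then-force durability pattern documented in the row above can be sketched in Python. This is a hypothetical helper, not the Elasticsearch code; the flag choice and the name `fsync_path` are assumptions made for illustration:

```python
import os

def fsync_path(path: str, is_dir: bool = False) -> None:
    """Flush `path` to stable storage; tolerate fsync failures on directories."""
    # Directories must be opened read-only; a writable descriptor is used for
    # regular files to mirror the read-vs-write distinction described above.
    flags = os.O_RDONLY if is_dir else os.O_RDWR
    fd = os.open(path, flags)
    try:
        os.fsync(fd)
    except OSError:
        # Not all filesystems support fsync on a directory; ignore only then.
        if not is_dir:
            raise
    finally:
        os.close(fd)
```

As in the Java version, a failure to fsync a regular file is re-raised, while a directory fsync failure is swallowed.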
forMethod
public static AutowiredMethodArgumentsResolver forMethod(String methodName, Class<?>... parameterTypes) {
    return new AutowiredMethodArgumentsResolver(methodName, parameterTypes, false, null);
}
Create a new {@link AutowiredMethodArgumentsResolver} for the specified method where injection is optional. @param methodName the method name @param parameterTypes the factory method parameter types @return a new {@link AutowiredMethodArgumentsResolver} instance
java
spring-beans/src/main/java/org/springframework/beans/factory/aot/AutowiredMethodArgumentsResolver.java
87
[ "methodName" ]
AutowiredMethodArgumentsResolver
true
1
6
spring-projects/spring-framework
59,386
javadoc
false
convertToCreatableTopic
CreatableTopic convertToCreatableTopic() {
    CreatableTopic creatableTopic = new CreatableTopic().
        setName(name).
        setNumPartitions(numPartitions.orElse(CreateTopicsRequest.NO_NUM_PARTITIONS)).
        setReplicationFactor(replicationFactor.orElse(CreateTopicsRequest.NO_REPLICATION_FACTOR));
    if (replicasAssignments != null) {
        for (Entry<Integer, List<Integer>> entry : replicasAssignments.entrySet()) {
            creatableTopic.assignments().add(
                new CreatableReplicaAssignment().
                    setPartitionIndex(entry.getKey()).
                    setBrokerIds(entry.getValue()));
        }
    }
    if (configs != null) {
        for (Entry<String, String> entry : configs.entrySet()) {
            creatableTopic.configs().add(
                new CreatableTopicConfig().
                    setName(entry.getKey()).
                    setValue(entry.getValue()));
        }
    }
    return creatableTopic;
}
The configuration for the new topic, or null if no configs were ever specified.
java
clients/src/main/java/org/apache/kafka/clients/admin/NewTopic.java
125
[]
CreatableTopic
true
3
6.88
apache/kafka
31,560
javadoc
false
_split
def _split(self, X):
    """Generate indices to split data into training and test set.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        Training data, where `n_samples` is the number of samples
        and `n_features` is the number of features.

    Yields
    ------
    train : ndarray
        The training set indices for that split.
    test : ndarray
        The testing set indices for that split.
    """
    (X,) = indexable(X)
    n_samples = _num_samples(X)
    n_splits = self.n_splits
    n_folds = n_splits + 1
    gap = self.gap
    test_size = (
        self.test_size if self.test_size is not None else n_samples // n_folds
    )

    # Make sure we have enough samples for the given split parameters
    if n_folds > n_samples:
        raise ValueError(
            f"Cannot have number of folds={n_folds} greater"
            f" than the number of samples={n_samples}."
        )
    if n_samples - gap - (test_size * n_splits) <= 0:
        raise ValueError(
            f"Too many splits={n_splits} for number of samples"
            f"={n_samples} with test_size={test_size} and gap={gap}."
        )

    indices = np.arange(n_samples)
    test_starts = range(n_samples - n_splits * test_size, n_samples, test_size)

    for test_start in test_starts:
        train_end = test_start - gap
        if self.max_train_size and self.max_train_size < train_end:
            yield (
                indices[train_end - self.max_train_size : train_end],
                indices[test_start : test_start + test_size],
            )
        else:
            yield (
                indices[:train_end],
                indices[test_start : test_start + test_size],
            )
Generate indices to split data into training and test set. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data, where `n_samples` is the number of samples and `n_features` is the number of features. Yields ------ train : ndarray The training set indices for that split. test : ndarray The testing set indices for that split.
python
sklearn/model_selection/_split.py
1,268
[ "self", "X" ]
false
8
6.24
scikit-learn/scikit-learn
64,340
numpy
false
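The index arithmetic in `_split` can be illustrated with a dependency-free sketch. This is a simplified analogue of the scikit-learn logic using plain lists instead of ndarrays; the name `time_series_splits` is invented here:

```python
def time_series_splits(n_samples, n_splits, gap=0, test_size=None, max_train_size=None):
    """Yield (train, test) index lists for an expanding-window time series split."""
    n_folds = n_splits + 1
    test_size = test_size if test_size is not None else n_samples // n_folds
    if n_folds > n_samples:
        raise ValueError("more folds than samples")
    if n_samples - gap - test_size * n_splits <= 0:
        raise ValueError("too many splits for this test_size and gap")
    indices = list(range(n_samples))
    # Test windows tile the end of the series, one per split
    test_starts = range(n_samples - n_splits * test_size, n_samples, test_size)
    for test_start in test_starts:
        train_end = test_start - gap  # leave `gap` samples between train and test
        train = indices[:train_end]
        if max_train_size and max_train_size < train_end:
            train = indices[train_end - max_train_size:train_end]  # cap train window
        yield train, indices[test_start:test_start + test_size]
```

Each successive split extends the training window while the test window slides forward, which is what keeps the evaluation strictly causal.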
escape
private @Nullable String escape(@Nullable String name) {
    if (name == null) {
        return null;
    }
    for (String escape : ESCAPED) {
        name = name.replace(escape, "\\" + escape);
    }
    return name;
}
Return a string representation of the path without any escaping. @return the unescaped string representation
java
core/spring-boot/src/main/java/org/springframework/boot/json/JsonWriter.java
842
[ "name" ]
String
true
2
6.4
spring-projects/spring-boot
79,428
javadoc
false
find_path_from_directory
def find_path_from_directory(
    base_dir_path: str | os.PathLike[str],
    ignore_file_name: str,
    ignore_file_syntax: str = conf.get_mandatory_value(
        "core", "DAG_IGNORE_FILE_SYNTAX", fallback="glob"
    ),
) -> Generator[str, None, None]:
    """
    Recursively search the base path for a list of file paths that should not be ignored.

    :param base_dir_path: the base path to be searched
    :param ignore_file_name: the file name that specifies the patterns of files/dirs to be ignored
    :param ignore_file_syntax: the syntax of patterns in the ignore file: regexp or glob
    :return: a generator of file paths.
    """
    if ignore_file_syntax == "glob" or not ignore_file_syntax:
        return _find_path_from_directory(base_dir_path, ignore_file_name, _GlobIgnoreRule)
    if ignore_file_syntax == "regexp":
        return _find_path_from_directory(base_dir_path, ignore_file_name, _RegexpIgnoreRule)
    raise ValueError(f"Unsupported ignore_file_syntax: {ignore_file_syntax}")
Recursively search the base path for a list of file paths that should not be ignored. :param base_dir_path: the base path to be searched :param ignore_file_name: the file name that specifies the patterns of files/dirs to be ignored :param ignore_file_syntax: the syntax of patterns in the ignore file: regexp or glob :return: a generator of file paths.
python
airflow-core/src/airflow/utils/file.py
224
[ "base_dir_path", "ignore_file_name", "ignore_file_syntax" ]
Generator[str, None, None]
true
4
8.08
apache/airflow
43,597
sphinx
false
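The glob-vs-regexp dispatch can be mimicked with a small dependency-free sketch. This is an assumed analogue for illustration, not Airflow's `_GlobIgnoreRule`/`_RegexpIgnoreRule` classes:

```python
import fnmatch
import re

def make_ignore_rule(pattern: str, syntax: str = "glob"):
    """Return a predicate matching paths against a glob or regexp pattern."""
    if syntax == "glob" or not syntax:
        # fnmatch implements shell-style globbing ('*' matches any characters)
        return lambda p: fnmatch.fnmatch(p, pattern)
    if syntax == "regexp":
        # re.search returns a truthy match object (or None), so it works as a predicate
        return re.compile(pattern).search
    raise ValueError(f"Unsupported ignore_file_syntax: {syntax}")
```

The real function threads the chosen rule class through a recursive directory walk; here only the dispatch itself is shown.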
visualize_comparison
def visualize_comparison(
    profiling_results: dict[str, list[Performance]],
    title: Optional[str] = None,
    output_path: Optional[str] = None,
) -> None:
    """
    Create a single memory_bandwidth comparison plot from profiling results.

    Args:
        profiling_results: Dict mapping backend names to lists of Performance objects
        output_path: Path to save the plot (optional)
    """
    # Get backend colors
    backend_colors = get_backend_colors()

    # Extract settings from eager backend which runs all settings
    all_settings = []
    for perf in profiling_results["eager"]:
        all_settings.append(perf.setting)

    # Create single plot
    fig, ax = plt.subplots(1, 1, figsize=(12, 8))

    for backend in profiling_results:
        backend_perfs = profiling_results[backend]
        perf_dict = {perf.setting: perf for perf in backend_perfs}
        x_vals = []
        y_vals = []
        for i, setting in enumerate(all_settings):
            if setting in perf_dict:
                x_vals.append(i)
                y_vals.append(perf_dict[setting].memory_bandwidth)
        if x_vals:  # Only plot if we have data
            color = backend_colors.get(backend, backend_colors["default"])
            ax.plot(
                x_vals,
                y_vals,
                "o-",
                label=backend,
                color=color,
                linewidth=2,
                markersize=8,
                alpha=0.8,
            )

    # Configure the plot
    ax.set_title(title or "Memory Bandwidth Comparison", fontsize=16)
    ax.set_xlabel("Shape", fontsize=12)
    ax.set_ylabel("memory bandwidth (GB/s)", fontsize=12)
    ax.set_xticks(range(len(all_settings)))
    ax.set_xticklabels(
        [
            s.replace("shape: ", "").replace("[", "").replace("]", "")
            for s in all_settings
        ],
        rotation=45,
        ha="right",
    )
    ax.legend(fontsize=10)
    ax.grid(True, alpha=0.3)
    plt.tight_layout()

    # Save the plot if output path is provided
    if output_path:
        # Save as PNG
        os.makedirs("pics", exist_ok=True)
        full_path = os.path.join("pics", output_path + ".png")
        plt.savefig(full_path, dpi=300, bbox_inches="tight", facecolor="white")
        print(f"Chart saved to {full_path}")
    plt.close()
Create a single memory_bandwidth comparison plot from profiling results. Args: profiling_results: Dict mapping backend names to lists of Performance objects output_path: Path to save the plot (optional)
python
benchmarks/dynamo/genai_layers/utils.py
243
[ "profiling_results", "title", "output_path" ]
None
true
8
6.16
pytorch/pytorch
96,034
google
false
apply_async
def apply_async(self, args=None, kwargs=None, route_name=None, **options):
    """Apply this task asynchronously.

    Arguments:
        args (Tuple): Partial args to be prepended to the existing args.
        kwargs (Dict): Partial kwargs to be merged with existing kwargs.
        options (Dict): Partial options to be merged with existing options.

    Returns:
        ~@AsyncResult: promise of future evaluation.

    See also:
        :meth:`~@Task.apply_async` and the :ref:`guide-calling` guide.
    """
    args = args if args else ()
    kwargs = kwargs if kwargs else {}
    # Extra options set to None are dismissed
    options = {k: v for k, v in options.items() if v is not None}
    try:
        _apply = self._apply_async
    except IndexError:  # pragma: no cover
        # no tasks for chain, etc to find type
        return
    # For callbacks: extra args are prepended to the stored args.
    if args or kwargs or options:
        args, kwargs, options = self._merge(args, kwargs, options)
    else:
        args, kwargs, options = self.args, self.kwargs, self.options
    # pylint: disable=too-many-function-args
    # Works on this, as it's a property
    return _apply(args, kwargs, **options)
Apply this task asynchronously. Arguments: args (Tuple): Partial args to be prepended to the existing args. kwargs (Dict): Partial kwargs to be merged with existing kwargs. options (Dict): Partial options to be merged with existing options. Returns: ~@AsyncResult: promise of future evaluation. See also: :meth:`~@Task.apply_async` and the :ref:`guide-calling` guide.
python
celery/canvas.py
369
[ "self", "args", "kwargs", "route_name" ]
false
7
6.96
celery/celery
27,741
google
false
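The merge semantics described in the docstring (partial args prepended, kwargs and options merged, `None`-valued options dropped) can be sketched standalone. `merge_call` is a hypothetical helper written for illustration, not Celery's `_merge`:

```python
def merge_call(stored_args, stored_kwargs, stored_options,
               args=None, kwargs=None, **options):
    """Merge partial call data into a stored signature, Celery-style."""
    args = tuple(args or ())
    kwargs = dict(kwargs or {})
    # Options explicitly set to None are dismissed
    options = {k: v for k, v in options.items() if v is not None}
    if args or kwargs or options:
        # New positional args are prepended; kwargs/options override stored values
        return (args + tuple(stored_args),
                {**stored_kwargs, **kwargs},
                {**stored_options, **options})
    return tuple(stored_args), dict(stored_kwargs), dict(stored_options)
```

Prepending is what lets a callback receive its parent's result as the first argument while keeping the arguments baked into the signature.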
string_column_to_ndarray
def string_column_to_ndarray(col: Column) -> tuple[np.ndarray, Any]:
    """
    Convert a column holding string data to a NumPy array.

    Parameters
    ----------
    col : Column

    Returns
    -------
    tuple
        Tuple of np.ndarray holding the data and the memory owner object
        that keeps the memory alive.
    """
    null_kind, sentinel_val = col.describe_null

    if null_kind not in (
        ColumnNullType.NON_NULLABLE,
        ColumnNullType.USE_BITMASK,
        ColumnNullType.USE_BYTEMASK,
    ):
        raise NotImplementedError(
            f"{null_kind} null kind is not yet supported for string columns."
        )

    buffers = col.get_buffers()
    assert buffers["offsets"], "String buffers must contain offsets"

    # Retrieve the data buffer containing the UTF-8 code units
    data_buff, _ = buffers["data"]
    # We're going to reinterpret the buffer as uint8, so make sure we can do it safely
    assert col.dtype[2] in (
        ArrowCTypes.STRING,
        ArrowCTypes.LARGE_STRING,
    )  # format_str == utf-8
    # Convert the buffers to NumPy arrays. In order to go from STRING to
    # an equivalent ndarray, we claim that the buffer is uint8 (i.e., a byte array)
    data_dtype = (
        DtypeKind.UINT,
        8,
        ArrowCTypes.UINT8,
        Endianness.NATIVE,
    )
    # Specify zero offset as we don't want to chunk the string data
    data = buffer_to_ndarray(data_buff, data_dtype, offset=0, length=data_buff.bufsize)

    # Retrieve the offsets buffer containing the index offsets demarcating
    # the beginning and the ending of each string
    offset_buff, offset_dtype = buffers["offsets"]
    # Offsets buffer contains start-stop positions of strings in the data buffer,
    # meaning that it has more elements than in the data buffer, do `col.size() + 1`
    # here to pass a proper offsets buffer size
    offsets = buffer_to_ndarray(
        offset_buff, offset_dtype, offset=col.offset, length=col.size() + 1
    )

    null_pos = None
    if null_kind in (ColumnNullType.USE_BITMASK, ColumnNullType.USE_BYTEMASK):
        validity = buffers["validity"]
        if validity is not None:
            valid_buff, valid_dtype = validity
            null_pos = buffer_to_ndarray(
                valid_buff, valid_dtype, offset=col.offset, length=col.size()
            )
            if sentinel_val == 0:
                null_pos = ~null_pos

    # Assemble the strings from the code units
    str_list: list[None | float | str] = [None] * col.size()
    for i in range(col.size()):
        # Check for missing values
        if null_pos is not None and null_pos[i]:
            str_list[i] = np.nan
            continue

        # Extract a range of code units
        units = data[offsets[i] : offsets[i + 1]]

        # Convert the list of code units to bytes
        str_bytes = bytes(units)

        # Create the string
        string = str_bytes.decode(encoding="utf-8")

        # Add to our list of strings
        str_list[i] = string

    if using_string_dtype():
        res = pd.Series(str_list, dtype="str")
    else:
        res = np.asarray(str_list, dtype="object")  # type: ignore[assignment]
    return res, buffers  # type: ignore[return-value]
Convert a column holding string data to a NumPy array. Parameters ---------- col : Column Returns ------- tuple Tuple of np.ndarray holding the data and the memory owner object that keeps the memory alive.
python
pandas/core/interchange/from_dataframe.py
301
[ "col" ]
tuple[np.ndarray, Any]
true
10
6.8
pandas-dev/pandas
47,362
numpy
false
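The core of the string assembly (slicing a flat UTF-8 buffer by start/stop offsets) can be shown without the interchange machinery. This is a simplified sketch: `decode_string_column` is a name invented here, and it uses `None` rather than `np.nan` for missing entries:

```python
def decode_string_column(data: bytes, offsets: list[int], valid_mask=None):
    """Rebuild strings from a flat UTF-8 buffer plus n+1 start/stop offsets."""
    out = []
    for i in range(len(offsets) - 1):
        if valid_mask is not None and not valid_mask[i]:
            out.append(None)  # entry masked out by the validity buffer
            continue
        # offsets[i]..offsets[i+1] delimit the code units of the i-th string
        out.append(data[offsets[i]:offsets[i + 1]].decode("utf-8"))
    return out
```

This is why the offsets buffer has one more element than the column has rows: each string needs both a start and a stop position.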
_downsample
def _downsample(self, how, **kwargs):
    """
    Downsample the cython defined function.

    Parameters
    ----------
    how : string / cython mapped function
    **kwargs : kw args passed to how function
    """
    ax = self.ax

    # Excludes `on` column when provided
    obj = self._obj_with_exclusions

    if not len(ax):
        # reset to the new freq
        obj = obj.copy()
        obj.index = obj.index._with_freq(self.freq)
        assert obj.index.freq == self.freq, (obj.index.freq, self.freq)
        return obj

    # we are downsampling
    # we want to call the actual grouper method here
    result = obj.groupby(self._grouper).aggregate(how, **kwargs)
    return self._wrap_result(result)
Downsample the cython defined function. Parameters ---------- how : string / cython mapped function **kwargs : kw args passed to how function
python
pandas/core/resample.py
2,072
[ "self", "how" ]
false
2
6.4
pandas-dev/pandas
47,362
numpy
false
toHtml
private static String toHtml() {
    final StringBuilder b = new StringBuilder();
    b.append("<table class=\"data-table\"><tbody>\n");
    b.append("<tr>");
    b.append("<th>Error</th>\n");
    b.append("<th>Code</th>\n");
    b.append("<th>Retriable</th>\n");
    b.append("<th>Description</th>\n");
    b.append("</tr>\n");
    for (Errors error : Errors.values()) {
        b.append("<tr>");
        b.append("<td>");
        b.append(error.name());
        b.append("</td>");
        b.append("<td>");
        b.append(error.code());
        b.append("</td>");
        b.append("<td>");
        b.append(error.exception() != null && error.exception() instanceof RetriableException ? "True" : "False");
        b.append("</td>");
        b.append("<td>");
        b.append(error.exception() != null ? error.exception().getMessage() : "");
        b.append("</td>");
        b.append("</tr>\n");
    }
    b.append("</tbody></table>\n");
    return b.toString();
}
Check if a Throwable is a commonly wrapped exception type (e.g. `CompletionException`) and return the cause if so. This is useful to handle cases where exceptions may be raised from a future or a completion stage (as might be the case for requests sent to the controller in `ControllerApis`). @param t The Throwable to check @return The throwable itself or its cause if it is an instance of a commonly wrapped exception type
java
clients/src/main/java/org/apache/kafka/common/protocol/Errors.java
549
[]
String
true
4
8.08
apache/kafka
31,560
javadoc
false
_route_params
def _route_params(self, *, params, method, parent, caller):
    """Prepare the given metadata to be passed to the method.

    This is used when a router is used as a child object of another router.
    The parent router then passes all parameters understood by the child
    object to it and delegates their validation to the child.

    The output of this method can be used directly as the input to the
    corresponding method as **kwargs.

    Parameters
    ----------
    params : dict
        A dictionary of provided metadata.
    method : str
        The name of the method for which the metadata is requested and routed.
    parent : object
        Parent class object, that routes the metadata.
    caller : str
        Method from the parent class object, where the metadata is routed from.

    Returns
    -------
    params : Bunch
        A :class:`~sklearn.utils.Bunch` of {metadata: value} which can be given to
        the corresponding method.
    """
    res = Bunch()
    if self._self_request:
        res.update(
            self._self_request._route_params(
                params=params,
                method=method,
                parent=parent,
                caller=caller,
            )
        )

    param_names = self._get_param_names(
        method=method, return_alias=True, ignore_self_request=True
    )
    child_params = {
        key: value for key, value in params.items() if key in param_names
    }
    for key in set(res.keys()).intersection(child_params.keys()):
        # conflicts are okay if the passed objects are the same, but it's
        # an issue if they're different objects.
        if child_params[key] is not res[key]:
            raise ValueError(
                f"In {_routing_repr(self.owner)}, there is a conflict on {key}"
                " between what is requested for this estimator and what is"
                " requested by its children. You can resolve this conflict by"
                " using an alias for the child estimators' requested metadata."
            )

    res.update(child_params)
    return res
Prepare the given metadata to be passed to the method. This is used when a router is used as a child object of another router. The parent router then passes all parameters understood by the child object to it and delegates their validation to the child. The output of this method can be used directly as the input to the corresponding method as **kwargs. Parameters ---------- params : dict A dictionary of provided metadata. method : str The name of the method for which the metadata is requested and routed. parent : object Parent class object, that routes the metadata. caller : str Method from the parent class object, where the metadata is routed from. Returns ------- params : Bunch A :class:`~sklearn.utils.Bunch` of {metadata: value} which can be given to the corresponding method.
python
sklearn/utils/_metadata_requests.py
1,007
[ "self", "params", "method", "parent", "caller" ]
false
4
6
scikit-learn/scikit-learn
64,340
numpy
false
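The identity-based conflict check at the heart of `_route_params` can be isolated in a short sketch. `merge_routed_params` is a hypothetical helper, not scikit-learn's implementation:

```python
def merge_routed_params(own: dict, child: dict) -> dict:
    """Merge child-routed params into own, rejecting conflicts on distinct objects."""
    for key in own.keys() & child.keys():
        # The same object routed twice is fine; two different objects under
        # the same key is an unresolvable ambiguity.
        if child[key] is not own[key]:
            raise ValueError(f"conflict on {key!r}; use an alias to disambiguate")
    return {**own, **child}
```

Using `is` rather than `==` matters here: two equal-but-distinct arrays would still be a conflict, because the router cannot know which one the caller intended.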
walk
def walk(self, where: str = "/") -> Iterator[tuple[str, list[str], list[str]]]:
    """
    Walk the pytables group hierarchy for pandas objects.

    This generator will yield the group path, subgroups and pandas object
    names for each group.

    Any non-pandas PyTables objects that are not a group will be ignored.

    The `where` group itself is listed first (preorder), then each of its
    child groups (following an alphanumerical order) is also traversed,
    following the same procedure.

    Parameters
    ----------
    where : str, default "/"
        Group where to start walking.

    Yields
    ------
    path : str
        Full path to a group (without trailing '/').
    groups : list
        Names (strings) of the groups contained in `path`.
    leaves : list
        Names (strings) of the pandas objects contained in `path`.

    See Also
    --------
    HDFStore.info : Prints detailed information on the store.

    Examples
    --------
    >>> df1 = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
    >>> store = pd.HDFStore("store.h5", "w")  # doctest: +SKIP
    >>> store.put("data", df1, format="table")  # doctest: +SKIP
    >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=["A", "B"])
    >>> store.append("data", df2)  # doctest: +SKIP
    >>> store.close()  # doctest: +SKIP
    >>> for group in store.walk():  # doctest: +SKIP
    ...     print(group)  # doctest: +SKIP
    >>> store.close()  # doctest: +SKIP
    """
    _tables()
    self._check_if_open()
    assert self._handle is not None  # for mypy
    assert _table_mod is not None  # for mypy

    for g in self._handle.walk_groups(where):
        if getattr(g._v_attrs, "pandas_type", None) is not None:
            continue

        groups = []
        leaves = []
        for child in g._v_children.values():
            pandas_type = getattr(child._v_attrs, "pandas_type", None)
            if pandas_type is None:
                if isinstance(child, _table_mod.group.Group):
                    groups.append(child._v_name)
            else:
                leaves.append(child._v_name)

        yield (g._v_pathname.rstrip("/"), groups, leaves)
Walk the pytables group hierarchy for pandas objects. This generator will yield the group path, subgroups and pandas object names for each group. Any non-pandas PyTables objects that are not a group will be ignored. The `where` group itself is listed first (preorder), then each of its child groups (following an alphanumerical order) is also traversed, following the same procedure. Parameters ---------- where : str, default "/" Group where to start walking. Yields ------ path : str Full path to a group (without trailing '/'). groups : list Names (strings) of the groups contained in `path`. leaves : list Names (strings) of the pandas objects contained in `path`. See Also -------- HDFStore.info : Prints detailed information on the store. Examples -------- >>> df1 = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"]) >>> store = pd.HDFStore("store.h5", "w") # doctest: +SKIP >>> store.put("data", df1, format="table") # doctest: +SKIP >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=["A", "B"]) >>> store.append("data", df2) # doctest: +SKIP >>> store.close() # doctest: +SKIP >>> for group in store.walk(): # doctest: +SKIP ... print(group) # doctest: +SKIP >>> store.close() # doctest: +SKIP
python
pandas/io/pytables.py
1,598
[ "self", "where" ]
Iterator[tuple[str, list[str], list[str]]]
true
7
8.4
pandas-dev/pandas
47,362
numpy
false
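The preorder group/leaf traversal can be mimicked over plain nested dicts, with dict values standing in for PyTables groups and everything else standing in for leaves. This is an assumed analogue written for illustration, not the pandas code:

```python
def walk_tree(node: dict, path: str = ""):
    """Preorder walk: yield (path, group names, leaf names) per group."""
    groups = sorted(k for k, v in node.items() if isinstance(v, dict))
    leaves = sorted(k for k, v in node.items() if not isinstance(v, dict))
    yield path or "/", groups, leaves  # the current group is listed first
    for name in groups:
        # then each child group is traversed, following the same procedure
        yield from walk_tree(node[name], f"{path}/{name}")
```

The yield-before-recurse ordering is what makes the traversal preorder, matching the docstring's "the `where` group itself is listed first".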
compare
private static int compare(ByteBuffer buffer, DataBlock dataBlock, long pos, int len, CharSequence charSequence, CompareType compareType) throws IOException {
    if (charSequence.isEmpty()) {
        return 0;
    }
    boolean addSlash = compareType == CompareType.MATCHES_ADDING_SLASH && !endsWith(charSequence, '/');
    int charSequenceIndex = 0;
    int maxCharSequenceLength = (!addSlash) ? charSequence.length() : charSequence.length() + 1;
    int result = 0;
    byte[] bytes = buffer.array();
    int codePointSize = 1;
    while (len > 0) {
        int count = readInBuffer(dataBlock, pos, buffer, len, codePointSize);
        for (int byteIndex = 0; byteIndex < count;) {
            codePointSize = getCodePointSize(bytes, byteIndex);
            if (!hasEnoughBytes(byteIndex, codePointSize, count)) {
                break;
            }
            int codePoint = getCodePoint(bytes, byteIndex, codePointSize);
            if (codePoint <= 0xFFFF) {
                char ch = (char) (codePoint & 0xFFFF);
                if (charSequenceIndex >= maxCharSequenceLength || getChar(charSequence, charSequenceIndex++) != ch) {
                    return -1;
                }
            } else {
                char ch = Character.highSurrogate(codePoint);
                if (charSequenceIndex >= maxCharSequenceLength || getChar(charSequence, charSequenceIndex++) != ch) {
                    return -1;
                }
                ch = Character.lowSurrogate(codePoint);
                if (charSequenceIndex >= charSequence.length() || getChar(charSequence, charSequenceIndex++) != ch) {
                    return -1;
                }
            }
            byteIndex += codePointSize;
            pos += codePointSize;
            len -= codePointSize;
            result += codePointSize;
            codePointSize = 1;
            if (compareType == CompareType.STARTS_WITH && charSequenceIndex >= charSequence.length()) {
                return result;
            }
        }
    }
    return (charSequenceIndex >= charSequence.length()) ? result : -1;
}
Returns whether the bytes read from a {@link DataBlock} start with the given {@link CharSequence}. @param buffer the buffer to use or {@code null} @param dataBlock the source data block @param pos the position in the data block where the string starts @param len the number of bytes to read from the block @param charSequence the required starting chars @return {@code -1} if the data block does not start with the char sequence, or a positive number indicating the number of bytes that contain the starting chars
java
loader/spring-boot-loader/src/main/java/org/springframework/boot/loader/zip/ZipString.java
191
[ "buffer", "dataBlock", "pos", "len", "charSequence", "compareType" ]
true
17
6.56
spring-projects/spring-boot
79,428
javadoc
false
createMaybeNavigableKeySet
final Set<K> createMaybeNavigableKeySet() {
    if (map instanceof NavigableMap) {
        return new NavigableKeySet((NavigableMap<K, Collection<V>>) map);
    } else if (map instanceof SortedMap) {
        return new SortedKeySet((SortedMap<K, Collection<V>>) map);
    } else {
        return new KeySet(map);
    }
}
List decorator that stays in sync with the multimap values for a key and supports rapid random access.
java
android/guava/src/com/google/common/collect/AbstractMapBasedMultimap.java
921
[]
true
3
7.04
google/guava
51,352
javadoc
false
npmLink
function npmLink(text, title) {
  return (
    '<% if (name == "templateSettings" || !/^(?:methods|properties|seq)$/i.test(category)) {' +
    'print(' +
    '"[' + text + '](https://www.npmjs.com/package/lodash." + name.toLowerCase() + ' +
    '"' + (title == null ? '' : ' \\"' + title + '\\"') + ')"' +
    ');' +
    '} %>'
  );
}
Composes a npm link from `text` and optional `title`. @private @param {string} text The link text. @param {string} [title] The link title. @returns {string} Returns the composed npm link.
javascript
lib/main/build-doc.js
45
[ "text", "title" ]
false
2
6.24
lodash/lodash
61,490
jsdoc
false
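Setting the embedded lodash template markup aside, the link composition itself is simple. A hypothetical Python analogue (`npm_link` is a name invented here):

```python
def npm_link(text, name, title=None):
    """Compose a Markdown link to a lodash.<name> npm package."""
    url = f"https://www.npmjs.com/package/lodash.{name.lower()}"
    # An optional title goes inside the parentheses, quoted, after the URL
    suffix = f' "{title}"' if title is not None else ""
    return f"[{text}]({url}{suffix})"
```

The JavaScript original defers `name` to template-evaluation time, which is why it emits a `<% ... %>` snippet instead of the finished link.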
generateBitVectors
@SafeVarargs
public static <E extends Enum<E>> long[] generateBitVectors(final Class<E> enumClass, final E... values) {
    asEnum(enumClass);
    Validate.noNullElements(values);
    final EnumSet<E> condensed = EnumSet.noneOf(enumClass);
    Collections.addAll(condensed, values);
    final long[] result = new long[(enumClass.getEnumConstants().length - 1) / Long.SIZE + 1];
    for (final E value : condensed) {
        result[value.ordinal() / Long.SIZE] |= 1L << value.ordinal() % Long.SIZE;
    }
    ArrayUtils.reverse(result);
    return result;
}
Creates a bit vector representation of the given subset of an Enum using as many {@code long}s as needed. <p>This generates a value that is usable by {@link EnumUtils#processBitVectors}.</p> <p>Use this method if you have more than 64 values in your Enum.</p> @param enumClass the class of the enum we are working with, not {@code null}. @param values the values we want to convert, not {@code null}, neither containing {@code null}. @param <E> the type of the enumeration. @return a long[] whose values provide a binary representation of the given set of enum values with the least significant digits rightmost. @throws NullPointerException if {@code enumClass} or {@code values} is {@code null}. @throws IllegalArgumentException if {@code enumClass} is not an enum class, or if any {@code values} are {@code null}. @since 3.2
java
src/main/java/org/apache/commons/lang3/EnumUtils.java
148
[ "enumClass" ]
true
1
6.88
apache/commons-lang
2,896
javadoc
false
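The packing step (one bit per enum ordinal, 64 ordinals per word) can be sketched in Python. This is a simplified analogue: unlike the Java version, it does not reverse the word array, so the low word comes first here:

```python
def generate_bit_vectors(n_constants, ordinals):
    """Pack a set of enum ordinals into 64-bit words, low word first."""
    words = [0] * ((n_constants - 1) // 64 + 1)  # one word per 64 constants
    for o in ordinals:
        # ordinal // 64 picks the word, ordinal % 64 picks the bit within it
        words[o // 64] |= 1 << (o % 64)
    return words
```

With 70 constants two words are allocated; ordinal 65 lands in bit 1 of the second word.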
default_device
def default_device(self) -> L["cpu"]:
    """
    The default device used for new Dask arrays.

    For Dask, this always returns ``'cpu'``.

    See Also
    --------
    __array_namespace_info__.capabilities,
    __array_namespace_info__.default_dtypes,
    __array_namespace_info__.dtypes,
    __array_namespace_info__.devices

    Returns
    -------
    device : Device
        The default device used for new Dask arrays.

    Examples
    --------
    >>> info = xp.__array_namespace_info__()
    >>> info.default_device()
    'cpu'
    """
    return "cpu"
The default device used for new Dask arrays. For Dask, this always returns ``'cpu'``. See Also -------- __array_namespace_info__.capabilities, __array_namespace_info__.default_dtypes, __array_namespace_info__.dtypes, __array_namespace_info__.devices Returns ------- device : Device The default device used for new Dask arrays. Examples -------- >>> info = xp.__array_namespace_info__() >>> info.default_device() 'cpu'
python
sklearn/externals/array_api_compat/dask/array/_info.py
145
[ "self" ]
L["cpu"]
true
1
6.48
scikit-learn/scikit-learn
64,340
unknown
false
printRects
function printRects(rects: SuspenseNode['rects']): string {
  if (rects === null) {
    return ' rects={null}';
  } else {
    return ` rects={[${rects
      .map(
        rect =>
          `{x:${rect.x},y:${rect.y},width:${rect.width},height:${rect.height}}`,
      )
      .join(', ')}]}`;
  }
}
Copyright (c) Meta Platforms, Inc. and affiliates. This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree. @flow
javascript
packages/react-devtools-shared/src/devtools/utils.js
57
[]
false
3
6.24
facebook/react
241,750
jsdoc
false
findParentPackageJSON
function findParentPackageJSON(checkPath) {
  const enabledPermission = permission.isEnabled();
  const rootSeparatorIndex = StringPrototypeIndexOf(checkPath, path.sep);
  let separatorIndex;
  do {
    separatorIndex = StringPrototypeLastIndexOf(checkPath, path.sep);
    checkPath = StringPrototypeSlice(checkPath, 0, separatorIndex);

    if (enabledPermission && !permission.has('fs.read', checkPath + path.sep)) {
      return undefined;
    }

    if (StringPrototypeEndsWith(checkPath, path.sep + 'node_modules')) {
      return undefined;
    }

    const maybePackageJSONPath = checkPath + path.sep + 'package.json';
    const stat = internalFsBinding.internalModuleStat(checkPath + path.sep + 'package.json');
    const packageJSONExists = stat === 0;
    if (packageJSONExists) {
      return maybePackageJSONPath;
    }
  } while (separatorIndex > rootSeparatorIndex);

  return undefined;
}
Given a file path, walk the filesystem upwards until we find its closest parent `package.json` file, stopping when: 1. we find a `package.json` file; 2. we find a path that we do not have permission to read; 3. we find a containing `node_modules` directory; 4. or, we reach the filesystem root @returns {undefined | string}
javascript
lib/internal/modules/package_json_reader.js
142
[ "checkPath" ]
false
5
6.08
nodejs/node
114,839
jsdoc
false
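The upward walk can be sketched with `pathlib`. This is a simplified analogue of the Node.js internal, written for illustration: it skips the permission check and only implements the package.json lookup with the `node_modules` stop condition:

```python
from pathlib import Path

def find_parent_package_json(start):
    """Walk upward from `start` to the closest package.json, or None."""
    for parent in Path(start).resolve().parents:
        if parent.name == "node_modules":
            return None  # never resolve past a containing node_modules directory
        candidate = parent / "package.json"
        if candidate.is_file():
            return candidate
    return None  # reached the filesystem root without finding one
```

`Path.parents` already enumerates ancestors up to the root, which replaces the manual separator-index loop of the original.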
open
public static FileRecords open(File file, boolean mutable, boolean fileAlreadyExists, int initFileSize, boolean preallocate) throws IOException {
    FileChannel channel = openChannel(file, mutable, fileAlreadyExists, initFileSize, preallocate);
    int end = (!fileAlreadyExists && preallocate) ? 0 : Integer.MAX_VALUE;
    return new FileRecords(file, channel, end);
}
Get an iterator over the record batches in the file, starting at a specific position. This is similar to {@link #batches()} except that callers specify a particular position to start reading the batches from. This method must be used with caution: the start position passed in must be a known start of a batch. @param start The position to start record iteration from; must be a known position for start of a batch @return An iterator over batches starting from {@code start}
java
clients/src/main/java/org/apache/kafka/common/record/FileRecords.java
444
[ "file", "mutable", "fileAlreadyExists", "initFileSize", "preallocate" ]
FileRecords
true
3
8.24
apache/kafka
31,560
javadoc
false
hierarchy
public static Iterable<Class<?>> hierarchy(final Class<?> type) { return hierarchy(type, Interfaces.EXCLUDE); }
Gets an {@link Iterable} that can iterate over a class hierarchy in ascending (subclass to superclass) order, excluding interfaces. @param type the type to get the class hierarchy from. @return Iterable an Iterable over the class hierarchy of the given class. @since 3.2
java
src/main/java/org/apache/commons/lang3/ClassUtils.java
1,171
[ "type" ]
true
1
6.96
apache/commons-lang
2,896
javadoc
false
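The ascending (subclass to superclass) walk that `ClassUtils.hierarchy` performs has a direct Python analogue via `__base__`, which plays the role of Java's `Class.getSuperclass()`. This is only a sketch of the idea; it follows single inheritance and does not model Java interfaces.

```python
def hierarchy(cls):
    """Yield cls and its superclasses in ascending (subclass-first) order."""
    current = cls
    while current is not None:
        yield current
        # single-inheritance step, analogous to Class.getSuperclass()
        current = current.__base__

class A: pass
class B(A): pass
class C(B): pass

chain = list(hierarchy(C))  # C, B, A, object
```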
check_async_run_results
def check_async_run_results( results: list[ApplyResult], success_message: str, outputs: list[Output], include_success_outputs: bool, poll_time_seconds: float = 0.2, skip_cleanup: bool = False, summarize_on_ci: SummarizeAfter = SummarizeAfter.NO_SUMMARY, summary_start_regexp: str | None = None, terminated_on_timeout: bool = False, ): """ Check if all async results were success. Exits with error if: * exit code 1: some tasks failed * exit code 2: some tasks were terminated on timeout :param results: results of parallel runs (expected in the form of Tuple: (return_code, info) :param outputs: outputs where results are written to :param success_message: Success string printed when everything is OK :param include_success_outputs: include outputs of successful parallel runs :param poll_time_seconds: what's the poll time between checks :param skip_cleanup: whether to skip cleanup of temporary files. :param summarize_on_ci: determines when to summarize the parallel jobs when they are completed in CI, outside the folded CI output :param summary_start_regexp: the regexp that determines line after which outputs should be printed as summary, so that you do not have to look at the folded details of the run in CI :param terminated_on_timeout: whether the run was terminated on timeout """ if terminated_on_timeout: print_outputs_on_timeout(outputs, results, include_success_outputs) sys.exit(2) completed_list = wait_for_all_tasks_completed(poll_time_seconds, results) print_async_result_status(completed_list) print_logs_on_completion(include_success_outputs, outputs, results) summarize_results_outside_of_folded_logs(outputs, results, summarize_on_ci, summary_start_regexp) if finalize_async_tasks(outputs, results, skip_cleanup, success_message): sys.exit(1)
Check if all async results were success. Exits with error if: * exit code 1: some tasks failed * exit code 2: some tasks were terminated on timeout :param results: results of parallel runs (expected in the form of Tuple: (return_code, info) :param outputs: outputs where results are written to :param success_message: Success string printed when everything is OK :param include_success_outputs: include outputs of successful parallel runs :param poll_time_seconds: what's the poll time between checks :param skip_cleanup: whether to skip cleanup of temporary files. :param summarize_on_ci: determines when to summarize the parallel jobs when they are completed in CI, outside the folded CI output :param summary_start_regexp: the regexp that determines line after which outputs should be printed as summary, so that you do not have to look at the folded details of the run in CI :param terminated_on_timeout: whether the run was terminated on timeout
python
dev/breeze/src/airflow_breeze/utils/parallel.py
375
[ "results", "success_message", "outputs", "include_success_outputs", "poll_time_seconds", "skip_cleanup", "summarize_on_ci", "summary_start_regexp", "terminated_on_timeout" ]
true
3
6.4
apache/airflow
43,597
sphinx
false
predict
def predict(self, X): """Predict the class labels for the provided data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_queries, n_features), \ or (n_queries, n_indexed) if metric == 'precomputed', or None Test samples. If `None`, predictions for all indexed points are returned; in this case, points are not considered their own neighbors. Returns ------- y : ndarray of shape (n_queries,) or (n_queries, n_outputs) Class labels for each data sample. """ check_is_fitted(self, "_fit_method") if self.weights == "uniform": if self._fit_method == "brute" and ArgKminClassMode.is_usable_for( X, self._fit_X, self.metric ): probabilities = self.predict_proba(X) if self.outputs_2d_: return np.stack( [ self.classes_[idx][np.argmax(probas, axis=1)] for idx, probas in enumerate(probabilities) ], axis=1, ) return self.classes_[np.argmax(probabilities, axis=1)] # In that case, we do not need the distances to perform # the weighting so we do not compute them. neigh_ind = self.kneighbors(X, return_distance=False) neigh_dist = None else: neigh_dist, neigh_ind = self.kneighbors(X) classes_ = self.classes_ _y = self._y if not self.outputs_2d_: _y = self._y.reshape((-1, 1)) classes_ = [self.classes_] n_outputs = len(classes_) n_queries = _num_samples(self._fit_X if X is None else X) weights = _get_weights(neigh_dist, self.weights) if weights is not None and _all_with_any_reduction_axis_1(weights, value=0): raise ValueError( "All neighbors of some sample is getting zero weights. " "Please modify 'weights' to avoid this case if you are " "using a user-defined function." ) y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype) for k, classes_k in enumerate(classes_): if weights is None: mode, _ = _mode(_y[neigh_ind, k], axis=1) else: mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1) mode = np.asarray(mode.ravel(), dtype=np.intp) y_pred[:, k] = classes_k.take(mode) if not self.outputs_2d_: y_pred = y_pred.ravel() return y_pred
Predict the class labels for the provided data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_queries, n_features), \ or (n_queries, n_indexed) if metric == 'precomputed', or None Test samples. If `None`, predictions for all indexed points are returned; in this case, points are not considered their own neighbors. Returns ------- y : ndarray of shape (n_queries,) or (n_queries, n_outputs) Class labels for each data sample.
python
sklearn/neighbors/_classification.py
245
[ "self", "X" ]
false
14
6
scikit-learn/scikit-learn
64,340
numpy
false
getClass
public static Class<?> getClass(final String className, final boolean initialize) throws ClassNotFoundException { final ClassLoader contextCL = Thread.currentThread().getContextClassLoader(); final ClassLoader loader = contextCL == null ? ClassUtils.class.getClassLoader() : contextCL; return getClass(loader, className, initialize); }
Gets the class represented by {@code className} using the current thread's context class loader. This implementation supports the syntaxes "{@code java.util.Map.Entry[]}", "{@code java.util.Map$Entry[]}", "{@code [Ljava.util.Map.Entry;}", and "{@code [Ljava.util.Map$Entry;}". <p> The provided class name is normalized by removing all whitespace. This is especially helpful when handling XML element values in which whitespace has not been collapsed. </p> @param className the class name. @param initialize whether the class must be initialized. @return the class represented by {@code className} using the current thread's context class loader. @throws NullPointerException if the className is null. @throws ClassNotFoundException if the class is not found. @throws IllegalArgumentException Thrown if the class name represents an array with more dimensions than the JVM supports, 255. @throws IllegalArgumentException Thrown if the class name length is greater than 65,535. @see Class#forName(String, boolean, ClassLoader) @see <a href="https://docs.oracle.com/javase/specs/jvms/se25/html/jvms-4.html#jvms-4.4.1">JVM: Array dimension limits in JVM Specification CONSTANT_Class_info</a> @see <a href="https://docs.oracle.com/javase/specs/jls/se25/html/jls-6.html#jls-6.7">JLS: Fully Qualified Names and Canonical Names</a> @see <a href="https://docs.oracle.com/javase/specs/jls/se25/html/jls-13.html#jls-13.1">JLS: The Form of a Binary</a>
java
src/main/java/org/apache/commons/lang3/ClassUtils.java
648
[ "className", "initialize" ]
true
2
7.44
apache/commons-lang
2,896
javadoc
false
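Resolving a class from its fully qualified name, as `ClassUtils.getClass` does, can be approximated in Python with `importlib`: try module/attribute splits from the longest module prefix down, which loosely mirrors how the Java version falls back from dotted names to inner-class (`$`) lookups. This is a hedged sketch; it supports neither Java's array syntax nor its initialization flag.

```python
import importlib

def get_class(name):
    """Resolve a dotted name like 'pkg.mod.Cls' to the object it names.

    Tries module/attribute splits from right to left: for 'a.b.c' it
    first imports 'a.b' and looks up 'c', then imports 'a' and walks
    'b', 'c' as attributes.
    """
    parts = name.split(".")
    for i in range(len(parts) - 1, 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue
        try:
            for attr in parts[i:]:
                obj = getattr(obj, attr)
        except AttributeError:
            continue
        return obj
    raise ImportError(f"cannot resolve {name!r}")

ordered_dict = get_class("collections.OrderedDict")
```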
toString
@Override public String toString() { // enclose IPv6 hosts in square brackets for readability String hostString = host.contains(":") ? "[" + host + "]" : host; return listener + "://" + hostString + ":" + port; }
Returns this endpoint as a string of the form {@code listener://host:port}, enclosing IPv6 hosts in square brackets for readability. @return the string representation of this endpoint
java
clients/src/main/java/org/apache/kafka/clients/admin/RaftVoterEndpoint.java
103
[]
String
true
2
7.2
apache/kafka
31,560
javadoc
false
between
public boolean between(final A b, final A c) { return betweenOrdered(b, c) || betweenOrdered(c, b); }
Tests if {@code [b <= a <= c]} or {@code [b >= a >= c]} where the {@code a} is object passed to {@link #is}. @param b the object to compare to the base object @param c the object to compare to the base object @return true if the base object is between b and c
java
src/main/java/org/apache/commons/lang3/compare/ComparableUtils.java
54
[ "b", "c" ]
true
2
8.16
apache/commons-lang
2,896
javadoc
false
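The order-insensitive check in `between` above, trying both `[b <= a <= c]` and `[b >= a >= c]`, is one chained comparison per direction in Python:

```python
def between(a, b, c):
    """True if a lies between b and c inclusive, in either order."""
    return b <= a <= c or c <= a <= b

checks = [
    between(5, 1, 10),   # ascending bounds
    between(5, 10, 1),   # descending bounds
    between(5, 5, 5),    # inclusive endpoints
    between(0, 1, 10),   # outside the range
]
```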
remove
function remove(array, predicate) { var result = []; if (!(array && array.length)) { return result; } var index = -1, indexes = [], length = array.length; predicate = getIteratee(predicate, 3); while (++index < length) { var value = array[index]; if (predicate(value, index, array)) { result.push(value); indexes.push(index); } } basePullAt(array, indexes); return result; }
Removes all elements from `array` that `predicate` returns truthy for and returns an array of the removed elements. The predicate is invoked with three arguments: (value, index, array). **Note:** Unlike `_.filter`, this method mutates `array`. Use `_.pull` to pull elements from an array by value. @static @memberOf _ @since 2.0.0 @category Array @param {Array} array The array to modify. @param {Function} [predicate=_.identity] The function invoked per iteration. @returns {Array} Returns the new array of removed elements. @example var array = [1, 2, 3, 4]; var evens = _.remove(array, function(n) { return n % 2 == 0; }); console.log(array); // => [1, 3] console.log(evens); // => [2, 4]
javascript
lodash.js
7,950
[ "array", "predicate" ]
false
5
7.52
lodash/lodash
61,490
jsdoc
false
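The key property of lodash's `_.remove`, shown above, is that it mutates the array in place and returns the removed elements. A minimal Python sketch of the same contract (two passes over the list, so the predicate is called twice per element; the real lodash version does one pass):

```python
def remove(array, predicate):
    """Remove items matching predicate from `array` in place and
    return the removed items (mirrors _.remove's mutation)."""
    removed = [value for value in array if predicate(value)]
    # slice assignment mutates the caller's list rather than rebinding
    array[:] = [value for value in array if not predicate(value)]
    return removed

nums = [1, 2, 3, 4]
evens = remove(nums, lambda n: n % 2 == 0)
```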
unknownTaggedFields
List<RawTaggedField> unknownTaggedFields();
Returns a list of tagged fields which this software can't understand. @return The raw tagged fields.
java
clients/src/main/java/org/apache/kafka/common/protocol/Message.java
96
[]
true
1
6.32
apache/kafka
31,560
javadoc
false
value
public ByteBuffer value() { return Utils.sizeDelimited(buffer, valueSizeOffset()); }
A ByteBuffer containing the value of this record @return the value or null if the value for this record is null
java
clients/src/main/java/org/apache/kafka/common/record/LegacyRecord.java
250
[]
ByteBuffer
true
1
6.16
apache/kafka
31,560
javadoc
false
compilePattern
static CommonPattern compilePattern(String pattern) { Preconditions.checkNotNull(pattern); return patternCompiler.compile(pattern); }
Compiles the given regular expression pattern using the platform's pattern compiler. @param pattern the pattern to compile, not null @return the compiled pattern
java
android/guava/src/com/google/common/base/Platform.java
89
[ "pattern" ]
CommonPattern
true
1
6.96
google/guava
51,352
javadoc
false
_gotitem
def _gotitem( self, key: IndexLabel, ndim: int, subset: DataFrame | Series | None = None, ) -> DataFrame | Series: """ Sub-classes to define. Return a sliced object. Parameters ---------- key : string / list of selections ndim : {1, 2} requested ndim of result subset : object, default None subset to act on """ if subset is None: subset = self elif subset.ndim == 1: # is Series return subset # TODO: _shallow_copy(subset)? return subset[key]
Sub-classes to define. Return a sliced object. Parameters ---------- key : string / list of selections ndim : {1, 2} requested ndim of result subset : object, default None subset to act on
python
pandas/core/frame.py
11,333
[ "self", "key", "ndim", "subset" ]
DataFrame | Series
true
3
6.88
pandas-dev/pandas
47,362
numpy
false
toString
@Override public String toString() { return name; }
The string value is overridden to return the standard name. <p> For example, {@code "1.5"}. </p> @return the name, not null.
java
src/main/java/org/apache/commons/lang3/JavaVersion.java
413
[]
String
true
1
6.96
apache/commons-lang
2,896
javadoc
false
wrapperPlant
function wrapperPlant(value) { var result, parent = this; while (parent instanceof baseLodash) { var clone = wrapperClone(parent); clone.__index__ = 0; clone.__values__ = undefined; if (result) { previous.__wrapped__ = clone; } else { result = clone; } var previous = clone; parent = parent.__wrapped__; } previous.__wrapped__ = value; return result; }
Creates a clone of the chain sequence planting `value` as the wrapped value. @name plant @memberOf _ @since 3.2.0 @category Seq @param {*} value The value to plant. @returns {Object} Returns the new `lodash` wrapper instance. @example function square(n) { return n * n; } var wrapped = _([1, 2]).map(square); var other = wrapped.plant([3, 4]); other.value(); // => [9, 16] wrapped.value(); // => [1, 4]
javascript
lodash.js
9,080
[ "value" ]
false
4
7.68
lodash/lodash
61,490
jsdoc
false
compareTo
@Override public int compareTo(final Fraction other) { if (this == other) { return 0; } if (numerator == other.numerator && denominator == other.denominator) { return 0; } // otherwise see which is less final long first = (long) numerator * (long) other.denominator; final long second = (long) other.numerator * (long) denominator; return Long.compare(first, second); }
Compares this object to another based on size. <p> Note: this class has a natural ordering that is inconsistent with equals, because, for example, equals treats 1/2 and 2/4 as different, whereas compareTo treats them as equal. </p> @param other the object to compare to @return -1 if this is less, 0 if equal, +1 if greater @throws ClassCastException if the object is not a {@link Fraction} @throws NullPointerException if the object is {@code null}
java
src/main/java/org/apache/commons/lang3/math/Fraction.java
588
[ "other" ]
true
4
7.92
apache/commons-lang
2,896
javadoc
false
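`Fraction.compareTo` widens to `long` so the cross-products `numerator * other.denominator` cannot overflow `int`. Python integers are arbitrary precision, so the same cross-multiplication is exact without any widening; a sketch (denominators assumed positive, as in Fraction's normalized form):

```python
def compare_fractions(n1, d1, n2, d2):
    """Compare n1/d1 with n2/d2 by cross-multiplication.

    Denominators are assumed positive. Returns -1, 0, or 1, treating
    equal values (e.g. 1/2 and 2/4) as equal, like compareTo but
    unlike Fraction.equals.
    """
    first = n1 * d2
    second = n2 * d1
    return (first > second) - (first < second)

results = [
    compare_fractions(1, 2, 2, 4),   # 1/2 vs 2/4: numerically equal
    compare_fractions(1, 3, 1, 2),   # 1/3 < 1/2
    compare_fractions(3, 4, 2, 3),   # 3/4 > 2/3
]
```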
corrwith
def corrwith( self, other: DataFrame | Series, axis: Axis = 0, drop: bool = False, method: CorrelationMethod = "pearson", numeric_only: bool = False, min_periods: int | None = None, ) -> Series: """ Compute pairwise correlation. Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations. Parameters ---------- other : DataFrame, Series Object with which to compute correlations. axis : {0 or 'index', 1 or 'columns'}, default 0 The axis to use. 0 or 'index' to compute row-wise, 1 or 'columns' for column-wise. drop : bool, default False Drop missing indices from result. method : {'pearson', 'kendall', 'spearman'} or callable Method of correlation: * pearson : standard correlation coefficient * kendall : Kendall Tau correlation coefficient * spearman : Spearman rank correlation * callable: callable with input two 1d ndarrays and returning a float. numeric_only : bool, default False Include only `float`, `int` or `boolean` data. min_periods : int, optional Minimum number of observations needed to have a valid result. .. versionchanged:: 2.0.0 The default value of ``numeric_only`` is now ``False``. Returns ------- Series Pairwise correlations. See Also -------- DataFrame.corr : Compute pairwise correlation of columns. Examples -------- >>> index = ["a", "b", "c", "d", "e"] >>> columns = ["one", "two", "three", "four"] >>> df1 = pd.DataFrame( ... np.arange(20).reshape(5, 4), index=index, columns=columns ... ) >>> df2 = pd.DataFrame( ... np.arange(16).reshape(4, 4), index=index[:4], columns=columns ... 
) >>> df1.corrwith(df2) one 1.0 two 1.0 three 1.0 four 1.0 dtype: float64 >>> df2.corrwith(df1, axis=1) a 1.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64 """ axis = self._get_axis_number(axis) this = self._get_numeric_data() if numeric_only else self if isinstance(other, Series): return this.apply( lambda x: other.corr(x, method=method, min_periods=min_periods), axis=axis, ) if numeric_only: other = other._get_numeric_data() left, right = this.align(other, join="inner") if axis == 1: left = left.T right = right.T if method == "pearson": # mask missing values left = left + right * 0 right = right + left * 0 # demeaned data ldem = left - left.mean(numeric_only=numeric_only) rdem = right - right.mean(numeric_only=numeric_only) num = (ldem * rdem).sum() dom = ( (left.count() - 1) * left.std(numeric_only=numeric_only) * right.std(numeric_only=numeric_only) ) correl = num / dom elif method in ["kendall", "spearman"] or callable(method): def c(x): return nanops.nancorr(x[0], x[1], method=method) correl = self._constructor_sliced( map(c, zip(left.values.T, right.values.T, strict=True)), index=left.columns, copy=False, ) else: raise ValueError( f"Invalid method {method} was passed, " "valid methods are: 'pearson', 'kendall', " "'spearman', or callable" ) if not drop: # Find non-matching labels along the given axis # and append missing correlations (GH 22375) raxis: AxisInt = 1 if axis == 0 else 0 result_index = this._get_axis(raxis).union(other._get_axis(raxis)) idx_diff = result_index.difference(correl.index) if len(idx_diff) > 0: correl = correl._append_internal( Series([np.nan] * len(idx_diff), index=idx_diff) ) return correl
Compute pairwise correlation. Pairwise correlation is computed between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations. Parameters ---------- other : DataFrame, Series Object with which to compute correlations. axis : {0 or 'index', 1 or 'columns'}, default 0 The axis to use. 0 or 'index' to compute row-wise, 1 or 'columns' for column-wise. drop : bool, default False Drop missing indices from result. method : {'pearson', 'kendall', 'spearman'} or callable Method of correlation: * pearson : standard correlation coefficient * kendall : Kendall Tau correlation coefficient * spearman : Spearman rank correlation * callable: callable with input two 1d ndarrays and returning a float. numeric_only : bool, default False Include only `float`, `int` or `boolean` data. min_periods : int, optional Minimum number of observations needed to have a valid result. .. versionchanged:: 2.0.0 The default value of ``numeric_only`` is now ``False``. Returns ------- Series Pairwise correlations. See Also -------- DataFrame.corr : Compute pairwise correlation of columns. Examples -------- >>> index = ["a", "b", "c", "d", "e"] >>> columns = ["one", "two", "three", "four"] >>> df1 = pd.DataFrame( ... np.arange(20).reshape(5, 4), index=index, columns=columns ... ) >>> df2 = pd.DataFrame( ... np.arange(16).reshape(4, 4), index=index[:4], columns=columns ... ) >>> df1.corrwith(df2) one 1.0 two 1.0 three 1.0 four 1.0 dtype: float64 >>> df2.corrwith(df1, axis=1) a 1.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64
python
pandas/core/frame.py
12,555
[ "self", "other", "axis", "drop", "method", "numeric_only", "min_periods" ]
Series
true
12
8.24
pandas-dev/pandas
47,362
numpy
false
flags
def flags(self) -> Flags: """ Get the properties associated with this pandas object. The available flags are * :attr:`Flags.allows_duplicate_labels` See Also -------- Flags : Flags that apply to pandas objects. DataFrame.attrs : Global metadata applying to this dataset. Notes ----- "Flags" differ from "metadata". Flags reflect properties of the pandas object (the Series or DataFrame). Metadata refer to properties of the dataset, and should be stored in :attr:`DataFrame.attrs`. Examples -------- >>> df = pd.DataFrame({"A": [1, 2]}) >>> df.flags <Flags(allows_duplicate_labels=True)> Flags can be get or set using ``.`` >>> df.flags.allows_duplicate_labels True >>> df.flags.allows_duplicate_labels = False Or by slicing with a key >>> df.flags["allows_duplicate_labels"] False >>> df.flags["allows_duplicate_labels"] = True """ return self._flags
Get the properties associated with this pandas object. The available flags are * :attr:`Flags.allows_duplicate_labels` See Also -------- Flags : Flags that apply to pandas objects. DataFrame.attrs : Global metadata applying to this dataset. Notes ----- "Flags" differ from "metadata". Flags reflect properties of the pandas object (the Series or DataFrame). Metadata refer to properties of the dataset, and should be stored in :attr:`DataFrame.attrs`. Examples -------- >>> df = pd.DataFrame({"A": [1, 2]}) >>> df.flags <Flags(allows_duplicate_labels=True)> Flags can be get or set using ``.`` >>> df.flags.allows_duplicate_labels True >>> df.flags.allows_duplicate_labels = False Or by slicing with a key >>> df.flags["allows_duplicate_labels"] False >>> df.flags["allows_duplicate_labels"] = True
python
pandas/core/generic.py
363
[ "self" ]
Flags
true
1
6.8
pandas-dev/pandas
47,362
unknown
false
get_rocm_target_archs
def get_rocm_target_archs() -> list[str]: """ Get target architectures from environment or config. Returns: List of architecture strings (e.g., ['gfx90a', 'gfx942']) """ # Check PYTORCH_ROCM_ARCH environment variable env_archs = os.environ.get("PYTORCH_ROCM_ARCH", "").strip() if env_archs: archs = [arch.strip() for arch in env_archs.replace(";", ",").split(",")] archs = [arch for arch in archs if arch] if archs: return archs # Try to get from inductor config try: from torch._inductor import config if hasattr(config, "rocm") and hasattr(config.rocm, "target_archs"): archs = config.rocm.target_archs if archs: return archs except Exception: pass return torch.cuda.get_arch_list()
Get target architectures from environment or config. Returns: List of architecture strings (e.g., ['gfx90a', 'gfx942'])
python
torch/_inductor/rocm_multiarch_utils.py
71
[]
list[str]
true
6
6.88
pytorch/pytorch
96,034
unknown
false
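The environment-variable branch of `get_rocm_target_archs` normalizes both `;` and `,` as separators, strips whitespace, and drops empty entries before falling back. That parsing step in isolation (the helper name and the fallback list here are illustrative):

```python
def parse_arch_list(env_value, default):
    """Parse a comma- or semicolon-separated arch list, as done for
    PYTORCH_ROCM_ARCH; returns `default` when nothing usable remains."""
    archs = [a.strip() for a in env_value.replace(";", ",").split(",")]
    archs = [a for a in archs if a]
    return archs or list(default)

parsed = parse_arch_list(" gfx90a; gfx942 ,", ["gfx1100"])
fallback = parse_arch_list("  ", ["gfx1100"])
```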
toByteArray
static byte[] toByteArray(InputStream in, long expectedSize) throws IOException { checkArgument(expectedSize >= 0, "expectedSize (%s) must be non-negative", expectedSize); if (expectedSize > MAX_ARRAY_LEN) { throw new OutOfMemoryError(expectedSize + " bytes is too large to fit in a byte array"); } byte[] bytes = new byte[(int) expectedSize]; int remaining = (int) expectedSize; while (remaining > 0) { int off = (int) expectedSize - remaining; int read = in.read(bytes, off, remaining); if (read == -1) { // end of stream before reading expectedSize bytes // just return the bytes read so far return Arrays.copyOf(bytes, off); } remaining -= read; } // bytes is now full int b = in.read(); if (b == -1) { return bytes; } // the stream was longer, so read the rest normally Queue<byte[]> bufs = new ArrayDeque<>(TO_BYTE_ARRAY_DEQUE_SIZE + 2); bufs.add(bytes); bufs.add(new byte[] {(byte) b}); return toByteArrayInternal(in, bufs, bytes.length + 1); }
Reads all bytes from an input stream into a byte array. The given expected size is used to create an initial byte array, but if the actual number of bytes read from the stream differs, the correct result will be returned anyway.
java
android/guava/src/com/google/common/io/ByteStreams.java
250
[ "in", "expectedSize" ]
true
5
6
google/guava
51,352
javadoc
false
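The strategy in `ByteStreams.toByteArray` above is: size the first read by the hint, return early if the stream ends there, and keep reading if the stream turns out longer. A simplified Python sketch of that strategy against in-memory streams (it omits the chunk deque and the short-read loop a real implementation needs for arbitrary streams):

```python
import io

def to_byte_array(stream, expected_size):
    """Read a stream into bytes using a size hint: if the stream is
    shorter, return what was read; if longer, read the remainder."""
    data = stream.read(expected_size)
    extra = stream.read()  # empty when the hint was exact or too high
    return data + extra

exact = to_byte_array(io.BytesIO(b"hello"), 5)         # hint correct
short_hint = to_byte_array(io.BytesIO(b"hello world"), 5)  # stream longer
long_hint = to_byte_array(io.BytesIO(b"hi"), 100)      # stream shorter
```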
_finalize_columns_and_data
def _finalize_columns_and_data( content: np.ndarray, # ndim == 2 columns: Index | None, dtype: DtypeObj | None, ) -> tuple[list[ArrayLike], Index]: """ Ensure we have valid columns, cast object dtypes if possible. """ contents = list(content.T) try: columns = _validate_or_indexify_columns(contents, columns) except AssertionError as err: # GH#26429 do not raise user-facing AssertionError raise ValueError(err) from err if contents and contents[0].dtype == np.object_: contents = convert_object_array(contents, dtype=dtype) return contents, columns
Ensure we have valid columns, cast object dtypes if possible.
python
pandas/core/internals/construction.py
872
[ "content", "columns", "dtype" ]
tuple[list[ArrayLike], Index]
true
3
6
pandas-dev/pandas
47,362
unknown
false
merge
static ReleasableExponentialHistogram merge( int maxBucketCount, ExponentialHistogramCircuitBreaker breaker, ExponentialHistogram... histograms ) { return merge(maxBucketCount, breaker, List.of(histograms).iterator()); }
Merges the provided exponential histograms to a new, single histogram with at most the given amount of buckets. @param maxBucketCount the maximum number of buckets the result histogram is allowed to have @param breaker the circuit breaker to use to limit memory allocations @param histograms the histograms to merge @return the merged histogram
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogram.java
291
[ "maxBucketCount", "breaker", "histograms" ]
ReleasableExponentialHistogram
true
1
6.4
elastic/elasticsearch
75,680
javadoc
false
equals
static boolean equals(ExponentialHistogram a, ExponentialHistogram b) { if (a == b) return true; if (a == null) return false; if (b == null) return false; return a.scale() == b.scale() && a.sum() == b.sum() && equalsIncludingNaN(a.min(), b.min()) && equalsIncludingNaN(a.max(), b.max()) && a.zeroBucket().equals(b.zeroBucket()) && bucketIteratorsEqual(a.negativeBuckets().iterator(), b.negativeBuckets().iterator()) && bucketIteratorsEqual(a.positiveBuckets().iterator(), b.positiveBuckets().iterator()); }
Value-based equality for exponential histograms. @param a the first histogram (can be null) @param b the second histogram (can be null) @return true, if both histograms are equal
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogram.java
169
[ "a", "b" ]
true
10
8.08
elastic/elasticsearch
75,680
javadoc
false
hermvander3d
def hermvander3d(x, y, z, deg): """Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points ``(x, y, z)``. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then The pseudo-Vandermonde matrix is defined by .. math:: V[..., (m+1)(n+1)i + (n+1)j + k] = H_i(x)*H_j(y)*H_k(z), where ``0 <= i <= l``, ``0 <= j <= m``, and ``0 <= j <= n``. The leading indices of `V` index the points ``(x, y, z)`` and the last index encodes the degrees of the Hermite polynomials. If ``V = hermvander3d(x, y, z, [xdeg, ydeg, zdeg])``, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order .. math:: c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},... and ``np.dot(V, c.flat)`` and ``hermval3d(x, y, z, c)`` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Hermite series of the same degrees and sample points. Parameters ---------- x, y, z : array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. deg : list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns ------- vander3d : ndarray The shape of the returned matrix is ``x.shape + (order,)``, where :math:`order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)`. The dtype will be the same as the converted `x`, `y`, and `z`. 
See Also -------- hermvander, hermvander3d, hermval2d, hermval3d Examples -------- >>> from numpy.polynomial.hermite import hermvander3d >>> x = np.array([-1, 0, 1]) >>> y = np.array([-1, 0, 1]) >>> z = np.array([-1, 0, 1]) >>> hermvander3d(x, y, z, [0, 1, 2]) array([[ 1., -2., 2., -2., 4., -4.], [ 1., 0., -2., 0., 0., -0.], [ 1., 2., 2., 2., 4., 4.]]) """ return pu._vander_nd_flat((hermvander, hermvander, hermvander), (x, y, z), deg)
Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points ``(x, y, z)``. If `l`, `m`, `n` are the given degrees in `x`, `y`, `z`, then the pseudo-Vandermonde matrix is defined by .. math:: V[..., (m+1)(n+1)i + (n+1)j + k] = H_i(x)*H_j(y)*H_k(z), where ``0 <= i <= l``, ``0 <= j <= m``, and ``0 <= k <= n``. The leading indices of `V` index the points ``(x, y, z)`` and the last index encodes the degrees of the Hermite polynomials. If ``V = hermvander3d(x, y, z, [xdeg, ydeg, zdeg])``, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order .. math:: c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},... and ``np.dot(V, c.flat)`` and ``hermval3d(x, y, z, c)`` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Hermite series of the same degrees and sample points. Parameters ---------- x, y, z : array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. deg : list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns ------- vander3d : ndarray The shape of the returned matrix is ``x.shape + (order,)``, where :math:`order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)`. The dtype will be the same as the converted `x`, `y`, and `z`. See Also -------- hermvander, hermvander3d, hermval2d, hermval3d Examples -------- >>> from numpy.polynomial.hermite import hermvander3d >>> x = np.array([-1, 0, 1]) >>> y = np.array([-1, 0, 1]) >>> z = np.array([-1, 0, 1]) >>> hermvander3d(x, y, z, [0, 1, 2]) array([[ 1., -2., 2., -2., 4., -4.], [ 1., 0., -2., 0., 0., -0.], [ 1., 2., 2., 2., 4., 4.]])
python
numpy/polynomial/hermite.py
1,244
[ "x", "y", "z", "deg" ]
false
1
6.48
numpy/numpy
31,054
numpy
false
nextBoolean
@Deprecated public static boolean nextBoolean() { return secure().randomBoolean(); }
Generates a random boolean value. @return the random boolean. @since 3.5 @deprecated Use {@link #secure()}, {@link #secureStrong()}, or {@link #insecure()}.
java
src/main/java/org/apache/commons/lang3/RandomUtils.java
113
[]
true
1
6.16
apache/commons-lang
2,896
javadoc
false
set_default_printstyle
def set_default_printstyle(style): """ Set the default format for the string representation of polynomials. Values for ``style`` must be valid inputs to ``__format__``, i.e. 'ascii' or 'unicode'. Parameters ---------- style : str Format string for default printing style. Must be either 'ascii' or 'unicode'. Notes ----- The default format depends on the platform: 'unicode' is used on Unix-based systems and 'ascii' on Windows. This determination is based on default font support for the unicode superscript and subscript ranges. Examples -------- >>> p = np.polynomial.Polynomial([1, 2, 3]) >>> c = np.polynomial.Chebyshev([1, 2, 3]) >>> np.polynomial.set_default_printstyle('unicode') >>> print(p) 1.0 + 2.0·x + 3.0·x² >>> print(c) 1.0 + 2.0·T₁(x) + 3.0·T₂(x) >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x + 3.0 x**2 >>> print(c) 1.0 + 2.0 T_1(x) + 3.0 T_2(x) >>> # Formatting supersedes all class/package-level defaults >>> print(f"{p:unicode}") 1.0 + 2.0·x + 3.0·x² """ if style not in ('unicode', 'ascii'): raise ValueError( f"Unsupported format string '{style}'. Valid options are 'ascii' " f"and 'unicode'" ) _use_unicode = True if style == 'ascii': _use_unicode = False from ._polybase import ABCPolyBase ABCPolyBase._use_unicode = _use_unicode
Set the default format for the string representation of polynomials. Values for ``style`` must be valid inputs to ``__format__``, i.e. 'ascii' or 'unicode'. Parameters ---------- style : str Format string for default printing style. Must be either 'ascii' or 'unicode'. Notes ----- The default format depends on the platform: 'unicode' is used on Unix-based systems and 'ascii' on Windows. This determination is based on default font support for the unicode superscript and subscript ranges. Examples -------- >>> p = np.polynomial.Polynomial([1, 2, 3]) >>> c = np.polynomial.Chebyshev([1, 2, 3]) >>> np.polynomial.set_default_printstyle('unicode') >>> print(p) 1.0 + 2.0·x + 3.0·x² >>> print(c) 1.0 + 2.0·T₁(x) + 3.0·T₂(x) >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x + 3.0 x**2 >>> print(c) 1.0 + 2.0 T_1(x) + 3.0 T_2(x) >>> # Formatting supersedes all class/package-level defaults >>> print(f"{p:unicode}") 1.0 + 2.0·x + 3.0·x²
python
numpy/polynomial/__init__.py
135
[ "style" ]
false
3
7.52
numpy/numpy
31,054
numpy
false
geoAzDistanceRads
LatLng geoAzDistanceRads(double az, double distance) { az = Vec2d.posAngleRads(az); // from https://www.movable-type.co.uk/scripts/latlong-vectors.html // N = {0,0,1} – vector representing north pole // d̂e = N×a – east vector at a // dn = a×de – north vector at a // d = dn·cosθ + de·sinθ – direction vector in dir’n of θ // b = a·cosδ + d·sinδ // east direction vector @ n1 (Gade's k_e_E) final double magnitude = magnitude(this.x, this.y, 0); final double deX = -this.y / magnitude; final double deY = this.x / magnitude; // north direction vector @ n1 (Gade's (k_n_E) final double dnX = -this.z * deY; final double dnY = this.z * deX; final double dnZ = this.x * deY - this.y * deX; final double sinAz = FastMath.sin(az); final double cosAz = FastMath.cos(az); final double sinDistance = FastMath.sin(distance); final double cosDistance = FastMath.cos(distance); // direction vector @ n1 (≡ C×n1; C = great circle) final double dX = dnX * cosAz + deX * sinAz; final double dY = dnY * cosAz + deY * sinAz; final double dZ = dnZ * cosAz; // Gade's n_EB_E = component of n2 parallel to n1 + component of n2 perpendicular to n1 final double n2X = this.x * cosDistance + dX * sinDistance; final double n2Y = this.y * cosDistance + dY * sinDistance; final double n2Z = this.z * cosDistance + dZ * sinDistance; return new LatLng(FastMath.asin(n2Z), FastMath.atan2(n2Y, n2X)); }
Computes the point on the sphere with a specified azimuth and distance from this point. @param az The desired azimuth. @param distance The desired distance. @return The LatLng point.
java
libs/h3/src/main/java/org/elasticsearch/h3/Vec3d.java
170
[ "az", "distance" ]
LatLng
true
1
7.04
elastic/elasticsearch
75,680
javadoc
false
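The vector math in the record above ports directly to pure Python. This is a sketch for illustration only; the function and variable names below are mine, not part of the Elasticsearch H3 code, and the return value is a plain (lat, lng) tuple rather than a LatLng object.

```python
import math

def geo_az_distance_rads(x, y, z, az, distance):
    """Destination point on the unit sphere from unit vector (x, y, z),
    given an azimuth and an angular distance (both in radians).
    Returns (lat, lng) in radians, mirroring Vec3d.geoAzDistanceRads."""
    # east direction vector at the start point
    magnitude = math.hypot(x, y)
    de_x = -y / magnitude
    de_y = x / magnitude
    # north direction vector at the start point
    dn_x = -z * de_y
    dn_y = z * de_x
    dn_z = x * de_y - y * de_x
    sin_az, cos_az = math.sin(az), math.cos(az)
    sin_d, cos_d = math.sin(distance), math.cos(distance)
    # direction vector in the direction of the azimuth
    d_x = dn_x * cos_az + de_x * sin_az
    d_y = dn_y * cos_az + de_y * sin_az
    d_z = dn_z * cos_az
    # destination = component parallel to start + component along direction
    n2_x = x * cos_d + d_x * sin_d
    n2_y = y * cos_d + d_y * sin_d
    n2_z = z * cos_d + d_z * sin_d
    return math.asin(n2_z), math.atan2(n2_y, n2_x)
```

Starting on the equator at (1, 0, 0) and heading due north (azimuth 0) for a quarter circle lands on the north pole, latitude π/2.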
_can_add_to_bucket
def _can_add_to_bucket( self, bucket_info: CollBucket, candidate: fx.Node, ) -> bool: """ Check if candidate can be added to bucket without breaking comm/compute overlap. Strategy: Try all timeline positions - combinations of [existing_start, candidate_start] x [existing_wait, candidate_wait]. For each position, verify: 1. Hiding intervals preserved - for any (start, hiding_compute, wait) interval, no other collective's (start, wait) pair falls between start and hiding_compute, which would force realization and break overlap due to LIFO semantics 2. Topologically valid (no dependency cycles) Return True if any timeline position satisfies both constraints. """ existing_coll = bucket_info.collectives[0] why = WhyNoBucket(existing_coll, candidate) candidate_info = self.collective_info[candidate] if ( candidate in self.all_hiding_nodes or candidate_info.wait_node in self.all_hiding_nodes ): why("nyi: bucketing collective used for overlap") return False # Step 1: Quick check using precomputed ancestors # These ancestors are computed prior to adding augmented dependencies and not updated, # so if any of these checks fail then the merge will not be topologically valid # even ignoring comm/compute overlap if self._has_ancestor_conflicts(bucket_info, candidate): why("has ancestor conflicts") return False # Step 2: Try different rail positions existing_wait = self.collective_info[existing_coll].wait_node candidate_start = candidate candidate_wait = candidate_info.wait_node # Try combinations in order of likelihood to succeed # (early start, later wait is most likely to work) combinations = [ ( existing_coll, candidate_wait, ), # Move candidate start early, keep wait late ( existing_coll, existing_wait, ), # Move candidate start early, move wait early (candidate_start, candidate_wait), # Keep both in place (candidate_start, existing_wait), # Keep start in place, move wait early ] for i, (start_pos, wait_pos) in enumerate(combinations): if self._try_timeline_position( 
bucket_info, candidate, start_pos, wait_pos, why ): bucket_log.debug( "bucketed %s with %s using timeline position %d: (start=%s, wait=%s)", candidate.name, existing_coll.name, i + 1, start_pos.name, wait_pos.name, ) return True why("all timeline positions failed") return False
Check if candidate can be added to bucket without breaking comm/compute overlap. Strategy: Try all timeline positions - combinations of [existing_start, candidate_start] x [existing_wait, candidate_wait]. For each position, verify: 1. Hiding intervals preserved - for any (start, hiding_compute, wait) interval, no other collective's (start, wait) pair falls between start and hiding_compute, which would force realization and break overlap due to LIFO semantics 2. Topologically valid (no dependency cycles) Return True if any timeline position satisfies both constraints.
python
torch/_inductor/fx_passes/overlap_preserving_bucketer.py
795
[ "self", "bucket_info", "candidate" ]
bool
true
6
6.32
pytorch/pytorch
96,034
unknown
false
_matchesSubString
function _matchesSubString(word: string, wordToMatchAgainst: string, i: number, j: number): IMatch[] | null { if (i === word.length) { return []; } else if (j === wordToMatchAgainst.length) { return null; } else { if (word[i] === wordToMatchAgainst[j]) { let result: IMatch[] | null = null; if (result = _matchesSubString(word, wordToMatchAgainst, i + 1, j + 1)) { return join({ start: j, end: j + 1 }, result); } return null; } return _matchesSubString(word, wordToMatchAgainst, i, j + 1); } }
Matches the characters of `word` in order within `wordToMatchAgainst`, starting the scan at index `j` and taking each character at the earliest position where it occurs. @param word The word to match. @param wordToMatchAgainst The target string. @param i Current index into `word`. @param j Current index into `wordToMatchAgainst`. @returns The list of matched single-character ranges, or null if the characters of `word` cannot all be matched in order.
typescript
src/vs/base/common/filters.ts
106
[ "word", "wordToMatchAgainst", "i", "j" ]
true
7
7.04
microsoft/vscode
179,840
jsdoc
false
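The recursion in the record above translates almost line-for-line into Python. A sketch (names are mine) that returns the matched (start, end) ranges and makes the in-order matching behavior easy to test:

```python
def matches_sub_string(word, target, i=0, j=0):
    """Return a list of (start, end) single-character ranges in `target`
    where the characters of word[i:] match in order, each taken at the
    earliest position where it occurs, or None if no full match exists."""
    if i == len(word):
        return []          # all of word matched
    if j == len(target):
        return None        # ran out of target characters
    if word[i] == target[j]:
        rest = matches_sub_string(word, target, i + 1, j + 1)
        if rest is not None:
            return [(j, j + 1)] + rest
        # commit to this match of word[i]; no backtracking to a later j
        return None
    # current target character does not match: skip it
    return matches_sub_string(word, target, i, j + 1)
```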
setIgnoredMatcher
public StrTokenizer setIgnoredMatcher(final StrMatcher ignored) { if (ignored != null) { this.ignoredMatcher = ignored; } return this; }
Sets the matcher for characters to ignore. <p> These characters are ignored when parsing the String, unless they are within a quoted region. </p> @param ignored the ignored matcher to use, null ignored. @return {@code this} instance.
java
src/main/java/org/apache/commons/lang3/text/StrTokenizer.java
978
[ "ignored" ]
StrTokenizer
true
2
8.24
apache/commons-lang
2,896
javadoc
false
retrieve_configuration_description
def retrieve_configuration_description( include_airflow: bool = True, include_providers: bool = True, selected_provider: str | None = None, ) -> dict[str, dict[str, Any]]: """ Read Airflow configuration description from YAML file. :param include_airflow: Include Airflow configs :param include_providers: Include provider configs :param selected_provider: If specified, include selected provider only :return: Python dictionary containing configs & their info """ base_configuration_description: dict[str, dict[str, Any]] = {} if include_airflow: with open(_default_config_file_path("config.yml")) as config_file: base_configuration_description.update(yaml.safe_load(config_file)) if include_providers: from airflow.providers_manager import ProvidersManager for provider, config in ProvidersManager().provider_configs: if not selected_provider or provider == selected_provider: base_configuration_description.update(config) return base_configuration_description
Read Airflow configuration description from YAML file. :param include_airflow: Include Airflow configs :param include_providers: Include provider configs :param selected_provider: If specified, include selected provider only :return: Python dictionary containing configs & their info
python
airflow-core/src/airflow/configuration.py
149
[ "include_airflow", "include_providers", "selected_provider" ]
dict[str, dict[str, Any]]
true
6
7.6
apache/airflow
43,597
sphinx
false
destroy
@Override public void destroy() throws Exception { if (isSingleton()) { destroyInstance(this.singletonInstance); } }
Destroy the singleton instance, if any. @see #destroyInstance(Object)
java
spring-beans/src/main/java/org/springframework/beans/factory/config/AbstractFactoryBean.java
191
[]
void
true
2
6.08
spring-projects/spring-framework
59,386
javadoc
false
unwatchFile
function unwatchFile(filename, listener) { filename = getValidatedPath(filename); filename = pathModule.resolve(filename); const stat = statWatchers.get(filename); if (stat === undefined) return; const watchers = require('internal/fs/watchers'); if (typeof listener === 'function') { const beforeListenerCount = stat.listenerCount('change'); stat.removeListener('change', listener); if (stat.listenerCount('change') < beforeListenerCount) stat[watchers.kFSStatWatcherAddOrCleanRef]('clean'); } else { stat.removeAllListeners('change'); stat[watchers.kFSStatWatcherAddOrCleanRef]('cleanAll'); } if (stat.listenerCount('change') === 0) { stat.stop(); statWatchers.delete(filename); } }
Stops watching for changes on `filename`. @param {string | Buffer | URL} filename @param {() => any} [listener] @returns {void}
javascript
lib/fs.js
2,600
[ "filename", "listener" ]
false
6
6.24
nodejs/node
114,839
jsdoc
false
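The bookkeeping in unwatchFile (remove one listener, or all of them, and stop and forget the watcher once no listeners remain) can be modeled with a small pure-Python registry. This is a hypothetical sketch, not Node.js internals:

```python
# filename -> {"listeners": set of callbacks, "stopped": bool}
watchers = {}

def watch_file(filename, listener):
    entry = watchers.setdefault(filename, {"listeners": set(), "stopped": False})
    entry["listeners"].add(listener)

def unwatch_file(filename, listener=None):
    entry = watchers.get(filename)
    if entry is None:
        return                        # never watched: no-op
    if listener is not None:
        entry["listeners"].discard(listener)   # remove one listener
    else:
        entry["listeners"].clear()             # remove all listeners
    if not entry["listeners"]:
        entry["stopped"] = True       # analogous to stat.stop()
        del watchers[filename]        # analogous to statWatchers.delete()
```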
lookupScope
ApiRequestScope lookupScope(T key);
Define the scope of a given key for lookup. Key lookups are complicated by the need to accommodate different batching mechanics. For example, a `Metadata` request supports arbitrary batching of topic partitions in order to discover partitions leaders. This can be supported by returning a single scope object for all keys. On the other hand, `FindCoordinator` requests only support lookup of a single key. This can be supported by returning a different scope object for each lookup key. Note that if the {@link ApiRequestScope#destinationBrokerId()} maps to a specific brokerId, then lookup will be skipped. See the use of {@link StaticBrokerStrategy} in {@link DescribeProducersHandler} for an example of this usage. @param key the lookup key @return request scope indicating how lookup requests can be batched together
java
clients/src/main/java/org/apache/kafka/clients/admin/internals/AdminApiLookupStrategy.java
54
[ "key" ]
ApiRequestScope
true
1
6.48
apache/kafka
31,560
javadoc
false
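The batching mechanics the lookupScope javadoc describes (one shared scope means one big batch; a distinct scope per key means per-key requests) reduce to grouping keys by their scope object. A minimal sketch, with names of my own choosing:

```python
from collections import defaultdict

def batch_keys_by_scope(keys, lookup_scope):
    """Group lookup keys into request batches keyed by the scope object
    returned for each key, mirroring how arbitrary batching (Metadata)
    vs. single-key lookup (FindCoordinator) can share one strategy API."""
    batches = defaultdict(list)
    for key in keys:
        batches[lookup_scope(key)].append(key)
    return dict(batches)
```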
appendExportsOfImportEqualsDeclaration
function appendExportsOfImportEqualsDeclaration(statements: Statement[] | undefined, decl: ImportEqualsDeclaration): Statement[] | undefined { if (moduleInfo.exportEquals) { return statements; } return appendExportsOfDeclaration(statements, decl); }
Appends the export of an ImportEqualsDeclaration to a statement list, returning the statement list. @param statements A statement list to which the down-level export statements are to be appended. If `statements` is `undefined`, a new array is allocated if statements are appended. @param decl The declaration whose exports are to be recorded.
typescript
src/compiler/transformers/module/system.ts
1,055
[ "statements", "decl" ]
true
2
6.72
microsoft/TypeScript
107,154
jsdoc
false
from
static SpringConfigurationPropertySource from(PropertySource<?> source) { Assert.notNull(source, "'source' must not be null"); boolean systemEnvironmentSource = isSystemEnvironmentPropertySource(source); PropertyMapper[] mappers = (!systemEnvironmentSource) ? DEFAULT_MAPPERS : SYSTEM_ENVIRONMENT_MAPPERS; return (!isFullEnumerable(source)) ? new SpringConfigurationPropertySource(source, systemEnvironmentSource, mappers) : new SpringIterableConfigurationPropertySource((EnumerablePropertySource<?>) source, systemEnvironmentSource, mappers); }
Create a new {@link SpringConfigurationPropertySource} for the specified {@link PropertySource}. @param source the source Spring {@link PropertySource} @return a {@link SpringConfigurationPropertySource} or {@link SpringIterableConfigurationPropertySource} instance
java
core/spring-boot/src/main/java/org/springframework/boot/context/properties/source/SpringConfigurationPropertySource.java
171
[ "source" ]
SpringConfigurationPropertySource
true
3
7.12
spring-projects/spring-boot
79,428
javadoc
false
nullToEmpty
public static double[] nullToEmpty(final double[] array) { return isEmpty(array) ? EMPTY_DOUBLE_ARRAY : array; }
Defensive programming technique to change a {@code null} reference to an empty one. <p> This method returns an empty array for a {@code null} input array. </p> <p> As a memory optimizing technique an empty array passed in will be overridden with the empty {@code public static} references in this class. </p> @param array the array to check for {@code null} or empty. @return the same array, {@code public static} empty array if {@code null} or empty input. @since 2.5
java
src/main/java/org/apache/commons/lang3/ArrayUtils.java
4,413
[ "array" ]
true
2
8.16
apache/commons-lang
2,896
javadoc
false
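The memory-sharing trick the nullToEmpty javadoc describes (null or empty inputs are replaced by one shared static empty array) has a direct pure-Python analogue using a shared tuple. A hypothetical sketch, not part of commons-lang:

```python
EMPTY = ()  # shared immutable empty sequence, analogous to EMPTY_DOUBLE_ARRAY

def null_to_empty(seq):
    """Return the shared empty tuple for None or empty input, otherwise
    the input unchanged, mirroring ArrayUtils.nullToEmpty."""
    return EMPTY if not seq else seq
```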
unmodifiableNavigableSet
public static <E extends @Nullable Object> NavigableSet<E> unmodifiableNavigableSet( NavigableSet<E> set) { if (set instanceof ImmutableCollection || set instanceof UnmodifiableNavigableSet) { return set; } return new UnmodifiableNavigableSet<>(set); }
Returns an unmodifiable view of the specified navigable set. This method allows modules to provide users with "read-only" access to internal navigable sets. Query operations on the returned set "read through" to the specified set, and attempts to modify the returned set, whether direct or via its collection views, result in an {@code UnsupportedOperationException}. <p>The returned navigable set will be serializable if the specified navigable set is serializable. <p><b>Java 8+ users and later:</b> Prefer {@link Collections#unmodifiableNavigableSet}. @param set the navigable set for which an unmodifiable view is to be returned @return an unmodifiable view of the specified navigable set @since 12.0
java
android/guava/src/com/google/common/collect/Sets.java
1,906
[ "set" ]
true
3
7.76
google/guava
51,352
javadoc
false
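The two behaviors documented above (read-through queries, rejected mutation, and the short-circuit when the input is already unmodifiable) can be sketched in pure Python. This is a hypothetical helper for illustration, not Guava's implementation:

```python
class UnmodifiableSetView:
    """Read-through view of a set that rejects mutation."""
    def __init__(self, backing):
        self._backing = backing
    def __contains__(self, item):
        return item in self._backing
    def __iter__(self):
        # iterate in sorted order, loosely mimicking a navigable set
        return iter(sorted(self._backing))
    def __len__(self):
        return len(self._backing)
    def add(self, item):
        raise TypeError("unmodifiable view")
    def remove(self, item):
        raise TypeError("unmodifiable view")

def unmodifiable_set(s):
    # wrapping an existing view returns it unchanged, mirroring the
    # instanceof check in the Java code
    return s if isinstance(s, UnmodifiableSetView) else UnmodifiableSetView(s)
```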
trimToNull
public static String trimToNull(final String str) { final String ts = trim(str); return isEmpty(ts) ? null : ts; }
Removes control characters (char &lt;= 32) from both ends of this String returning {@code null} if the String is empty ("") after the trim or if it is {@code null}. <p> The String is trimmed using {@link String#trim()}. Trim removes start and end characters &lt;= 32. To strip whitespace use {@link #stripToNull(String)}. </p> <pre> StringUtils.trimToNull(null) = null StringUtils.trimToNull("") = null StringUtils.trimToNull(" ") = null StringUtils.trimToNull("abc") = "abc" StringUtils.trimToNull(" abc ") = "abc" </pre> @param str the String to be trimmed, may be null. @return the trimmed String, {@code null} if only chars &lt;= 32, empty or null String input. @since 2.0
java
src/main/java/org/apache/commons/lang3/StringUtils.java
8,772
[ "str" ]
String
true
2
7.84
apache/commons-lang
2,896
javadoc
false
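The semantics documented above hinge on Java's String.trim removing characters with code points <= 32 from both ends, then mapping an empty result to null. A pure-Python sketch of the same behavior (names are mine):

```python
def _java_trim(s):
    """Strip leading/trailing chars <= U+0020, like java.lang.String.trim."""
    start, end = 0, len(s)
    while start < end and ord(s[start]) <= 32:
        start += 1
    while end > start and ord(s[end - 1]) <= 32:
        end -= 1
    return s[start:end]

def trim_to_null(s):
    """Return the trimmed string, or None if the input is None or trims
    to nothing, mirroring StringUtils.trimToNull."""
    if s is None:
        return None
    trimmed = _java_trim(s)
    return trimmed if trimmed else None
```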
parsePropertyNameWorker
function parsePropertyNameWorker(allowComputedPropertyNames: boolean): PropertyName { if (token() === SyntaxKind.StringLiteral || token() === SyntaxKind.NumericLiteral || token() === SyntaxKind.BigIntLiteral) { const node = parseLiteralNode() as StringLiteral | NumericLiteral | BigIntLiteral; node.text = internIdentifier(node.text); return node; } if (allowComputedPropertyNames && token() === SyntaxKind.OpenBracketToken) { return parseComputedPropertyName(); } if (token() === SyntaxKind.PrivateIdentifier) { return parsePrivateIdentifier(); } return parseIdentifierName(); }
Parses a property name: a string, numeric, or bigint literal (interning the literal's text), a computed property name in brackets when permitted, a private identifier, or an identifier name. @param allowComputedPropertyNames Whether a bracketed computed property name may be parsed.
typescript
src/compiler/parser.ts
2,714
[ "allowComputedPropertyNames" ]
true
7
6.72
microsoft/TypeScript
107,154
jsdoc
false
map
def map(self, func: Callable, subset: Subset | None = None, **kwargs) -> Styler: """ Apply a CSS-styling function elementwise. Updates the HTML representation with the result. Parameters ---------- func : function ``func`` should take a scalar and return a string. %(subset)s **kwargs : dict Pass along to ``func``. Returns ------- Styler Instance of class with CSS-styling function applied elementwise. See Also -------- Styler.map_index: Apply a CSS-styling function to headers elementwise. Styler.apply_index: Apply a CSS-styling function to headers level-wise. Styler.apply: Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes ----- The elements of the output of ``func`` should be CSS styles as strings, in the format 'attribute: value; attribute2: value2; ...' or, if nothing is to be applied to that element, an empty string or ``None``. Examples -------- >>> def color_negative(v, color): ... return f"color: {color};" if v < 0 else None >>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"]) >>> df.style.map(color_negative, color="red") # doctest: +SKIP Using ``subset`` to restrict application to a single column or multiple columns >>> df.style.map(color_negative, color="red", subset="A") ... # doctest: +SKIP >>> df.style.map(color_negative, color="red", subset=["A", "B"]) ... # doctest: +SKIP Using a 2d input to ``subset`` to select rows in addition to columns >>> df.style.map( ... color_negative, color="red", subset=([0, 1, 2], slice(None)) ... ) # doctest: +SKIP >>> df.style.map(color_negative, color="red", subset=(slice(0, 5, 2), "A")) ... # doctest: +SKIP See `Table Visualization <../../user_guide/style.ipynb>`_ user guide for more details. """ self._todo.append((lambda instance: instance._map, (func, subset), kwargs)) return self
Apply a CSS-styling function elementwise. Updates the HTML representation with the result. Parameters ---------- func : function ``func`` should take a scalar and return a string. %(subset)s **kwargs : dict Pass along to ``func``. Returns ------- Styler Instance of class with CSS-styling function applied elementwise. See Also -------- Styler.map_index: Apply a CSS-styling function to headers elementwise. Styler.apply_index: Apply a CSS-styling function to headers level-wise. Styler.apply: Apply a CSS-styling function column-wise, row-wise, or table-wise. Notes ----- The elements of the output of ``func`` should be CSS styles as strings, in the format 'attribute: value; attribute2: value2; ...' or, if nothing is to be applied to that element, an empty string or ``None``. Examples -------- >>> def color_negative(v, color): ... return f"color: {color};" if v < 0 else None >>> df = pd.DataFrame(np.random.randn(5, 2), columns=["A", "B"]) >>> df.style.map(color_negative, color="red") # doctest: +SKIP Using ``subset`` to restrict application to a single column or multiple columns >>> df.style.map(color_negative, color="red", subset="A") ... # doctest: +SKIP >>> df.style.map(color_negative, color="red", subset=["A", "B"]) ... # doctest: +SKIP Using a 2d input to ``subset`` to select rows in addition to columns >>> df.style.map( ... color_negative, color="red", subset=([0, 1, 2], slice(None)) ... ) # doctest: +SKIP >>> df.style.map(color_negative, color="red", subset=(slice(0, 5, 2), "A")) ... # doctest: +SKIP See `Table Visualization <../../user_guide/style.ipynb>`_ user guide for more details.
python
pandas/io/formats/style.py
2,138
[ "self", "func", "subset" ]
Styler
true
1
6.8
pandas-dev/pandas
47,362
numpy
false
to_dense
def to_dense(self) -> Series: """ Convert a Series from sparse values to dense. Returns ------- Series: A Series with the same values, stored as a dense array. Examples -------- >>> series = pd.Series(pd.arrays.SparseArray([0, 1, 0])) >>> series 0 0 1 1 2 0 dtype: Sparse[int64, 0] >>> series.sparse.to_dense() 0 0 1 1 2 0 dtype: int64 """ from pandas import Series return Series( self._parent.array.to_dense(), index=self._parent.index, name=self._parent.name, copy=False, )
Convert a Series from sparse values to dense. Returns ------- Series: A Series with the same values, stored as a dense array. Examples -------- >>> series = pd.Series(pd.arrays.SparseArray([0, 1, 0])) >>> series 0 0 1 1 2 0 dtype: Sparse[int64, 0] >>> series.sparse.to_dense() 0 0 1 1 2 0 dtype: int64
python
pandas/core/arrays/sparse/accessor.py
244
[ "self" ]
Series
true
1
7.28
pandas-dev/pandas
47,362
unknown
false
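What to_dense does, densifying sparse storage back into a full array, can be modeled without pandas by expanding a {position: value} mapping over a fill value. A toy sketch, not pandas' SparseArray implementation:

```python
def sparse_to_dense(length, fill_value, sparse_values):
    """Expand sparse storage (positions of non-fill values) into a dense
    list, a toy model of Series.sparse.to_dense."""
    dense = [fill_value] * length
    for pos, val in sparse_values.items():
        dense[pos] = val
    return dense
```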
pct_change
def pct_change( self, periods: int = 1, fill_method: None = None, freq=None, ): """ Calculate pct_change of each value to previous entry in group. Parameters ---------- periods : int, default 1 Periods to shift for calculating percentage change. Comparing with a period of 1 means adjacent elements are compared, whereas a period of 2 compares every other element. fill_method : None Must be None. This argument will be removed in a future version of pandas. freq : str, pandas offset object, or None, default None The frequency increment for time series data (e.g., 'M' for month-end). If None, the frequency is inferred from the index. Relevant for time series data only. Returns ------- Series or DataFrame Percentage changes within each group. %(see_also)s Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "b", "b"] >>> ser = pd.Series([1, 2, 3, 4], index=lst) >>> ser a 1 a 2 b 3 b 4 dtype: int64 >>> ser.groupby(level=0).pct_change() a NaN a 1.000000 b NaN b 0.333333 dtype: float64 For DataFrameGroupBy: >>> data = [[1, 2, 3], [1, 5, 6], [2, 5, 8], [2, 6, 9]] >>> df = pd.DataFrame( ... data, ... columns=["a", "b", "c"], ... index=["tuna", "salmon", "catfish", "goldfish"], ... ) >>> df a b c tuna 1 2 3 salmon 1 5 6 catfish 2 5 8 goldfish 2 6 9 >>> df.groupby("a").pct_change() b c tuna NaN NaN salmon 1.5 1.000 catfish NaN NaN goldfish 0.2 0.125 """ # GH#53491 if fill_method is not None: raise ValueError(f"fill_method must be None; got {fill_method=}.") # TODO(GH#23918): Remove this conditional for SeriesGroupBy when # GH#23918 is fixed if freq is not None: f = lambda x: x.pct_change( periods=periods, freq=freq, axis=0, ) return self._python_apply_general(f, self._selected_obj, is_transform=True) if fill_method is None: # GH30463 op = "ffill" else: op = fill_method filled = getattr(self, op)(limit=0) fill_grp = filled.groupby(self._grouper.codes, group_keys=self.group_keys) shifted = fill_grp.shift(periods=periods, freq=freq) return (filled / shifted) - 1
Calculate pct_change of each value to previous entry in group. Parameters ---------- periods : int, default 1 Periods to shift for calculating percentage change. Comparing with a period of 1 means adjacent elements are compared, whereas a period of 2 compares every other element. fill_method : None Must be None. This argument will be removed in a future version of pandas. freq : str, pandas offset object, or None, default None The frequency increment for time series data (e.g., 'M' for month-end). If None, the frequency is inferred from the index. Relevant for time series data only. Returns ------- Series or DataFrame Percentage changes within each group. %(see_also)s Examples -------- For SeriesGroupBy: >>> lst = ["a", "a", "b", "b"] >>> ser = pd.Series([1, 2, 3, 4], index=lst) >>> ser a 1 a 2 b 3 b 4 dtype: int64 >>> ser.groupby(level=0).pct_change() a NaN a 1.000000 b NaN b 0.333333 dtype: float64 For DataFrameGroupBy: >>> data = [[1, 2, 3], [1, 5, 6], [2, 5, 8], [2, 6, 9]] >>> df = pd.DataFrame( ... data, ... columns=["a", "b", "c"], ... index=["tuna", "salmon", "catfish", "goldfish"], ... ) >>> df a b c tuna 1 2 3 salmon 1 5 6 catfish 2 5 8 goldfish 2 6 9 >>> df.groupby("a").pct_change() b c tuna NaN NaN salmon 1.5 1.000 catfish NaN NaN goldfish 0.2 0.125
python
pandas/core/groupby/groupby.py
5,350
[ "self", "periods", "fill_method", "freq" ]
true
5
8.56
pandas-dev/pandas
47,362
numpy
false
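The core computation above, v[i] / v[i - periods] - 1 within each group, with no result where a group lacks a prior value, can be sketched in pure Python. A toy model of GroupBy.pct_change using None where pandas emits NaN:

```python
def group_pct_change(values, groups, periods=1):
    """Percent change of each value relative to the value `periods`
    positions earlier within the same group; None where no prior value
    exists in the group."""
    history = {}  # group -> values seen so far, in order
    out = []
    for v, g in zip(values, groups):
        h = history.setdefault(g, [])
        if len(h) >= periods:
            out.append(v / h[-periods] - 1)
        else:
            out.append(None)
        h.append(v)
    return out
```

With the SeriesGroupBy example from the docstring (values 1, 2, 3, 4 in groups a, a, b, b) this yields None, 1.0, None, 0.333...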
ping
def ping(self, destination=None, timeout=1.0, **kwargs): """Ping all (or specific) workers. >>> app.control.ping() [{'celery@node1': {'ok': 'pong'}}, {'celery@node2': {'ok': 'pong'}}] >>> app.control.ping(destination=['celery@node2']) [{'celery@node2': {'ok': 'pong'}}] Returns: List[Dict]: List of ``{HOSTNAME: {'ok': 'pong'}}`` dictionaries. See Also: :meth:`broadcast` for supported keyword arguments. """ return self.broadcast( 'ping', reply=True, arguments={}, destination=destination, timeout=timeout, **kwargs)
Ping all (or specific) workers. >>> app.control.ping() [{'celery@node1': {'ok': 'pong'}}, {'celery@node2': {'ok': 'pong'}}] >>> app.control.ping(destination=['celery@node2']) [{'celery@node2': {'ok': 'pong'}}] Returns: List[Dict]: List of ``{HOSTNAME: {'ok': 'pong'}}`` dictionaries. See Also: :meth:`broadcast` for supported keyword arguments.
python
celery/app/control.py
558
[ "self", "destination", "timeout" ]
false
1
7.04
celery/celery
27,741
unknown
false
count
def count(self, axis=None, keepdims=np._NoValue): """ Count the non-masked elements of the array along the given axis. Parameters ---------- axis : None or int or tuple of ints, optional Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before. keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns ------- result : ndarray or scalar An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned. See Also -------- ma.count_masked : Count masked elements in array or along a given axis. Examples -------- >>> import numpy.ma as ma >>> a = ma.arange(6).reshape((2, 3)) >>> a[1, :] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, --, --]], mask=[[False, False, False], [ True, True, True]], fill_value=999999) >>> a.count() 3 When the `axis` keyword is specified an array of appropriate size is returned. 
>>> a.count(axis=0) array([1, 1, 1]) >>> a.count(axis=1) array([3, 0]) """ kwargs = {} if keepdims is np._NoValue else {'keepdims': keepdims} m = self._mask # special case for matrices (we assume no other subclasses modify # their dimensions) if isinstance(self.data, np.matrix): if m is nomask: m = np.zeros(self.shape, dtype=np.bool) m = m.view(type(self.data)) if m is nomask: # compare to _count_reduce_items in _methods.py if self.shape == (): if axis not in (None, 0): raise np.exceptions.AxisError(axis=axis, ndim=self.ndim) return 1 elif axis is None: if kwargs.get('keepdims'): return np.array(self.size, dtype=np.intp, ndmin=self.ndim) return self.size axes = normalize_axis_tuple(axis, self.ndim) items = 1 for ax in axes: items *= self.shape[ax] if kwargs.get('keepdims'): out_dims = list(self.shape) for a in axes: out_dims[a] = 1 else: out_dims = [d for n, d in enumerate(self.shape) if n not in axes] # make sure to return a 0-d array if axis is supplied return np.full(out_dims, items, dtype=np.intp) # take care of the masked singleton if self is masked: return 0 return (~m).sum(axis=axis, dtype=np.intp, **kwargs)
Count the non-masked elements of the array along the given axis. Parameters ---------- axis : None or int or tuple of ints, optional Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before. keepdims : bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns ------- result : ndarray or scalar An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned. See Also -------- ma.count_masked : Count masked elements in array or along a given axis. Examples -------- >>> import numpy.ma as ma >>> a = ma.arange(6).reshape((2, 3)) >>> a[1, :] = ma.masked >>> a masked_array( data=[[0, 1, 2], [--, --, --]], mask=[[False, False, False], [ True, True, True]], fill_value=999999) >>> a.count() 3 When the `axis` keyword is specified an array of appropriate size is returned. >>> a.count(axis=0) array([1, 1, 1]) >>> a.count(axis=1) array([3, 0])
python
numpy/ma/core.py
4,593
[ "self", "axis", "keepdims" ]
false
14
7.76
numpy/numpy
31,054
numpy
false
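The counting semantics documented above, for the 2-D case, reduce to counting False entries of the mask overall, per column, or per row. A pure-Python toy model (not numpy.ma internals):

```python
def count_unmasked_2d(mask, axis=None):
    """Count unmasked (False) entries of a 2-D boolean mask: the grand
    total by default, per-column for axis=0, per-row for axis=1. A toy
    model of MaskedArray.count for the 2-D case."""
    if axis is None:
        return sum(not m for row in mask for m in row)
    if axis == 0:
        return [sum(not row[j] for row in mask) for j in range(len(mask[0]))]
    return [sum(not m for m in row) for row in mask]
```

The mask from the docstring example (second row fully masked) reproduces the documented results: 3 total, [1, 1, 1] along axis 0, [3, 0] along axis 1.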
parse
@Override public Date parse(final String source) throws ParseException { final ParsePosition pp = new ParsePosition(0); final Date date = parse(source, pp); if (date == null) { // Add a note regarding supported date range if (locale.equals(JAPANESE_IMPERIAL)) { throw new ParseException("(The " + locale + " locale does not support dates before 1868 AD)\nUnparseable date: \"" + source, pp.getErrorIndex()); } throw new ParseException("Unparseable date: " + source, pp.getErrorIndex()); } return date; }
Parses a date/time string, throwing a {@link ParseException} if the source cannot be parsed. For the Japanese imperial locale, a note that dates before 1868 AD are unsupported is appended to the error message. @param source the date string to parse @return the parsed {@code Date} @throws ParseException if the string cannot be parsed
java
src/main/java/org/apache/commons/lang3/time/FastDateParser.java
1,020
[ "source" ]
Date
true
3
6.08
apache/commons-lang
2,896
javadoc
false
list_nodegroups
def list_nodegroups( self, clusterName: str, verbose: bool = False, ) -> list: """ List all Amazon EKS managed node groups associated with the specified cluster. .. seealso:: - :external+boto3:py:meth:`EKS.Client.list_nodegroups` :param clusterName: The name of the Amazon EKS Cluster containing nodegroups to list. :param verbose: Provides additional logging if set to True. Defaults to False. :return: A List of nodegroup names within the given cluster. """ eks_client = self.conn list_nodegroups_call = partial(eks_client.list_nodegroups, clusterName=clusterName) return self._list_all(api_call=list_nodegroups_call, response_key="nodegroups", verbose=verbose)
List all Amazon EKS managed node groups associated with the specified cluster. .. seealso:: - :external+boto3:py:meth:`EKS.Client.list_nodegroups` :param clusterName: The name of the Amazon EKS Cluster containing nodegroups to list. :param verbose: Provides additional logging if set to True. Defaults to False. :return: A List of nodegroup names within the given cluster.
python
providers/amazon/src/airflow/providers/amazon/aws/hooks/eks.py
480
[ "self", "clusterName", "verbose" ]
list
true
1
6.4
apache/airflow
43,597
sphinx
false
check_symmetric
def check_symmetric(array, *, tol=1e-10, raise_warning=True, raise_exception=False): """Make sure that array is 2D, square and symmetric. If the array is not symmetric, then a symmetrized version is returned. Optionally, a warning or exception is raised if the matrix is not symmetric. Parameters ---------- array : {ndarray, sparse matrix} Input object to check / convert. Must be two-dimensional and square, otherwise a ValueError will be raised. tol : float, default=1e-10 Absolute tolerance for equivalence of arrays. Default = 1E-10. raise_warning : bool, default=True If True then raise a warning if conversion is required. raise_exception : bool, default=False If True then raise an exception if array is not symmetric. Returns ------- array_sym : {ndarray, sparse matrix} Symmetrized version of the input array, i.e. the average of array and array.transpose(). If sparse, then duplicate entries are first summed and zeros are eliminated. Examples -------- >>> import numpy as np >>> from sklearn.utils.validation import check_symmetric >>> symmetric_array = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) >>> check_symmetric(symmetric_array) array([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) >>> from scipy.sparse import csr_matrix >>> sparse_symmetric_array = csr_matrix(symmetric_array) >>> check_symmetric(sparse_symmetric_array) <Compressed Sparse Row sparse matrix of dtype 'int64' with 6 stored elements and shape (3, 3)> """ if (array.ndim != 2) or (array.shape[0] != array.shape[1]): raise ValueError( "array must be 2-dimensional and square. shape = {0}".format(array.shape) ) if sp.issparse(array): diff = array - array.T # only csr, csc, and coo have `data` attribute if diff.format not in ["csr", "csc", "coo"]: diff = diff.tocsr() symmetric = np.all(abs(diff.data) < tol) else: symmetric = np.allclose(array, array.T, atol=tol) if not symmetric: if raise_exception: raise ValueError("Array must be symmetric") if raise_warning: warnings.warn( ( "Array is not symmetric, and will be converted " "to symmetric by average with its transpose." ), stacklevel=2, ) if sp.issparse(array): conversion = "to" + array.format array = getattr(0.5 * (array + array.T), conversion)() else: array = 0.5 * (array + array.T) return array
Make sure that array is 2D, square and symmetric. If the array is not symmetric, then a symmetrized version is returned. Optionally, a warning or exception is raised if the matrix is not symmetric. Parameters ---------- array : {ndarray, sparse matrix} Input object to check / convert. Must be two-dimensional and square, otherwise a ValueError will be raised. tol : float, default=1e-10 Absolute tolerance for equivalence of arrays. Default = 1E-10. raise_warning : bool, default=True If True then raise a warning if conversion is required. raise_exception : bool, default=False If True then raise an exception if array is not symmetric. Returns ------- array_sym : {ndarray, sparse matrix} Symmetrized version of the input array, i.e. the average of array and array.transpose(). If sparse, then duplicate entries are first summed and zeros are eliminated. Examples -------- >>> import numpy as np >>> from sklearn.utils.validation import check_symmetric >>> symmetric_array = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) >>> check_symmetric(symmetric_array) array([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) >>> from scipy.sparse import csr_matrix >>> sparse_symmetric_array = csr_matrix(symmetric_array) >>> check_symmetric(sparse_symmetric_array) <Compressed Sparse Row sparse matrix of dtype 'int64' with 6 stored elements and shape (3, 3)>
python
sklearn/utils/validation.py
1,505
[ "array", "tol", "raise_warning", "raise_exception" ]
false
11
7.6
scikit-learn/scikit-learn
64,340
numpy
false
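The dense-array path of `check_symmetric` above can be sketched in pure Python without sklearn or scipy. `check_symmetric_sketch` is a hypothetical name; the sparse branch and the warning/exception options are omitted.

```python
def check_symmetric_sketch(matrix, tol=1e-10):
    """Validate that matrix (a list of equal-length rows) is square, and
    return it unchanged when symmetric within tol; otherwise return the
    average of the matrix and its transpose, the same repair rule the
    sklearn helper documents for dense arrays."""
    n = len(matrix)
    if any(len(row) != n for row in matrix):
        raise ValueError("matrix must be 2-dimensional and square")
    symmetric = all(
        abs(matrix[i][j] - matrix[j][i]) <= tol
        for i in range(n)
        for j in range(n)
    )
    if symmetric:
        return matrix
    # Symmetrize by averaging with the transpose, element by element.
    return [
        [0.5 * (matrix[i][j] + matrix[j][i]) for j in range(n)]
        for i in range(n)
    ]
```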
toString
@Override public String toString() { return (this.pid != null) ? String.valueOf(this.pid) : "???"; }
Return a string representation of the application PID, or {@code "???"} when the PID is not available.
java
core/spring-boot/src/main/java/org/springframework/boot/system/ApplicationPid.java
96
[]
String
true
2
8.16
spring-projects/spring-boot
79,428
javadoc
false
checkWeightWithWeigher
private void checkWeightWithWeigher() { if (weigher == null) { checkState(maximumWeight == UNSET_INT, "maximumWeight requires weigher"); } else { if (strictParsing) { checkState(maximumWeight != UNSET_INT, "weigher requires maximumWeight"); } else { if (maximumWeight == UNSET_INT) { LoggerHolder.logger.log( Level.WARNING, "ignoring weigher specified without maximumWeight"); } } } }
Checks that the {@code weigher} and {@code maximumWeight} settings are consistent: a {@code maximumWeight} requires a weigher and, under strict parsing, a weigher requires a {@code maximumWeight}; otherwise a warning is logged and the weigher is ignored.
java
android/guava/src/com/google/common/cache/CacheBuilder.java
1,064
[]
void
true
4
7.92
google/guava
51,352
javadoc
false
removeAll
@CanIgnoreReturnValue public static boolean removeAll(Iterable<?> removeFrom, Collection<?> elementsToRemove) { return (removeFrom instanceof Collection) ? ((Collection<?>) removeFrom).removeAll(checkNotNull(elementsToRemove)) : Iterators.removeAll(removeFrom.iterator(), elementsToRemove); }
Removes, from an iterable, every element that belongs to the provided collection. <p>This method calls {@link Collection#removeAll} if {@code iterable} is a collection, and {@link Iterators#removeAll} otherwise. @param removeFrom the iterable to (potentially) remove elements from @param elementsToRemove the elements to remove @return {@code true} if any element was removed from {@code iterable}
java
android/guava/src/com/google/common/collect/Iterables.java
147
[ "removeFrom", "elementsToRemove" ]
true
2
7.28
google/guava
51,352
javadoc
false
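The contract documented for `Iterables.removeAll` — remove every element belonging to a collection, report whether anything changed — can be mimicked in Python. `remove_all` is an illustrative name, not part of any library:

```python
def remove_all(remove_from, elements_to_remove):
    """Remove, in place, every element of remove_from (a list) that
    belongs to elements_to_remove; return True iff anything was removed."""
    to_remove = set(elements_to_remove)
    kept = [item for item in remove_from if item not in to_remove]
    changed = len(kept) != len(remove_from)
    remove_from[:] = kept  # mutate the caller's list, like the Java API
    return changed
```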
startHeartbeatThreadIfNeeded
private synchronized void startHeartbeatThreadIfNeeded() { if (heartbeatThread == null) { heartbeatThread = heartbeatThreadSupplier.orElse(HeartbeatThread::new).get(); heartbeatThread.start(); } }
Lazily create and start the heartbeat thread if it has not already been started. Synchronized, so safe to call from multiple threads.
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
424
[]
void
true
2
7.6
apache/kafka
31,560
javadoc
false
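The start-once pattern used by `startHeartbeatThreadIfNeeded` (a `synchronized` method guarding a nullable thread field) translates to Python as a lock-protected lazy initializer. Class and method names below are illustrative:

```python
import threading

class HeartbeatOwner:
    """Lazily create and start a single background thread, guarded by a
    lock so concurrent callers cannot start it twice (the Java original
    relies on a synchronized method for the same guarantee)."""

    def __init__(self, target):
        self._target = target
        self._lock = threading.Lock()
        self._thread = None

    def start_heartbeat_thread_if_needed(self):
        with self._lock:
            if self._thread is None:
                self._thread = threading.Thread(target=self._target, daemon=True)
                self._thread.start()
            return self._thread
```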
onSendError
public void onSendError(ProducerRecord<K, V> record, TopicPartition interceptTopicPartition, Exception exception) { for (Plugin<ProducerInterceptor<K, V>> interceptorPlugin : this.interceptorPlugins) { try { Headers headers = record != null ? record.headers() : new RecordHeaders(); if (headers instanceof RecordHeaders && !((RecordHeaders) headers).isReadOnly()) { // make a copy of the headers to make sure we don't change the state of origin record's headers. // original headers are still writable because client might want to mutate them before retrying. RecordHeaders recordHeaders = (RecordHeaders) headers; headers = new RecordHeaders(recordHeaders); ((RecordHeaders) headers).setReadOnly(); } if (record == null && interceptTopicPartition == null) { interceptorPlugin.get().onAcknowledgement(null, exception, headers); } else { if (interceptTopicPartition == null) { interceptTopicPartition = extractTopicPartition(record); } interceptorPlugin.get().onAcknowledgement(new RecordMetadata(interceptTopicPartition, -1, -1, RecordBatch.NO_TIMESTAMP, -1, -1), exception, headers); } } catch (Exception e) { // do not propagate interceptor exceptions, just log log.warn("Error executing interceptor onAcknowledgement callback", e); } } }
This method is called when sending the record fails in the {@link ProducerInterceptor#onSend(ProducerRecord)} method. This method calls the {@link ProducerInterceptor#onAcknowledgement(RecordMetadata, Exception, Headers)} method for each interceptor. @param record The record from client @param interceptTopicPartition The topic/partition for the record if an error occurred after partition gets assigned; the topic part of interceptTopicPartition is the same as in record. @param exception The exception thrown during processing of this record.
java
clients/src/main/java/org/apache/kafka/clients/producer/internals/ProducerInterceptors.java
113
[ "record", "interceptTopicPartition", "exception" ]
void
true
8
6.4
apache/kafka
31,560
javadoc
false
putmask
def putmask(self, mask, new) -> list[Block]: """ putmask the data to the block; it is possible that we may create a new dtype of block Return the resulting block(s). Parameters ---------- mask : np.ndarray[bool], SparseArray[bool], or BooleanArray new : an ndarray/object Returns ------- List[Block] """ orig_mask = mask values = cast(np.ndarray, self.values) mask, noop = validate_putmask(values.T, mask) assert not isinstance(new, (ABCIndex, ABCSeries, ABCDataFrame)) if new is lib.no_default: new = self.fill_value new = self._standardize_fill_value(new) new = extract_array(new, extract_numpy=True) if noop: return [self.copy(deep=False)] try: casted = np_can_hold_element(values.dtype, new) self = self._maybe_copy(inplace=True) values = cast(np.ndarray, self.values) putmask_without_repeat(values.T, mask, casted) return [self] except LossySetitemError: if self.ndim == 1 or self.shape[0] == 1: # no need to split columns if not is_list_like(new): # using just new[indexer] can't save us the need to cast return self.coerce_to_target_dtype( new, raise_on_upcast=True ).putmask(mask, new) else: indexer = mask.nonzero()[0] nb = self.setitem(indexer, new[indexer]) return [nb] else: is_array = isinstance(new, np.ndarray) res_blocks = [] for i, nb in enumerate(self._split()): n = new if is_array: # we have a different value per-column n = new[:, i : i + 1] submask = orig_mask[:, i : i + 1] rbs = nb.putmask(submask, n) res_blocks.extend(rbs) return res_blocks
putmask the data to the block; it is possible that we may create a new dtype of block Return the resulting block(s). Parameters ---------- mask : np.ndarray[bool], SparseArray[bool], or BooleanArray new : an ndarray/object Returns ------- List[Block]
python
pandas/core/internals/blocks.py
1,144
[ "self", "mask", "new" ]
list[Block]
true
10
6.64
pandas-dev/pandas
47,362
numpy
false
resolve
private List<StandardConfigDataResource> resolve(Set<StandardConfigDataReference> references) { List<StandardConfigDataResource> resolved = new ArrayList<>(); for (StandardConfigDataReference reference : references) { resolved.addAll(resolve(reference)); } if (resolved.isEmpty()) { resolved.addAll(resolveEmptyDirectories(references)); } return resolved; }
Resolve the given references into config data resources, falling back to resolving empty directories when no reference matches.
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/StandardConfigDataLocationResolver.java
259
[ "references" ]
true
2
6.08
spring-projects/spring-boot
79,428
javadoc
false
registerShutdownHookIfNecessary
private void registerShutdownHookIfNecessary(Environment environment, LoggingSystem loggingSystem) { if (environment.getProperty(REGISTER_SHUTDOWN_HOOK_PROPERTY, Boolean.class, true)) { Runnable shutdownHandler = loggingSystem.getShutdownHandler(); if (shutdownHandler != null && shutdownHookRegistered.compareAndSet(false, true)) { registerShutdownHook(shutdownHandler); } } }
Register the logging system's shutdown handler as a shutdown hook, unless disabled via the register-shutdown-hook property or already registered.
java
core/spring-boot/src/main/java/org/springframework/boot/context/logging/LoggingApplicationListener.java
425
[ "environment", "loggingSystem" ]
void
true
4
6.56
spring-projects/spring-boot
79,428
javadoc
false
create
public static NodeApiVersions create(Collection<ApiVersion> overrides) { List<ApiVersion> apiVersions = new LinkedList<>(overrides); for (ApiKeys apiKey : ApiKeys.clientApis()) { boolean exists = false; for (ApiVersion apiVersion : apiVersions) { if (apiVersion.apiKey() == apiKey.id) { exists = true; break; } } if (!exists) apiVersions.add(ApiVersionsResponse.toApiVersion(apiKey)); } return new NodeApiVersions(apiVersions, Collections.emptyList(), Collections.emptyList(), -1); }
Create a NodeApiVersions object. @param overrides API versions to override. Any ApiVersion not specified here will be set to the current client value. @return A new NodeApiVersions object.
java
clients/src/main/java/org/apache/kafka/clients/NodeApiVersions.java
72
[ "overrides" ]
NodeApiVersions
true
3
8.08
apache/kafka
31,560
javadoc
false
atMost
public boolean atMost(final JavaVersion requiredVersion) { return this.value <= requiredVersion.value; }
Tests whether this version of Java is at most the version of Java passed in. <p> For example: </p> <pre> {@code myVersion.atMost(JavaVersion.JAVA_1_4) }</pre> @param requiredVersion the version to check against, not null. @return true if this version is less than or equal to the specified version. @since 3.9
java
src/main/java/org/apache/commons/lang3/JavaVersion.java
400
[ "requiredVersion" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
isEnabled
private static boolean isEnabled() { if (enabled == Enabled.DETECT) { if (ansiCapable == null) { ansiCapable = detectIfAnsiCapable(); } return ansiCapable; } return enabled == Enabled.ALWAYS; }
Return whether ANSI output is enabled, lazily detecting console capability when the enabled state is {@code DETECT}.
java
core/spring-boot/src/main/java/org/springframework/boot/ansi/AnsiOutput.java
146
[]
true
3
7.76
spring-projects/spring-boot
79,428
javadoc
false
toAddrString
public static String toAddrString(InetAddress ip) { checkNotNull(ip); if (ip instanceof Inet4Address) { // For IPv4, Java's formatting is good enough. // requireNonNull accommodates Android's @RecentlyNullable annotation on getHostAddress return requireNonNull(ip.getHostAddress()); } byte[] bytes = ip.getAddress(); int[] hextets = new int[IPV6_PART_COUNT]; for (int i = 0; i < hextets.length; i++) { hextets[i] = Ints.fromBytes((byte) 0, (byte) 0, bytes[2 * i], bytes[2 * i + 1]); } compressLongestRunOfZeroes(hextets); return hextetsToIPv6String(hextets) + scopeWithDelimiter((Inet6Address) ip); }
Returns the string representation of an {@link InetAddress}. <p>For IPv4 addresses, this is identical to {@link InetAddress#getHostAddress()}, but for IPv6 addresses, the output follows <a href="http://tools.ietf.org/html/rfc5952">RFC 5952</a> section 4. The main difference is that this method uses "::" for zero compression, while Java's version uses the uncompressed form (except on Android, where the zero compression is also done). The other difference is that this method outputs any scope ID in the format that it was provided at creation time, while Android may always output it as an interface name, even if it was supplied as a numeric ID. <p>This method uses hexadecimal for all IPv6 addresses, including IPv4-mapped IPv6 addresses such as "::c000:201". @param ip {@link InetAddress} to be converted to an address string @return {@code String} containing the text-formatted IP address @since 10.0
java
android/guava/src/com/google/common/net/InetAddresses.java
466
[ "ip" ]
String
true
3
8.08
google/guava
51,352
javadoc
false
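For comparison, Python's standard library applies the same RFC 5952 zero-compression that the Guava method documents for IPv6 (scope IDs aside). This is a stdlib analogue, not the Guava algorithm itself:

```python
import ipaddress

def to_addr_string(ip_text):
    """Normalize an IP address string: IPv4 passes through unchanged,
    IPv6 is compressed with "::" over the longest zero run per RFC 5952."""
    return str(ipaddress.ip_address(ip_text))
```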
southPolarH3Address
public static String southPolarH3Address(int res) { return h3ToString(southPolarH3(res)); }
Find the h3 address containing the South Pole at the given resolution. @param res the provided resolution. @return the h3 address containing the South Pole.
java
libs/h3/src/main/java/org/elasticsearch/h3/H3.java
573
[ "res" ]
String
true
1
6.96
elastic/elasticsearch
75,680
javadoc
false
builder
public static <O> Builder<O> builder() { return new Builder<>(); }
Creates a new builder. @param <O> the wrapped object type. @return a new builder. @since 3.18.0
java
src/main/java/org/apache/commons/lang3/concurrent/locks/LockingVisitors.java
512
[]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
underNodeModules
function underNodeModules(url) { if (url.protocol !== 'file:') { return false; } // We determine module types for other protocols based on MIME header return StringPrototypeIncludes(url.pathname, '/node_modules/'); }
Determine whether the given file URL is under a `node_modules` folder. This function assumes that the input has already been verified to be a `file:` URL, and is a file rather than a folder. @param {URL} url @returns {boolean}
javascript
lib/internal/modules/esm/get_format.js
87
[ "url" ]
false
2
6.16
nodejs/node
114,839
jsdoc
false
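The same check reads naturally in Python with `urllib.parse`; the helper name is illustrative, and the MIME-header fallback for non-`file:` protocols mentioned in the comment is out of scope here:

```python
from urllib.parse import urlparse

def under_node_modules(url):
    """Return True when a file: URL's path contains a node_modules
    segment; any other protocol is rejected, as in the Node.js helper."""
    parsed = urlparse(url)
    if parsed.scheme != "file":
        return False
    return "/node_modules/" in parsed.path
```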
_remove_empty_lines
def _remove_empty_lines(self, lines: list[list[T]]) -> list[list[T]]: """ Iterate through the lines and remove any that are either empty or contain only one whitespace value Parameters ---------- lines : list of list of Scalars The array of lines that we are to filter. Returns ------- filtered_lines : list of list of Scalars The same array of lines with the "empty" ones removed. """ # Remove empty lines and lines with only one whitespace value ret = [ line for line in lines if ( len(line) > 1 or ( len(line) == 1 and (not isinstance(line[0], str) or line[0].strip()) ) ) ] return ret
Iterate through the lines and remove any that are either empty or contain only one whitespace value Parameters ---------- lines : list of list of Scalars The array of lines that we are to filter. Returns ------- filtered_lines : list of list of Scalars The same array of lines with the "empty" ones removed.
python
pandas/io/parsers/python_parser.py
1,041
[ "self", "lines" ]
list[list[T]]
true
4
6.88
pandas-dev/pandas
47,362
numpy
false
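The filter condition above stands alone as a one-expression sketch (hypothetical name, no pandas types): a row survives unless it is empty or holds a single value that is a blank string.

```python
def remove_empty_lines(lines):
    """Pure-Python restatement of the pandas helper's filter: keep a row
    when it has more than one field, or its single field is either a
    non-string or a string with non-whitespace content."""
    return [
        line
        for line in lines
        if len(line) > 1
        or (len(line) == 1 and (not isinstance(line[0], str) or line[0].strip()))
    ]
```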
create
public static ExponentialHistogramGenerator create(int maxBucketCount, ExponentialHistogramCircuitBreaker circuitBreaker) { long size = estimateBaseSize(maxBucketCount); circuitBreaker.adjustBreaker(size); try { return new ExponentialHistogramGenerator(maxBucketCount, circuitBreaker); } catch (RuntimeException e) { circuitBreaker.adjustBreaker(-size); throw e; } }
Creates a new instance with the specified maximum number of buckets. @param maxBucketCount the maximum number of buckets for the generated histogram @param circuitBreaker the circuit breaker to use to limit memory allocations
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ExponentialHistogramGenerator.java
63
[ "maxBucketCount", "circuitBreaker" ]
ExponentialHistogramGenerator
true
2
6.4
elastic/elasticsearch
75,680
javadoc
false
create
public static ZeroBucket create(long index, int scale, long count) { if (index == MINIMAL_EMPTY.index && scale == MINIMAL_EMPTY.scale) { return minimalWithCount(count); } return new ZeroBucket(index, scale, count); }
Creates a zero bucket from the given threshold represented as exponentially scaled number. @param index the index of the exponentially scaled number defining the zero threshold @param scale the corresponding scale for the index @param count the number of values in the bucket @return the new {@link ZeroBucket}
java
libs/exponential-histogram/src/main/java/org/elasticsearch/exponentialhistogram/ZeroBucket.java
142
[ "index", "scale", "count" ]
ZeroBucket
true
3
7.6
elastic/elasticsearch
75,680
javadoc
false
addCopies
@CanIgnoreReturnValue @Override public Builder<E> addCopies(E element, int occurrences) { super.addCopies(element, occurrences); return this; }
Adds a number of occurrences of an element to this {@code ImmutableSortedMultiset}. @param element the element to add @param occurrences the number of occurrences of the element to add. May be zero, in which case no change will be made. @return this {@code Builder} object @throws NullPointerException if {@code element} is null @throws IllegalArgumentException if {@code occurrences} is negative, or if this operation would result in more than {@link Integer#MAX_VALUE} occurrences of the element
java
guava/src/com/google/common/collect/ImmutableSortedMultiset.java
504
[ "element", "occurrences" ]
true
1
6.56
google/guava
51,352
javadoc
false
generate_value_label
def generate_value_label(self, byteorder: str) -> bytes: """ Generate the binary representation of the value labels. Parameters ---------- byteorder : str Byte order of the output Returns ------- value_label : bytes Bytes containing the formatted value label """ encoding = self._encoding bio = BytesIO() null_byte = b"\x00" # len bio.write(struct.pack(byteorder + "i", self.len)) # labname labname = str(self.labname)[:32].encode(encoding) lab_len = 32 if encoding not in ("utf-8", "utf8") else 128 labname = _pad_bytes(labname, lab_len + 1) bio.write(labname) # padding - 3 bytes for i in range(3): bio.write(struct.pack("c", null_byte)) # value_label_table # n - int32 bio.write(struct.pack(byteorder + "i", self.n)) # textlen - int32 bio.write(struct.pack(byteorder + "i", self.text_len)) # off - int32 array (n elements) for offset in self.off: bio.write(struct.pack(byteorder + "i", offset)) # val - int32 array (n elements) for value in self.val: bio.write(struct.pack(byteorder + "i", value)) # txt - Text labels, null terminated for text in self.txt: bio.write(text + null_byte) return bio.getvalue()
Generate the binary representation of the value labels. Parameters ---------- byteorder : str Byte order of the output Returns ------- value_label : bytes Bytes containing the formatted value label
python
pandas/io/stata.py
631
[ "self", "byteorder" ]
bytes
true
6
6.56
pandas-dev/pandas
47,362
numpy
false
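The record layout that `generate_value_label` writes — length header, padded label name, three padding bytes, then the n/textlen counts followed by the offset, value, and null-terminated text tables — can be sketched with `struct` and `BytesIO`. Field widths and the leading length are illustrative here, not a byte-exact reproduction of the pandas writer:

```python
import struct
from io import BytesIO

def pack_value_label(byteorder, labname, offsets, values, texts):
    """Pack a value-label-style record: int32 fields use the given
    byteorder ("<" or ">"), the label name is truncated to 32 bytes and
    null-padded to 33, and texts are written null-terminated."""
    bio = BytesIO()
    txt_blob = b"".join(t.encode("utf-8") + b"\x00" for t in texts)
    bio.write(struct.pack(byteorder + "i", len(txt_blob)))   # length header
    bio.write(labname.encode("utf-8")[:32].ljust(33, b"\x00"))
    bio.write(b"\x00" * 3)                                   # padding
    bio.write(struct.pack(byteorder + "i", len(values)))     # n
    bio.write(struct.pack(byteorder + "i", len(txt_blob)))   # textlen
    for off in offsets:
        bio.write(struct.pack(byteorder + "i", off))
    for val in values:
        bio.write(struct.pack(byteorder + "i", val))
    bio.write(txt_blob)
    return bio.getvalue()
```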
split
public static String[] split(final String str, final char separatorChar) { return splitWorker(str, separatorChar, false); }
Splits the provided text into an array, separator specified. This is an alternative to using StringTokenizer. <p> The separator is not included in the returned String array. Adjacent separators are treated as one separator. For more control over the split use the StrTokenizer class. </p> <p> A {@code null} input String returns {@code null}. </p> <pre> StringUtils.split(null, *) = null StringUtils.split("", *) = [] StringUtils.split("a.b.c", '.') = ["a", "b", "c"] StringUtils.split("a..b.c", '.') = ["a", "b", "c"] StringUtils.split("a:b:c", '.') = ["a:b:c"] StringUtils.split("a b c", ' ') = ["a", "b", "c"] </pre> @param str the String to parse, may be null. @param separatorChar the character used as the delimiter. @return an array of parsed Strings, {@code null} if null String input. @since 2.0
java
src/main/java/org/apache/commons/lang3/StringUtils.java
7,064
[ "str", "separatorChar" ]
true
1
6.8
apache/commons-lang
2,896
javadoc
false
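The documented behavior — null in, null out; empty string to empty array; adjacent separators treated as one — can be mimicked in Python by filtering out the empty parts a plain split produces (`split_collapsing` is an illustrative name):

```python
def split_collapsing(s, sep):
    """Split s on sep, collapsing runs of adjacent separators, so the
    result matches the contract documented for StringUtils.split."""
    if s is None:
        return None
    return [part for part in s.split(sep) if part]
```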
always
static PropertySourceOptions always(Option... options) { return always(Options.of(options)); }
Create a new {@link PropertySourceOptions} instance that always returns the same options regardless of the property source. @param options the options to return @return a new {@link PropertySourceOptions} instance
java
core/spring-boot/src/main/java/org/springframework/boot/context/config/ConfigData.java
133
[]
PropertySourceOptions
true
1
6.16
spring-projects/spring-boot
79,428
javadoc
false
offsetsForTimes
@Override public Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch) { return delegate.offsetsForTimes(timestampsToSearch); }
Look up the offsets for the given partitions by timestamp. The returned offset for each partition is the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition. This is a blocking call. The consumer does not have to be assigned the partitions. If the message format version in a partition is before 0.10.0, i.e. the messages do not have timestamps, null will be returned for that partition. @param timestampsToSearch the mapping from partition to the timestamp to look up. @return a mapping from partition to the timestamp and offset of the first message with timestamp greater than or equal to the target timestamp. If the timestamp and offset for a specific partition cannot be found within the default timeout, and no corresponding message exists, the entry in the returned map will be {@code null} @throws org.apache.kafka.common.errors.AuthenticationException if authentication fails. See the exception for more details @throws org.apache.kafka.common.errors.AuthorizationException if not authorized to the topic(s). See the exception for more details @throws IllegalArgumentException if the target timestamp is negative @throws org.apache.kafka.common.errors.TimeoutException if the offset metadata could not be fetched before the amount of time allocated by {@code default.api.timeout.ms} expires. @throws org.apache.kafka.common.errors.UnsupportedVersionException if the broker does not support looking up the offsets by timestamp
java
clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java
1,586
[ "timestampsToSearch" ]
true
1
6.32
apache/kafka
31,560
javadoc
false
english_lower
def english_lower(s): """ Apply English case rules to convert ASCII strings to all lower case. This is an internal utility function to replace calls to str.lower() such that we can avoid changing behavior with changing locales. In particular, Turkish has distinct dotted and dotless variants of the Latin letter "I" in both lowercase and uppercase. Thus, "I".lower() != "i" in a "tr" locale. Parameters ---------- s : str Returns ------- lowered : str Examples -------- >>> from numpy._core.numerictypes import english_lower >>> english_lower('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz0123456789_' >>> english_lower('') '' """ lowered = s.translate(LOWER_TABLE) return lowered
Apply English case rules to convert ASCII strings to all lower case. This is an internal utility function to replace calls to str.lower() such that we can avoid changing behavior with changing locales. In particular, Turkish has distinct dotted and dotless variants of the Latin letter "I" in both lowercase and uppercase. Thus, "I".lower() != "i" in a "tr" locale. Parameters ---------- s : str Returns ------- lowered : str Examples -------- >>> from numpy._core.numerictypes import english_lower >>> english_lower('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz0123456789_' >>> english_lower('') ''
python
numpy/_core/_string_helpers.py
16
[ "s" ]
false
1
6.16
numpy/numpy
31,054
numpy
false
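The same locale-independent trick works in plain Python with `str.maketrans`: the table covers only ASCII A-Z, so non-ASCII code points, including the Turkish dotted capital I, pass through untouched.

```python
# Translation table covering only ASCII A-Z; everything else is left as-is.
_ASCII_LOWER = str.maketrans(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
    "abcdefghijklmnopqrstuvwxyz",
)

def english_lower_sketch(s):
    """Locale-independent ASCII lowering, mirroring numpy's helper."""
    return s.translate(_ASCII_LOWER)
```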
replace_by_example
def replace_by_example( self, replacement_fn: ReplaceFn, args: Sequence[Any], trace_fn: Optional[TraceFn] = None, run_functional_passes: bool = True, ) -> None: """Replace with a graph generated by tracing the replacement_fn. Args: run_functional_passes (bool). If we should run passes that assume functional IR (like DCE, remove_noop_ops), on the replacement graph. """ from torch._inductor.virtualized import NullHandler, V context = ( V.fake_mode if (not isinstance(V.fake_mode, NullHandler) or (V.fake_mode is None)) else contextlib.nullcontext() ) def should_propagate_eager_input_vals(nodes: list[torch.fx.Node]) -> bool: if len(nodes) != 1: return False node = nodes[0] if "eager_input_vals" not in node.meta: return False return node.target in OrderedSet( [ torch.ops.higher_order.triton_kernel_wrapper_functional, torch.ops.higher_order.auto_functionalized, torch.ops.higher_order.auto_functionalized_v2, ] ) # pyrefly: ignore [bad-context-manager] with context: if trace_fn is None: trace_fn = functools.partial( fwd_only, run_functional_passes=run_functional_passes ) if should_propagate_eager_input_vals(self.nodes): # Our strategy is: # 1) trace out the graph with eager_input_vals (which have accurate eager-mode metadata) # 2) trace out the graph with vals (which have the accurate Inductor metadata) # 3) Propagate the eager_input_vals from the first graph to the second. # 4) Use the second graph as the replacement graph. # Construct a map of node -> FakeTensor val in eager_input_vals node_to_val = {} fake_args, fake_kwargs = self.nodes[0].meta["eager_input_vals"] fake_kwargs = {**fake_kwargs} match_args, match_kwargs = tuple(self.args), self.kwargs def record(node: torch.fx.Node, val: Any) -> None: if isinstance(node, torch.fx.Node): node_to_val[node] = val torch.utils._pytree.tree_map( record, (match_args, match_kwargs), (fake_args, fake_kwargs) ) # map args to their FakeTensor val in eager_input_vals example_vals = torch.fx.map_arg(args, lambda arg: node_to_val[arg]) # first graph graph_with_eager_vals = trace_fn(replacement_fn, example_vals) # second graph example_vals = torch.fx.map_arg(args, lambda arg: arg.meta["val"]) # pyrefly: ignore [bad-argument-type] replacement = trace_fn(graph_with_eager_vals, example_vals) # propagate metadata from first graph to second # NB: This assertion might not be true in general, but it is true for # the two use cases we have # (triton_kernel_wrapper_functional, auto_functionalized) assert len(graph_with_eager_vals.graph.nodes) == len( replacement.graph.nodes ) for old_node, new_node in zip( graph_with_eager_vals.graph.nodes, replacement.graph.nodes ): if "eager_input_vals" in old_node.meta: new_node.meta["eager_input_vals"] = old_node.meta[ "eager_input_vals" ] else: example_vals = torch.fx.map_arg( args, lambda arg: arg.meta["val"] if "val" in arg.meta else arg.meta["example_value"], ) replacement = trace_fn(replacement_fn, example_vals) if len(self.nodes) == 1: for n in replacement.graph.nodes: _transfer_meta( new_meta=n.meta, old_node=self.nodes[0], pass_name="replace_by_example", ) ReplacementPatternEntry.replace_with_graph( self, self.ctx.graph, replacement, args, )
Replace with a graph generated by tracing the replacement_fn. Args: run_functional_passes (bool). If we should run passes that assume functional IR (like DCE, remove_noop_ops), on the replacement graph.
python
torch/_inductor/pattern_matcher.py
236
[ "self", "replacement_fn", "args", "trace_fn", "run_functional_passes" ]
None
true
14
6.48
pytorch/pytorch
96,034
google
false
register_module_as_pytree_input_node
def register_module_as_pytree_input_node(cls: type[torch.nn.Module]) -> None: """ Registers a module as a valid input type for :func:`torch.export.export`. Args: mod: the module instance serialized_type_name: The serialized name for the module. This is required if you want to serialize the pytree TreeSpec containing this module. Example:: import torch class Module(torch.nn.Module): def __init__(self): super().__init__() self.linear = torch.nn.Linear(3, 3) def forward(self, x): return self.linear(x) torch._export.utils.register_module_as_pytree_node(InputDataClass) class Mod(torch.nn.Module): def forward(self, x, m): return m(x) + x ep = torch.export.export(Mod(), (torch.randn(3), Module())) print(ep) """ assert issubclass(cls, torch.nn.Module) import weakref class PrototypeModule(weakref.ref): def __init__(self, m, *args, **kwargs): super().__init__(m, *args, **kwargs) # type: ignore[call-arg] assert isinstance(m, torch.nn.Module) assert not hasattr(self, "_proto_cls") self._proto_cls = cls def __eq__(self, other): return self._proto_cls == other._proto_cls def __deepcopy__(self, memo): return PrototypeModule(self()) def default_flatten_fn(obj: Any) -> tuple[list[Any], Context]: named_parameters = dict(obj.named_parameters()) named_buffers = dict(obj.named_buffers()) params_buffers = {**named_parameters, **named_buffers} return list(params_buffers.values()), [ list(params_buffers.keys()), PrototypeModule(obj), ] def default_unflatten_fn(values: Iterable[Any], context: Context) -> Any: flat_names, ref = context if ref is None or ref() is None: raise RuntimeError("Module has been garbage collected") obj = ref() assert flatten_fn is not None flattened, _ = flatten_fn(obj) # NOTE: This helper function will replicate an nn.Module in the exactly same # structure to be used together with _reparameterize_module. This will # create a clone of the module with the new parameters and buffers without # affecting the original module. def copy_module(mod: torch.nn.Module): ret = copy.copy(mod) ret.__dict__ = {copy.copy(k): copy.copy(v) for k, v in mod.__dict__.items()} for name, child in ret.named_children(): setattr(ret, name, copy_module(child)) return ret if any(v is not o for v, o in zip(values, flattened)): with torch.nn.utils.stateless._reparametrize_module( obj, dict(zip(flat_names, values)), tie_weights=True, strict=True ): ret = copy_module(obj) else: ret = obj return ret def default_flatten_fn_with_keys(obj: Any) -> tuple[list[Any], Context]: flattened, [flat_names, *args] = flatten_fn(obj) # type: ignore[misc] return [(MappingKey(k), v) for k, v in zip(flat_names, flattened)], [ flat_names, *args, ] flatten_fn = default_flatten_fn unflatten_fn = default_unflatten_fn serialized_type_name = cls.__module__ + "." + cls.__qualname__ def to_dumpable_context(context): keys, *_ = context return json.dumps([keys, *([None] * len(_))]) def from_dumpable_context(dumpable): s = json.loads(dumpable) s[1] = PrototypeModule(torch.nn.Module()) return s _register_pytree_node( cls, flatten_fn, unflatten_fn, serialized_type_name=serialized_type_name, flatten_with_keys_fn=default_flatten_fn_with_keys, to_dumpable_context=to_dumpable_context, from_dumpable_context=from_dumpable_context, ) def default_flatten_fn_spec(obj, spec) -> list[Any]: flats, context = flatten_fn(obj) assert context == spec.context return flats register_pytree_flatten_spec( cls, default_flatten_fn_spec, )
Registers a module as a valid input type for :func:`torch.export.export`.

Args:
    cls: the module class to register

Example::

    import torch

    class Module(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(3, 3)

        def forward(self, x):
            return self.linear(x)

    torch._export.utils.register_module_as_pytree_input_node(Module)

    class Mod(torch.nn.Module):
        def forward(self, x, m):
            return m(x) + x

    ep = torch.export.export(Mod(), (torch.randn(3), Module()))
    print(ep)
python
torch/_export/utils.py
1,430
[ "cls" ]
None
true
6
6.16
pytorch/pytorch
96,034
google
false
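The record above registers an `nn.Module` subclass with a flatten function (object → leaf values plus static context) and an unflatten function (leaf values plus context → object). The contract can be sketched framework-free; the registry, `Point` class, and helper names below are illustrative stand-ins, not part of the torch API:

```python
# Hypothetical, framework-free sketch of the flatten/unflatten contract that
# register_module_as_pytree_input_node wires up: a type is registered with a
# pair of functions that split an object into leaf values plus static context,
# and rebuild it from them.

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

_registry = {}

def register_pytree_node(cls, flatten_fn, unflatten_fn):
    # Map the type to its (flatten, unflatten) pair, as _register_pytree_node does.
    _registry[cls] = (flatten_fn, unflatten_fn)

def tree_flatten(obj):
    flatten_fn, _ = _registry[type(obj)]
    return flatten_fn(obj)

def tree_unflatten(cls, values, context):
    _, unflatten_fn = _registry[cls]
    return unflatten_fn(values, context)

# Flatten a Point into its leaf values; the context records the field names,
# mirroring how the module record stores parameter/buffer names.
register_pytree_node(
    Point,
    lambda p: ([p.x, p.y], ["x", "y"]),
    lambda values, context: Point(**dict(zip(context, values))),
)
```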
visitFunctionExpression
function visitFunctionExpression(node: FunctionExpression): Expression {
    // Currently, we only support generators that were originally async functions.
    if (node.asteriskToken) {
        node = setOriginalNode(
            setTextRange(
                factory.createFunctionExpression(
                    /*modifiers*/ undefined,
                    /*asteriskToken*/ undefined,
                    node.name,
                    /*typeParameters*/ undefined,
                    visitParameterList(node.parameters, visitor, context),
                    /*type*/ undefined,
                    transformGeneratorFunctionBody(node.body),
                ),
                /*location*/ node,
            ),
            node,
        );
    } else {
        const savedInGeneratorFunctionBody = inGeneratorFunctionBody;
        const savedInStatementContainingYield = inStatementContainingYield;
        inGeneratorFunctionBody = false;
        inStatementContainingYield = false;
        node = visitEachChild(node, visitor, context);
        inGeneratorFunctionBody = savedInGeneratorFunctionBody;
        inStatementContainingYield = savedInStatementContainingYield;
    }
    return node;
}
Visits a function expression.

This will be called when one of the following conditions is met:
- The function expression is a generator function.
- The function expression is contained within the body of a generator function.

@param node The node to visit.
typescript
src/compiler/transformers/generators.ts
601
[ "node" ]
true
3
6.88
microsoft/TypeScript
107,154
jsdoc
true
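The `else` branch of the function above uses a save/visit/restore idiom: per-scope flags are saved, reset for the nested function, then restored so sibling nodes see the outer state. A language-agnostic sketch (in Python, since the records mix languages; the node shape here is hypothetical):

```python
# Sketch of the save/visit/restore pattern from visitFunctionExpression:
# a traversal flag is saved, changed for the subtree, and restored on exit.

in_generator_body = False
visited = []  # records (node kind, flag value at visit time)

def visit(node):
    global in_generator_body
    kind, children = node["kind"], node.get("children", [])
    if kind == "generator":
        # Entering a generator body: the flag is set for this subtree only.
        saved = in_generator_body
        in_generator_body = True
        for child in children:
            visit(child)
        in_generator_body = saved  # restore on exit, as the TS code does
    else:
        visited.append((kind, in_generator_body))
        for child in children:
            visit(child)

tree = {
    "kind": "module",
    "children": [
        {"kind": "generator", "children": [{"kind": "yield"}]},
        {"kind": "call"},  # sibling after the generator: outer state restored
    ],
}
visit(tree)
```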
enableSubstitutionForAsyncMethodsWithSuper
function enableSubstitutionForAsyncMethodsWithSuper() {
    if ((enabledSubstitutions & ES2017SubstitutionFlags.AsyncMethodsWithSuper) === 0) {
        enabledSubstitutions |= ES2017SubstitutionFlags.AsyncMethodsWithSuper;

        // We need to enable substitutions for call, property access, and element access
        // if we need to rewrite super calls.
        context.enableSubstitution(SyntaxKind.CallExpression);
        context.enableSubstitution(SyntaxKind.PropertyAccessExpression);
        context.enableSubstitution(SyntaxKind.ElementAccessExpression);

        // We need to be notified when entering and exiting declarations that bind super.
        context.enableEmitNotification(SyntaxKind.ClassDeclaration);
        context.enableEmitNotification(SyntaxKind.MethodDeclaration);
        context.enableEmitNotification(SyntaxKind.GetAccessor);
        context.enableEmitNotification(SyntaxKind.SetAccessor);
        context.enableEmitNotification(SyntaxKind.Constructor);

        // We need to be notified when entering the generated accessor arrow functions.
        context.enableEmitNotification(SyntaxKind.VariableStatement);
    }
}
Visits an ArrowFunction.

This function will be called when one of the following conditions is met:
- The node is marked async

@param node The node to visit.
typescript
src/compiler/transformers/es2017.ts
898
[]
false
2
6.08
microsoft/TypeScript
107,154
jsdoc
false
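The function above uses an "enable exactly once" bit-flag idiom: a bitmask records which substitutions are already on, so repeated calls skip the registration side effects. A minimal sketch, transposed to Python with illustrative flag and hook names:

```python
# Sketch of the enable-once bit-flag pattern from
# enableSubstitutionForAsyncMethodsWithSuper: a bitmask gates the
# registration side effects so they run at most once.
from enum import IntFlag

class SubstitutionFlags(IntFlag):
    NONE = 0
    ASYNC_METHODS_WITH_SUPER = 1

enabled_substitutions = SubstitutionFlags.NONE
registrations = []  # stands in for context.enableSubstitution calls

def enable_substitution_for_async_methods_with_super():
    global enabled_substitutions
    if (enabled_substitutions & SubstitutionFlags.ASYNC_METHODS_WITH_SUPER) == 0:
        enabled_substitutions |= SubstitutionFlags.ASYNC_METHODS_WITH_SUPER
        # Registration side effects run only on the first call.
        registrations.append("CallExpression")
        registrations.append("PropertyAccessExpression")

enable_substitution_for_async_methods_with_super()
enable_substitution_for_async_methods_with_super()  # no-op: flag already set
```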
apply_patch_with_update_mask
def apply_patch_with_update_mask(
    model: DeclarativeMeta,
    patch_body: BaseModel,
    update_mask: list[str] | None,
    non_update_fields: set[str] | None = None,
) -> DeclarativeMeta:
    """
    Apply a patch to the given model using the provided update mask.

    :param model: The SQLAlchemy model instance to update.
    :param patch_body: Pydantic model containing patch data.
    :param update_mask: Optional list of fields to update.
    :param non_update_fields: Fields that should not be updated.
    :return: The updated SQLAlchemy model instance.
    :raises HTTPException: If invalid fields are provided in update_mask.
    """
    # Always dump without aliases for internal validation
    raw_data = patch_body.model_dump(by_alias=False)
    fields_to_update = set(patch_body.model_fields_set)
    non_update_fields = non_update_fields or set()

    if update_mask:
        restricted_in_mask = set(update_mask).intersection(non_update_fields)
        if restricted_in_mask:
            raise HTTPException(
                status_code=status.HTTP_400_BAD_REQUEST,
                detail=f"Update not allowed: the following fields are immutable and cannot be modified: {restricted_in_mask}",
            )
        fields_to_update = fields_to_update.intersection(update_mask)

    if non_update_fields:
        fields_to_update = fields_to_update - non_update_fields

    validated_data = {key: raw_data[key] for key in fields_to_update if key in raw_data}
    data = patch_body.model_dump(include=set(validated_data.keys()), by_alias=True)

    # Update the model with the validated data
    for key, value in data.items():
        setattr(model, key, value)
    return model
Apply a patch to the given model using the provided update mask. :param model: The SQLAlchemy model instance to update. :param patch_body: Pydantic model containing patch data. :param update_mask: Optional list of fields to update. :param non_update_fields: Fields that should not be updated. :return: The updated SQLAlchemy model instance. :raises HTTPException: If invalid fields are provided in update_mask.
python
airflow-core/src/airflow/api_fastapi/core_api/services/public/common.py
80
[ "model", "patch_body", "update_mask", "non_update_fields" ]
DeclarativeMeta
true
6
8.08
apache/airflow
43,597
sphinx
false
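The update-mask semantics above can be shown without SQLAlchemy or Pydantic: only fields named in the mask are applied, immutable fields named in the mask are rejected, and immutable fields are otherwise silently skipped. A minimal sketch on plain dicts (the helper name and `ValueError` are stand-ins for the record's `HTTPException`):

```python
# Dict-based sketch of apply_patch_with_update_mask's masking rules.
def apply_patch(model: dict, patch: dict, update_mask=None, non_update_fields=None):
    non_update_fields = set(non_update_fields or ())
    fields = set(patch)
    if update_mask:
        # Reject an explicit request to change an immutable field.
        restricted = set(update_mask) & non_update_fields
        if restricted:
            raise ValueError(f"immutable fields in mask: {sorted(restricted)}")
        fields &= set(update_mask)
    # Immutable fields not named in the mask are silently dropped.
    fields -= non_update_fields
    for key in fields:
        model[key] = patch[key]
    return model
```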
poll
@SuppressWarnings("UnusedReturnValue")
ShareCompletedFetch poll() {
    lock.lock();
    try {
        return completedFetches.poll();
    } finally {
        lock.unlock();
    }
}
Returns {@code true} if there are no completed fetches pending to return to the user. @return {@code true} if the buffer is empty, {@code false} otherwise
java
clients/src/main/java/org/apache/kafka/clients/consumer/internals/ShareFetchBuffer.java
113
[]
ShareCompletedFetch
true
1
7.04
apache/kafka
31,560
javadoc
false
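The Java method above follows the lock/try/finally idiom so the lock is released on every exit path. A Python analogue, assuming a deque of completed fetches guarded by a `threading.Lock` (names are illustrative, not the Kafka API):

```python
# Sketch of a lock-guarded poll, mirroring ShareFetchBuffer.poll's shape.
import threading
from collections import deque

class FetchBuffer:
    def __init__(self):
        self._lock = threading.Lock()
        self._completed = deque()

    def add(self, fetch):
        with self._lock:
            self._completed.append(fetch)

    def poll(self):
        # `with` releases the lock on every path, like Java's try/finally unlock.
        with self._lock:
            return self._completed.popleft() if self._completed else None

buf = FetchBuffer()
buf.add("fetch-1")
buf.add("fetch-2")
```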
getTimeZone
public static TimeZone getTimeZone(final String id) {
    final TimeZone tz = getGmtTimeZone(id);
    if (tz != null) {
        return tz;
    }
    return TimeZones.getTimeZone(id);
}
Gets a TimeZone, looking first for GMT custom ids, then falling back to Olson ids. A GMT custom id can be 'Z', or 'UTC', or has an optional prefix of GMT, followed by sign, hours digit(s), optional colon(':'), and optional minutes digits. i.e. <em>[GMT] (+|-) Hours [[:] Minutes]</em> @param id A GMT custom id (or Olson id) @return A time zone
java
src/main/java/org/apache/commons/lang3/time/FastTimeZone.java
75
[ "id" ]
TimeZone
true
2
7.84
apache/commons-lang
2,896
javadoc
false
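The lookup order documented above — parse a GMT custom id into a fixed offset first, fall back to a named-zone table otherwise — can be sketched in Python. The regex and the fallback table below are illustrative assumptions, not Commons Lang internals:

```python
# Sketch of FastTimeZone.getTimeZone's lookup order for a
# "[GMT] (+|-) Hours [[:] Minutes]" custom id, with a named-zone fallback.
import re

_GMT_ID = re.compile(r"^(?:GMT)?([+-])(\d{1,2})(?::?(\d{2}))?$")
_NAMED = {"Europe/Paris": "named:Europe/Paris"}  # stand-in for TimeZones.getTimeZone

def get_offset_minutes(tz_id):
    """Return a fixed offset in minutes for GMT custom ids, else None."""
    if tz_id in ("Z", "UTC", "GMT"):
        return 0
    m = _GMT_ID.match(tz_id)
    if m is None:
        return None
    sign = -1 if m.group(1) == "-" else 1
    return sign * (int(m.group(2)) * 60 + int(m.group(3) or 0))

def get_time_zone(tz_id):
    # GMT custom ids win; only unparseable ids reach the named-zone table.
    offset = get_offset_minutes(tz_id)
    if offset is not None:
        return f"offset:{offset:+d}"
    return _NAMED.get(tz_id)
```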
firstInFlightSequence
synchronized int firstInFlightSequence(TopicPartition topicPartition) {
    if (!hasInflightBatches(topicPartition))
        return RecordBatch.NO_SEQUENCE;

    ProducerBatch batch = nextBatchBySequence(topicPartition);
    return batch == null ? RecordBatch.NO_SEQUENCE : batch.baseSequence();
}
Returns the first inflight sequence for a given partition. This is the base sequence of an inflight batch with the lowest sequence number. @return the lowest inflight sequence if the transaction manager is tracking inflight requests for this partition. If there are no inflight requests being tracked for this partition, this method will return RecordBatch.NO_SEQUENCE.
java
clients/src/main/java/org/apache/kafka/clients/producer/internals/TransactionManager.java
712
[ "topicPartition" ]
true
3
6.72
apache/kafka
31,560
javadoc
false
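The contract documented above — return the lowest base sequence among in-flight batches for a partition, or a sentinel when none are tracked — can be sketched with plain Python structures. Names mirror the Kafka record loosely and are not its API:

```python
# Sketch of firstInFlightSequence: lowest in-flight base sequence per
# partition, with a NO_SEQUENCE sentinel when nothing is in flight.
NO_SEQUENCE = -1

in_flight = {}  # partition -> list of base sequences, kept sorted

def add_in_flight_batch(partition, base_sequence):
    in_flight.setdefault(partition, []).append(base_sequence)
    in_flight[partition].sort()

def first_in_flight_sequence(partition):
    batches = in_flight.get(partition)
    if not batches:
        return NO_SEQUENCE
    return batches[0]  # lowest base sequence

add_in_flight_batch("topic-0", 7)
add_in_flight_batch("topic-0", 3)
```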