Column types: repository (string, 11 classes) | repo_id (string, 1-3 chars) | target_module_path (string, 16-72 chars) | prompt (string, 298-21.7k chars) | relavent_test_path (string, 50-99 chars) | full_function (string, 336-33.8k chars) | function_name (string, 2-51 chars) | content_class (string, 3 classes) | external_dependencies (string, 2 classes)

| repository | repo_id | target_module_path | prompt | relavent_test_path | full_function | function_name | content_class | external_dependencies |
|---|---|---|---|---|---|---|---|---|
datasets | 2 | src/datasets/features/features.py | def int2str(self, values: Union[int, Iterable]) -> Union[str, Iterable]:
"""Conversion `integer` => class name `string`.
Regarding unknown/missing labels: passing negative integers raises `ValueError`.
Example:
```py
>>> from datasets import load_dataset
>>> ds = l... | /usr/src/app/target_test_cases/failed_tests_ClassLabel.int2str.txt | def int2str(self, values: Union[int, Iterable]) -> Union[str, Iterable]:
"""Conversion `integer` => class name `string`.
Regarding unknown/missing labels: passing negative integers raises `ValueError`.
Example:
```py
>>> from datasets import load_dataset
>>> ds = l... | ClassLabel.int2str | file-level | external |
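The `ClassLabel.int2str` row above documents an integer-to-class-name conversion in which negative (and, by extension, out-of-range) integers raise `ValueError`. A minimal pure-Python sketch of that documented contract, not the library's actual implementation (the real method lives on the `ClassLabel` feature class; the standalone `names` parameter here is an assumption standing in for the feature's name list):

```python
from typing import Iterable, List, Union

def int2str(names: List[str], values: Union[int, Iterable]) -> Union[str, List[str]]:
    # Sketch: map an integer label (or an iterable of them) to class-name
    # string(s); negative or out-of-range integers raise ValueError, as the
    # docstring above specifies.
    def one(v: int) -> str:
        if not 0 <= v < len(names):
            raise ValueError(f"Invalid integer class label {v}")
        return names[v]

    if isinstance(values, int):
        return one(values)
    return [one(v) for v in values]
```

Usage mirrors the scalar-or-iterable signature: `int2str(["neg", "pos"], 1)` returns `"pos"`, while a list input returns a list of names.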
datasets | 3 | src/datasets/dataset_dict.py | def flatten(self, max_depth=16) -> "DatasetDict":
"""Flatten the Apache Arrow Table of each split (nested features are flatten).
Each column with a struct type is flattened into one column per struct field.
Other columns are left unchanged.
Example:
```py
>>> from d... | /usr/src/app/target_test_cases/failed_tests_DatasetDict.flatten.txt | def flatten(self, max_depth=16) -> "DatasetDict":
"""Flatten the Apache Arrow Table of each split (nested features are flatten).
Each column with a struct type is flattened into one column per struct field.
Other columns are left unchanged.
Example:
```py
>>> from d... | DatasetDict.flatten | file-level | non_external |
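The `DatasetDict.flatten` row (and the `Features.flatten` row further down) describe the same scheme: each struct column becomes one column per struct field, named `<original>.<subfield>`, repeated up to `max_depth` times. A hedged sketch of that naming scheme on a plain dict standing in for one row (the real method operates on Arrow tables, not dicts):

```python
def flatten_row(row: dict, max_depth: int = 16) -> dict:
    # One flattening pass per depth level: dict-valued fields are replaced
    # by "<name>.<subfield>" keys; other fields are left unchanged, as the
    # docstring above states.
    for _ in range(max_depth):
        if not any(isinstance(v, dict) for v in row.values()):
            break
        flat = {}
        for name, value in row.items():
            if isinstance(value, dict):
                for sub, subvalue in value.items():
                    flat[f"{name}.{sub}"] = subvalue
            else:
                flat[name] = value
        row = flat
    return row
```

For example, `{"answers": {"text": [...], "start": [...]}}` flattens to the two columns `answers.text` and `answers.start`.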
datasets | 4 | src/datasets/dataset_dict.py | def push_to_hub(
self,
repo_id,
config_name: str = "default",
set_default: Optional[bool] = None,
data_dir: Optional[str] = None,
commit_message: Optional[str] = None,
commit_description: Optional[str] = None,
private: Optional[bool] = False,
t... | /usr/src/app/target_test_cases/failed_tests_DatasetDict.push_to_hub.txt | def push_to_hub(
self,
repo_id,
config_name: str = "default",
set_default: Optional[bool] = None,
data_dir: Optional[str] = None,
commit_message: Optional[str] = None,
commit_description: Optional[str] = None,
private: Optional[bool] = False,
t... | DatasetDict.push_to_hub | repository-level | external |
datasets | 5 | src/datasets/dataset_dict.py | def save_to_disk(
self,
dataset_dict_path: PathLike,
max_shard_size: Optional[Union[str, int]] = None,
num_shards: Optional[Dict[str, int]] = None,
num_proc: Optional[int] = None,
storage_options: Optional[dict] = None,
):
"""
Saves a dataset dict ... | /usr/src/app/target_test_cases/failed_tests_DatasetDict.save_to_disk.txt | def save_to_disk(
self,
dataset_dict_path: PathLike,
max_shard_size: Optional[Union[str, int]] = None,
num_shards: Optional[Dict[str, int]] = None,
num_proc: Optional[int] = None,
storage_options: Optional[dict] = None,
):
"""
Saves a dataset dict ... | DatasetDict.save_to_disk | repository-level | external |
datasets | 6 | src/datasets/info.py | def write_to_directory(self, dataset_info_dir, pretty_print=False, storage_options: Optional[dict] = None):
"""Write `DatasetInfo` and license (if present) as JSON files to `dataset_info_dir`.
Args:
dataset_info_dir (`str`):
Destination directory.
pretty_prin... | /usr/src/app/target_test_cases/failed_tests_DatasetInfo.write_to_directory.txt | def write_to_directory(self, dataset_info_dir, pretty_print=False, storage_options: Optional[dict] = None):
"""Write `DatasetInfo` and license (if present) as JSON files to `dataset_info_dir`.
Args:
dataset_info_dir (`str`):
Destination directory.
pretty_prin... | DatasetInfo.write_to_directory | repository-level | external |
datasets | 7 | src/datasets/download/download_manager.py | def download(self, url_or_urls):
"""Download given URL(s).
By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
Args:
url_or_urls (`str` or `list` or `dict`):
URL or `list` or `dict` of URLs to do... | /usr/src/app/target_test_cases/failed_tests_DownloadManager.download.txt | def download(self, url_or_urls):
"""Download given URL(s).
By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
Args:
url_or_urls (`str` or `list` or `dict`):
URL or `list` or `dict` of URLs to do... | DownloadManager.download | repository-level | external |
datasets | 8 | src/datasets/download/download_manager.py | def extract(self, path_or_paths):
"""Extract given path(s).
Args:
path_or_paths (path or `list` or `dict`):
Path of file to extract. Each path is a `str`.
Returns:
extracted_path(s): `str`, The extracted paths matching the given input
pat... | /usr/src/app/target_test_cases/failed_tests_DownloadManager.extract.txt | def extract(self, path_or_paths):
"""Extract given path(s).
Args:
path_or_paths (path or `list` or `dict`):
Path of file to extract. Each path is a `str`.
Returns:
extracted_path(s): `str`, The extracted paths matching the given input
pat... | DownloadManager.extract | repository-level | external |
datasets | 9 | src/datasets/download/download_manager.py | def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
"""Iterate over files within an archive.
Args:
path_or_buf (`str` or `io.BufferedReader`):
Archive path or archive binary file object.
Yields:
`tuple[str, io.BufferedReader]`:
... | /usr/src/app/target_test_cases/failed_tests_DownloadManager.iter_archive.txt | def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
"""Iterate over files within an archive.
Args:
path_or_buf (`str` or `io.BufferedReader`):
Archive path or archive binary file object.
Yields:
`tuple[str, io.BufferedReader]`:
... | DownloadManager.iter_archive | repository-level | external |
datasets | 10 | src/datasets/download/download_manager.py | def iter_files(self, paths: Union[str, List[str]]):
"""Iterate over file paths.
Args:
paths (`str` or `list` of `str`):
Root paths.
Yields:
`str`: File path.
Example:
```py
>>> files = dl_manager.download_and_extract('https:... | /usr/src/app/target_test_cases/failed_tests_DownloadManager.iter_files.txt | def iter_files(self, paths: Union[str, List[str]]):
"""Iterate over file paths.
Args:
paths (`str` or `list` of `str`):
Root paths.
Yields:
`str`: File path.
Example:
```py
>>> files = dl_manager.download_and_extract('https:... | DownloadManager.iter_files | repository-level | external |
datasets | 11 | src/datasets/features/features.py | def copy(self) -> "Features":
"""
Make a deep copy of [`Features`].
Returns:
[`Features`]
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> copy_of_features = ds.features.cop... | /usr/src/app/target_test_cases/failed_tests_Features.copy.txt | def copy(self) -> "Features":
"""
Make a deep copy of [`Features`].
Returns:
[`Features`]
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> copy_of_features = ds.features.cop... | Features.copy | file-level | external |
datasets | 12 | src/datasets/features/features.py | def encode_column(self, column, column_name: str):
"""
Encode column into a format for Arrow.
Args:
column (`list[Any]`):
Data in a Dataset column.
column_name (`str`):
Dataset column name.
Returns:
`list[Any]`
... | /usr/src/app/target_test_cases/failed_tests_Features.encode_column.txt | def encode_column(self, column, column_name: str):
"""
Encode column into a format for Arrow.
Args:
column (`list[Any]`):
Data in a Dataset column.
column_name (`str`):
Dataset column name.
Returns:
`list[Any]`
... | Features.encode_column | file-level | non_external |
datasets | 13 | src/datasets/features/features.py | def flatten(self, max_depth=16) -> "Features":
"""Flatten the features. Every dictionary column is removed and is replaced by
all the subfields it contains. The new fields are named by concatenating the
name of the original column and the subfield name like this: `<original>.<subfield>`.
... | /usr/src/app/target_test_cases/failed_tests_Features.flatten.txt | def flatten(self, max_depth=16) -> "Features":
"""Flatten the features. Every dictionary column is removed and is replaced by
all the subfields it contains. The new fields are named by concatenating the
name of the original column and the subfield name like this: `<original>.<subfield>`.
... | Features.flatten | file-level | non_external |
datasets | 14 | src/datasets/features/features.py | def reorder_fields_as(self, other: "Features") -> "Features":
"""
Reorder Features fields to match the field order of other [`Features`].
The order of the fields is important since it matters for the underlying arrow data.
Re-ordering the fields allows to make the underlying arrow d... | /usr/src/app/target_test_cases/failed_tests_Features.reorder_fields_as.txt | def reorder_fields_as(self, other: "Features") -> "Features":
"""
Reorder Features fields to match the field order of other [`Features`].
The order of the fields is important since it matters for the underlying arrow data.
Re-ordering the fields allows to make the underlying arrow d... | Features.reorder_fields_as | file-level | non_external |
datasets | 15 | src/datasets/table.py | def cast(self, *args, **kwargs):
"""
Cast table values to another schema.
Args:
target_schema (`Schema`):
Schema to cast to, the names and order of fields must match.
safe (`bool`, defaults to `True`):
Check for overflows or other unsa... | /usr/src/app/target_test_cases/failed_tests_InMemoryTable.cast.txt | def cast(self, *args, **kwargs):
"""
Cast table values to another schema.
Args:
target_schema (`Schema`):
Schema to cast to, the names and order of fields must match.
safe (`bool`, defaults to `True`):
Check for overflows or other unsa... | InMemoryTable.cast | file-level | non_external |
datasets | 16 | src/datasets/table.py | def slice(self, offset=0, length=None):
"""
Compute zero-copy slice of this Table.
Args:
offset (`int`, defaults to `0`):
Offset from start of table to slice.
length (`int`, defaults to `None`):
Length of slice (default is until end of... | /usr/src/app/target_test_cases/failed_tests_InMemoryTable.slice.txt | def slice(self, offset=0, length=None):
"""
Compute zero-copy slice of this Table.
Args:
offset (`int`, defaults to `0`):
Offset from start of table to slice.
length (`int`, defaults to `None`):
Length of slice (default is until end of... | InMemoryTable.slice | file-level | non_external |
datasets | 17 | src/datasets/iterable_dataset.py | def cast(
self,
features: Features,
) -> "IterableDataset":
"""
Cast the dataset to a new set of features.
Args:
features ([`Features`]):
New features to cast the dataset to.
The name of the fields in the features must match th... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.cast.txt | def cast(
self,
features: Features,
) -> "IterableDataset":
"""
Cast the dataset to a new set of features.
Args:
features ([`Features`]):
New features to cast the dataset to.
The name of the fields in the features must match th... | IterableDataset.cast | repository-level | external |
datasets | 18 | src/datasets/iterable_dataset.py | def cast_column(self, column: str, feature: FeatureType) -> "IterableDataset":
"""Cast column to feature for decoding.
Args:
column (`str`):
Column name.
feature (`Feature`):
Target feature.
Returns:
`IterableDataset`
... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.cast_column.txt | def cast_column(self, column: str, feature: FeatureType) -> "IterableDataset":
"""Cast column to feature for decoding.
Args:
column (`str`):
Column name.
feature (`Feature`):
Target feature.
Returns:
`IterableDataset`
... | IterableDataset.cast_column | repository-level | external |
datasets | 19 | src/datasets/iterable_dataset.py | def filter(
self,
function: Optional[Callable] = None,
with_indices=False,
input_columns: Optional[Union[str, List[str]]] = None,
batched: bool = False,
batch_size: Optional[int] = 1000,
fn_kwargs: Optional[dict] = None,
) -> "IterableDataset":
"""... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.filter.txt | def filter(
self,
function: Optional[Callable] = None,
with_indices=False,
input_columns: Optional[Union[str, List[str]]] = None,
batched: bool = False,
batch_size: Optional[int] = 1000,
fn_kwargs: Optional[dict] = None,
) -> "IterableDataset":
"""... | IterableDataset.filter | file-level | external |
datasets | 20 | src/datasets/iterable_dataset.py | def map(
self,
function: Optional[Callable] = None,
with_indices: bool = False,
input_columns: Optional[Union[str, List[str]]] = None,
batched: bool = False,
batch_size: Optional[int] = 1000,
drop_last_batch: bool = False,
remove_columns: Optional[Unio... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.map.txt | def map(
self,
function: Optional[Callable] = None,
with_indices: bool = False,
input_columns: Optional[Union[str, List[str]]] = None,
batched: bool = False,
batch_size: Optional[int] = 1000,
drop_last_batch: bool = False,
remove_columns: Optional[Unio... | IterableDataset.map | repository-level | external |
datasets | 21 | src/datasets/iterable_dataset.py | def remove_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset":
"""
Remove one or several column(s) in the dataset and the features associated to them.
The removal is done on-the-fly on the examples when iterating over the dataset.
Args:
column_names... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.remove_columns.txt | def remove_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset":
"""
Remove one or several column(s) in the dataset and the features associated to them.
The removal is done on-the-fly on the examples when iterating over the dataset.
Args:
column_names... | IterableDataset.remove_columns | file-level | external |
datasets | 22 | src/datasets/iterable_dataset.py | def rename_column(self, original_column_name: str, new_column_name: str) -> "IterableDataset":
"""
Rename a column in the dataset, and move the features associated to the original column under the new column
name.
Args:
original_column_name (`str`):
Name ... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.rename_column.txt | def rename_column(self, original_column_name: str, new_column_name: str) -> "IterableDataset":
"""
Rename a column in the dataset, and move the features associated to the original column under the new column
name.
Args:
original_column_name (`str`):
Name ... | IterableDataset.rename_column | file-level | non_external |
datasets | 23 | src/datasets/iterable_dataset.py | def select_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset":
"""Select one or several column(s) in the dataset and the features
associated to them. The selection is done on-the-fly on the examples
when iterating over the dataset.
Args:
column_name... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.select_columns.txt | def select_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset":
"""Select one or several column(s) in the dataset and the features
associated to them. The selection is done on-the-fly on the examples
when iterating over the dataset.
Args:
column_name... | IterableDataset.select_columns | repository-level | external |
datasets | 24 | src/datasets/iterable_dataset.py | def shuffle(
self, seed=None, generator: Optional[np.random.Generator] = None, buffer_size: int = 1000
) -> "IterableDataset":
"""
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this bu... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.shuffle.txt | def shuffle(
self, seed=None, generator: Optional[np.random.Generator] = None, buffer_size: int = 1000
) -> "IterableDataset":
"""
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this bu... | IterableDataset.shuffle | file-level | external |
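The `IterableDataset.shuffle` row describes buffer-based shuffling: fill a buffer with `buffer_size` elements, then repeatedly emit a randomly chosen buffered element and replace it with the next incoming one. A self-contained sketch of that algorithm (the element-replacement detail is an assumption consistent with the docstring, not a copy of the library's implementation):

```python
import random
from typing import Iterable, Iterator, Optional

def buffered_shuffle(iterable: Iterable, buffer_size: int = 1000,
                     seed: Optional[int] = None) -> Iterator:
    # Fill a buffer of `buffer_size` elements, then for each new element
    # yield a random buffer slot and put the new element in its place.
    # Remaining buffered elements are shuffled and flushed at the end.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            i = rng.randrange(buffer_size)
            buffer[i], item = item, buffer[i]
            yield item
    rng.shuffle(buffer)
    yield from buffer
```

Note the trade-off the docstring implies: the shuffle is only as random as the buffer is large relative to the stream.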
datasets | 25 | src/datasets/iterable_dataset.py | def skip(self, n: int) -> "IterableDataset":
"""
Create a new [`IterableDataset`] that skips the first `n` elements.
Args:
n (`int`):
Number of elements to skip.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.skip.txt | def skip(self, n: int) -> "IterableDataset":
"""
Create a new [`IterableDataset`] that skips the first `n` elements.
Args:
n (`int`):
Number of elements to skip.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_... | IterableDataset.skip | file-level | external |
datasets | 26 | src/datasets/iterable_dataset.py | def take(self, n: int) -> "IterableDataset":
"""
Create a new [`IterableDataset`] with only the first `n` elements.
Args:
n (`int`):
Number of elements to take.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_d... | /usr/src/app/target_test_cases/failed_tests_IterableDataset.take.txt | def take(self, n: int) -> "IterableDataset":
"""
Create a new [`IterableDataset`] with only the first `n` elements.
Args:
n (`int`):
Number of elements to take.
Example:
```py
>>> from datasets import load_dataset
>>> ds = load_d... | IterableDataset.take | file-level | external |
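The `skip` and `take` rows above describe the two complementary stream operations: drop the first `n` elements, or keep only the first `n`. On a plain iterable, both reduce to `itertools.islice`, which is a reasonable mental model for the lazy behavior (the real methods return new `IterableDataset` objects, which this sketch does not attempt):

```python
from itertools import islice
from typing import Iterable, Iterator

def take(iterable: Iterable, n: int) -> Iterator:
    # First n elements, as documented for IterableDataset.take.
    return islice(iterable, n)

def skip(iterable: Iterable, n: int) -> Iterator:
    # Everything after the first n elements, as for IterableDataset.skip.
    return islice(iterable, n, None)
```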
datasets | 27 | src/datasets/download/streaming_download_manager.py | def download(self, url_or_urls):
"""Normalize URL(s) of files to stream data from.
This is the lazy version of `DownloadManager.download` for streaming.
Args:
url_or_urls (`str` or `list` or `dict`):
URL(s) of files to stream data from. Each url is a `str`.
... | /usr/src/app/target_test_cases/failed_tests_StreamingDownloadManager.download.txt | def download(self, url_or_urls):
"""Normalize URL(s) of files to stream data from.
This is the lazy version of `DownloadManager.download` for streaming.
Args:
url_or_urls (`str` or `list` or `dict`):
URL(s) of files to stream data from. Each url is a `str`.
... | StreamingDownloadManager.download | repository-level | non_external |
datasets | 28 | src/datasets/download/streaming_download_manager.py | def download_and_extract(self, url_or_urls):
"""Prepare given `url_or_urls` for streaming (add extraction protocol).
This is the lazy version of `DownloadManager.download_and_extract` for streaming.
Is equivalent to:
```
urls = dl_manager.extract(dl_manager.download(url_or... | /usr/src/app/target_test_cases/failed_tests_StreamingDownloadManager.download_and_extract.txt | def download_and_extract(self, url_or_urls):
"""Prepare given `url_or_urls` for streaming (add extraction protocol).
This is the lazy version of `DownloadManager.download_and_extract` for streaming.
Is equivalent to:
```
urls = dl_manager.extract(dl_manager.download(url_or... | StreamingDownloadManager.download_and_extract | file-level | non_external |
datasets | 29 | src/datasets/download/streaming_download_manager.py | def extract(self, url_or_urls):
"""Add extraction protocol for given url(s) for streaming.
This is the lazy version of `DownloadManager.extract` for streaming.
Args:
url_or_urls (`str` or `list` or `dict`):
URL(s) of files to stream data from. Each url is a `str... | /usr/src/app/target_test_cases/failed_tests_StreamingDownloadManager.extract.txt | def extract(self, url_or_urls):
"""Add extraction protocol for given url(s) for streaming.
This is the lazy version of `DownloadManager.extract` for streaming.
Args:
url_or_urls (`str` or `list` or `dict`):
URL(s) of files to stream data from. Each url is a `str... | StreamingDownloadManager.extract | repository-level | non_external |
datasets | 30 | src/datasets/download/streaming_download_manager.py | def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
"""Iterate over files within an archive.
Args:
urlpath_or_buf (`str` or `io.BufferedReader`):
Archive path or archive binary file object.
Yields:
`tuple[str, io... | /usr/src/app/target_test_cases/failed_tests_StreamingDownloadManager.iter_archive.txt | def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]:
"""Iterate over files within an archive.
Args:
urlpath_or_buf (`str` or `io.BufferedReader`):
Archive path or archive binary file object.
Yields:
`tuple[str, io... | StreamingDownloadManager.iter_archive | repository-level | external |
datasets | 31 | src/datasets/download/streaming_download_manager.py | def iter_files(self, urlpaths: Union[str, List[str]]) -> Iterable[str]:
"""Iterate over files.
Args:
urlpaths (`str` or `list` of `str`):
Root paths.
Yields:
str: File URL path.
Example:
```py
>>> files = dl_manager.download... | /usr/src/app/target_test_cases/failed_tests_StreamingDownloadManager.iter_files.txt | def iter_files(self, urlpaths: Union[str, List[str]]) -> Iterable[str]:
"""Iterate over files.
Args:
urlpaths (`str` or `list` of `str`):
Root paths.
Yields:
str: File URL path.
Example:
```py
>>> files = dl_manager.download... | StreamingDownloadManager.iter_files | repository-level | external |
datasets | 32 | src/datasets/table.py | def equals(self, *args, **kwargs):
"""
Check if contents of two tables are equal.
Args:
other ([`~datasets.table.Table`]):
Table to compare against.
check_metadata (`bool`, defaults to `False`):
Whether schema metadata equality should b... | /usr/src/app/target_test_cases/failed_tests_Table.equals.txt | def equals(self, *args, **kwargs):
"""
Check if contents of two tables are equal.
Args:
other ([`~datasets.table.Table`]):
Table to compare against.
check_metadata (`bool`, defaults to `False`):
Whether schema metadata equality should b... | Table.equals | file-level | non_external |
datasets | 33 | src/datasets/table.py | def validate(self, *args, **kwargs):
"""
Perform validation checks. An exception is raised if validation fails.
By default only cheap validation checks are run. Pass `full=True`
for thorough validation checks (potentially `O(n)`).
Args:
full (`bool`, defaults ... | /usr/src/app/target_test_cases/failed_tests_Table.validate.txt | def validate(self, *args, **kwargs):
"""
Perform validation checks. An exception is raised if validation fails.
By default only cheap validation checks are run. Pass `full=True`
for thorough validation checks (potentially `O(n)`).
Args:
full (`bool`, defaults ... | Table.validate | file-level | non_external |
datasets | 34 | src/datasets/utils/sharding.py | def _distribute_shards(num_shards: int, max_num_jobs: int) -> List[range]:
"""
Get the range of shard indices per job.
If num_shards<max_num_jobs, then num_shards jobs are given a range of one shard.
The shards indices order is preserved: e.g. all the first shards are given the first job.
Moreover a... | /usr/src/app/target_test_cases/failed_tests__distribute_shards.txt | def _distribute_shards(num_shards: int, max_num_jobs: int) -> List[range]:
"""
Get the range of shard indices per job.
If num_shards<max_num_jobs, then num_shards jobs are given a range of one shard.
The shards indices order is preserved: e.g. all the first shards are given the first job.
Moreover a... | _distribute_shards | file-level | external |
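The `_distribute_shards` row documents the constraints: at most `max_num_jobs` jobs, shard order preserved, contiguous ranges, and one shard per job when `num_shards < max_num_jobs`. A sketch satisfying those constraints with sizes as even as possible (the evenness policy is an assumption filling in the truncated docstring):

```python
from typing import List

def distribute_shards(num_shards: int, max_num_jobs: int) -> List[range]:
    # Contiguous, order-preserving shard ranges: min(num_shards, max_num_jobs)
    # jobs, with the first `num_shards % num_jobs` jobs taking one extra shard.
    num_jobs = min(num_shards, max_num_jobs)
    base, extra = divmod(num_shards, num_jobs)
    ranges, start = [], 0
    for job in range(num_jobs):
        size = base + (1 if job < extra else 0)
        ranges.append(range(start, start + size))
        start += size
    return ranges
```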
datasets | 35 | src/datasets/table.py | def _interpolation_search(arr: List[int], x: int) -> int:
"""
Return the position i of a sorted array so that arr[i] <= x < arr[i+1]
Args:
arr (`List[int]`): non-empty sorted list of integers
x (`int`): query
Returns:
`int`: the position i so that arr[i] <= x < arr[i+1]
Ra... | /usr/src/app/target_test_cases/failed_tests__interpolation_search.txt | def _interpolation_search(arr: List[int], x: int) -> int:
"""
Return the position i of a sorted array so that arr[i] <= x < arr[i+1]
Args:
arr (`List[int]`): non-empty sorted list of integers
x (`int`): query
Returns:
`int`: the position i so that arr[i] <= x < arr[i+1]
Ra... | _interpolation_search | self-contained | external |
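The `_interpolation_search` row specifies the contract exactly: given a non-empty sorted integer list `arr` and a query `x`, return the position `i` such that `arr[i] <= x < arr[i+1]`. A sketch of interpolation search meeting that contract, probing by linear interpolation rather than bisection (an out-of-range query raises `IndexError`, which is an assumption about the truncated "Raises" section):

```python
from typing import List

def interpolation_search(arr: List[int], x: int) -> int:
    # Maintain i, j with arr[i] <= x < arr[j]; probe position k by linearly
    # interpolating x between arr[i] and arr[j], then narrow the bracket.
    i, j = 0, len(arr) - 1
    while i < j and arr[i] <= x < arr[j]:
        k = i + (j - i) * (x - arr[i]) // (arr[j] - arr[i])
        if arr[k] <= x < arr[k + 1]:
            return k
        elif arr[k] <= x:
            i = k + 1
        else:
            j = k
    raise IndexError(f"x={x} is out of range of arr")
```

On data with roughly uniform gaps this probes in O(log log n) expected steps, which is presumably why it is used for row-index lookups over cumulative shard lengths.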
datasets | 36 | src/datasets/data_files.py | def _is_inside_unrequested_special_dir(matched_rel_path: str, pattern: str) -> bool:
"""
When a path matches a pattern, we additionally check if it's inside a special directory
we ignore by default (if it starts with a double underscore).
Users can still explicitly request a filepath inside such a dir... | /usr/src/app/target_test_cases/failed_tests__is_inside_unrequested_special_dir.txt | def _is_inside_unrequested_special_dir(matched_rel_path: str, pattern: str) -> bool:
"""
When a path matches a pattern, we additionally check if it's inside a special directory
we ignore by default (if it starts with a double underscore).
Users can still explicitly request a filepath inside such a dir... | _is_inside_unrequested_special_dir | self-contained | external |
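The `_is_inside_unrequested_special_dir` row describes the rule: a matched file counts as "inside an unrequested special directory" when its parent path contains a double-underscore directory that the pattern did not explicitly name. A hedged sketch of one way to express that rule, comparing counts of `__`-prefixed path components (the exact comparison in the library may differ; this is an illustration, not the real implementation):

```python
from typing import List

def is_inside_unrequested_special_dir(matched_rel_path: str, pattern: str) -> bool:
    # Count double-underscore directories in the matched path's parent and
    # in the pattern's parent; the match is "unrequested" when the path has
    # more of them than the pattern explicitly asked for.
    def special_dirs(path: str) -> List[str]:
        return [part for part in path.split("/")[:-1] if part.startswith("__")]

    return len(special_dirs(matched_rel_path)) > len(special_dirs(pattern))
```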
datasets | 37 | src/datasets/utils/file_utils.py | def cached_path(
url_or_filename,
download_config=None,
**download_kwargs,
) -> str:
"""
Given something that might be a URL (or might be a local path),
determine which. If it's a URL, download the file and cache it, and
return the path to the cached file. If it's already a local path,
m... | /usr/src/app/target_test_cases/failed_tests_cached_path.txt | def cached_path(
url_or_filename,
download_config=None,
**download_kwargs,
) -> str:
"""
Given something that might be a URL (or might be a local path),
determine which. If it's a URL, download the file and cache it, and
return the path to the cached file. If it's already a local path,
m... | cached_path | repository-level | external |
datasets | 38 | src/datasets/features/features.py | def cast_to_python_objects(obj: Any, only_1d_for_numpy=False, optimize_list_casting=True) -> Any:
"""
Cast numpy/pytorch/tensorflow/pandas objects to python lists.
It works recursively.
If `optimize_list_casting` is `True`, then to avoid iterating over possibly long lists, it first checks (recursively) if the... | /usr/src/app/target_test_cases/failed_tests_cast_to_python_objects.txt | def cast_to_python_objects(obj: Any, only_1d_for_numpy=False, optimize_list_casting=True) -> Any:
"""
Cast numpy/pytorch/tensorflow/pandas objects to python lists.
It works recursively.
If `optimize_list_casting` is `True`, then to avoid iterating over possibly long lists, it first checks (recursively) if the... | cast_to_python_objects | file-level | external |
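The `cast_to_python_objects` row documents a recursive cast of numpy/pytorch/tensorflow/pandas objects to Python lists. A dependency-free sketch of the recursion: anything exposing a `tolist()` method (the common array convention) becomes a list, containers are walked recursively, and plain values pass through. This is a simplification of the real function, which also handles the `only_1d_for_numpy` and `optimize_list_casting` flags shown in the signature:

```python
from typing import Any

def cast_to_python_objects(obj: Any) -> Any:
    # Objects with a `tolist()` method (numpy/torch-style arrays) become
    # plain lists; dicts, lists, and tuples are walked recursively; every
    # other value is returned unchanged.
    if hasattr(obj, "tolist"):
        return obj.tolist()
    if isinstance(obj, dict):
        return {k: cast_to_python_objects(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [cast_to_python_objects(v) for v in obj]
    return obj
```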
datasets | 39 | src/datasets/table.py | def concat_tables(tables: List[Table], axis: int = 0) -> Table:
"""
Concatenate tables.
Args:
tables (list of `Table`):
List of tables to be concatenated.
axis (`{0, 1}`, defaults to `0`, meaning over rows):
Axis to concatenate over, where `0` means over rows (vertic... | /usr/src/app/target_test_cases/failed_tests_concat_tables.txt | def concat_tables(tables: List[Table], axis: int = 0) -> Table:
"""
Concatenate tables.
Args:
tables (list of `Table`):
List of tables to be concatenated.
axis (`{0, 1}`, defaults to `0`, meaning over rows):
Axis to concatenate over, where `0` means over rows (vertic... | concat_tables | file-level | external |
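The `concat_tables` row documents the two concatenation axes: `axis=0` stacks rows (vertical), `axis=1` appends columns (horizontal). A sketch of those semantics on plain dict-of-list "tables" (the real function operates on Arrow-backed `Table` objects and validates schemas; the dict representation here is an assumption for illustration):

```python
from typing import Dict, List

def concat_columnar(tables: List[Dict[str, list]], axis: int = 0) -> Dict[str, list]:
    # axis=0: extend each column with the matching column of every table
    # (schemas assumed identical).  axis=1: merge the column sets side by
    # side (row counts assumed identical).
    if axis == 0:
        out = {name: [] for name in tables[0]}
        for t in tables:
            for name, col in t.items():
                out[name].extend(col)
        return out
    out = {}
    for t in tables:
        out.update(t)
    return out
```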
datasets | 40 | src/datasets/hub.py | def convert_to_parquet(
repo_id: str,
revision: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
trust_remote_code: Optional[bool] = None,
) -> CommitInfo:
"""Convert Hub [script-based dataset](dataset_script) to Parquet [data-only dataset](repository_structure), so that
the datas... | /usr/src/app/target_test_cases/failed_tests_convert_to_parquet.txt | def convert_to_parquet(
repo_id: str,
revision: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
trust_remote_code: Optional[bool] = None,
) -> CommitInfo:
"""Convert Hub [script-based dataset](dataset_script) to Parquet [data-only dataset](repository_structure), so that
the datas... | convert_to_parquet | file-level | external |
datasets | 41 | src/datasets/hub.py | def delete_from_hub(
repo_id: str,
config_name: str,
revision: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
) -> CommitInfo:
"""Delete a dataset configuration from a [data-only dataset](repository_structure) on the Hub.
Args:
repo_id (`str`): ID of the Hub dataset rep... | /usr/src/app/target_test_cases/failed_tests_delete_from_hub.txt | def delete_from_hub(
repo_id: str,
config_name: str,
revision: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
) -> CommitInfo:
"""Delete a dataset configuration from a [data-only dataset](repository_structure) on the Hub.
Args:
repo_id (`str`): ID of the Hub dataset rep... | delete_from_hub | file-level | external |
datasets | 42 | src/datasets/inspect.py | def get_dataset_config_info(
path: str,
config_name: Optional[str] = None,
data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
revision: Option... | /usr/src/app/target_test_cases/failed_tests_get_dataset_config_info.txt | def get_dataset_config_info(
path: str,
config_name: Optional[str] = None,
data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
revision: Option... | get_dataset_config_info | repository-level | external |
datasets | 43 | src/datasets/inspect.py | def get_dataset_config_names(
path: str,
revision: Optional[Union[str, Version]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
dynamic_modules_path: Optional[str] = None,
data_files: Optional[Union[Dict, List, str]] = None,
... | /usr/src/app/target_test_cases/failed_tests_get_dataset_config_names.txt | def get_dataset_config_names(
path: str,
revision: Optional[Union[str, Version]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
dynamic_modules_path: Optional[str] = None,
data_files: Optional[Union[Dict, List, str]] = None,
... | get_dataset_config_names | repository-level | external |
datasets | 44 | src/datasets/inspect.py | def get_dataset_default_config_name(
path: str,
revision: Optional[Union[str, Version]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
dynamic_modules_path: Optional[str] = None,
data_files: Optional[Union[Dict, List, str]] = N... | /usr/src/app/target_test_cases/failed_tests_get_dataset_default_config_name.txt | def get_dataset_default_config_name(
path: str,
revision: Optional[Union[str, Version]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
dynamic_modules_path: Optional[str] = None,
data_files: Optional[Union[Dict, List, str]] = N... | get_dataset_default_config_name | repository-level | external |
datasets | 45 | src/datasets/inspect.py | def get_dataset_infos(
path: str,
data_files: Optional[Union[Dict, List, str]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
revision: Optional[Union[str, Version]] = None,
token: Optional[Union[bool, str]] = None,
**confi... | /usr/src/app/target_test_cases/failed_tests_get_dataset_infos.txt | def get_dataset_infos(
path: str,
data_files: Optional[Union[Dict, List, str]] = None,
download_config: Optional[DownloadConfig] = None,
download_mode: Optional[Union[DownloadMode, str]] = None,
revision: Optional[Union[str, Version]] = None,
token: Optional[Union[bool, str]] = None,
**confi... | get_dataset_infos | repository-level | external |
datasets | 46 | src/datasets/inspect.py | get_dataset_split_names | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_get_dataset_split_names.txt
def get_dataset_split_names(
    path: str,
    config_name: Optional[str] = None,
    data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
    download_config: Optional[DownloadConfig] = None,
    download_mode: Optional[Union[DownloadMode, str]] = None,
    revision: Option...

datasets | 47 | src/datasets/utils/file_utils.py | get_from_cache | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_get_from_cache.txt
def get_from_cache(
    url,
    cache_dir=None,
    force_download=False,
    user_agent=None,
    use_etag=True,
    token=None,
    storage_options=None,
    download_desc=None,
    disable_tqdm=False,
) -> str:
    """
    Given a URL, look for the corresponding file in the local cache.
    If it's not there, downl...

datasets | 48 | src/datasets/io/parquet.py | get_writer_batch_size | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_get_writer_batch_size.txt
def get_writer_batch_size(features: Features) -> Optional[int]:
    """
    Get the writer_batch_size that defines the maximum row group size in the parquet files.
    The default in `datasets` is 1,000 but we lower it to 100 for image datasets.
    This allows to optimize random access to parquet file, since accessing...

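The `get_writer_batch_size` entry above explains why `datasets` caps the parquet row-group size lower for media-heavy columns. A minimal pure-Python sketch of that kind of heuristic follows; the feature-type names and thresholds here are illustrative assumptions, not the library's actual constants:

```python
# Sketch: choose a parquet writer batch size (max row group size) from the
# feature types of a dataset. Smaller row groups make random access cheaper,
# because reading one example only decompresses the group it lives in.
# The type names and thresholds below are illustrative assumptions.

def get_writer_batch_size(features: dict) -> int:
    """Return a smaller row-group size when heavy media columns are present."""
    default = 1000  # the documented default in `datasets`
    heavy_types = {"Image": 100, "Audio": 100, "Video": 10}
    sizes = [default]
    for feature_type in features.values():
        sizes.append(heavy_types.get(feature_type, default))
    return min(sizes)  # the heaviest column wins
```

The `min` over all columns means a single large-blob column is enough to shrink every row group in the file.
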
datasets | 49 | src/datasets/combine.py | interleave_datasets | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_interleave_datasets.txt
def interleave_datasets(
    datasets: List[DatasetType],
    probabilities: Optional[List[float]] = None,
    seed: Optional[int] = None,
    info: Optional[DatasetInfo] = None,
    split: Optional[NamedSplit] = None,
    stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted",
) -> DatasetT...

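`interleave_datasets` alternates examples from several datasets. A pure-Python sketch of the `probabilities=None` round-robin case and the two stopping strategies, over plain lists instead of dataset objects (the real function also supports probability-weighted sampling, and its `all_exhausted` bookkeeping may differ in detail):

```python
def interleave(lists, stopping_strategy="first_exhausted"):
    """Round-robin interleave of several example lists (probabilities=None).

    - "first_exhausted": stop as soon as the shortest input runs out.
    - "all_exhausted": keep going, wrapping exhausted inputs around, so
      shorter inputs repeat until the longest one has been seen in full.
    """
    if stopping_strategy not in ("first_exhausted", "all_exhausted"):
        raise ValueError(stopping_strategy)
    if stopping_strategy == "first_exhausted":
        n = min(len(lst) for lst in lists)
        return [lst[i] for i in range(n) for lst in lists]
    n = max(len(lst) for lst in lists)
    return [lst[i % len(lst)] for i in range(n) for lst in lists]
```
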
datasets | 50 | src/datasets/load.py | load_dataset | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_load_dataset.txt
def load_dataset(
    path: str,
    name: Optional[str] = None,
    data_dir: Optional[str] = None,
    data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
    split: Optional[Union[str, Split]] = None,
    cache_dir: Optional[str] = None,
    features: Optional[Features] =...

datasets | 51 | src/datasets/load.py | load_dataset_builder | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_load_dataset_builder.txt
def load_dataset_builder(
    path: str,
    name: Optional[str] = None,
    data_dir: Optional[str] = None,
    data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
    cache_dir: Optional[str] = None,
    features: Optional[Features] = None,
    download_config: Optional[Do...

datasets | 52 | src/datasets/load.py | load_from_disk | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_load_from_disk.txt
def load_from_disk(
    dataset_path: PathLike, keep_in_memory: Optional[bool] = None, storage_options: Optional[dict] = None
) -> Union[Dataset, DatasetDict]:
    """
    Loads a dataset that was previously saved using [`~Dataset.save_to_disk`] from a dataset directory, or
    from a filesystem using any implementatio...

datasets | 53 | src/datasets/arrow_reader.py | make_file_instructions | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_make_file_instructions.txt
def make_file_instructions(
    name: str,
    split_infos: List["SplitInfo"],
    instruction: Union[str, "ReadInstruction"],
    filetype_suffix: Optional[str] = None,
    prefix_path: Optional[str] = None,
) -> FileInstructions:
    """Returns instructions of the split dict.
    Args:
        name (`str`): Name of ...

datasets | 54 | src/datasets/utils/py_utils.py | map_nested | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_map_nested.txt
def map_nested(
    function: Callable[[Any], Any],
    data_struct: Any,
    dict_only: bool = False,
    map_list: bool = True,
    map_tuple: bool = False,
    map_numpy: bool = False,
    num_proc: Optional[int] = None,
    parallel_min_length: int = 2,
    batched: bool = False,
    batch_size: Optional[int] = 100...

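`map_nested` applies a function to every leaf of a nested structure while preserving its shape. A simplified, sequential sketch of that recursion (no multiprocessing, batching, or numpy handling, which the real signature above also covers):

```python
def map_nested(function, data_struct, map_list=True, map_tuple=False):
    """Apply `function` to the leaves of a nested dict/list/tuple structure,
    preserving the container shape. Containers whose flag is False are
    treated as leaves and passed to `function` whole."""
    if isinstance(data_struct, dict):
        return {k: map_nested(function, v, map_list, map_tuple)
                for k, v in data_struct.items()}
    if map_list and isinstance(data_struct, list):
        return [map_nested(function, v, map_list, map_tuple) for v in data_struct]
    if map_tuple and isinstance(data_struct, tuple):
        return tuple(map_nested(function, v, map_list, map_tuple) for v in data_struct)
    return function(data_struct)  # leaf value
```
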
datasets | 55 | src/datasets/formatting/formatting.py | query_table | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_query_table.txt
def query_table(
    table: Table,
    key: Union[int, slice, range, str, Iterable],
    indices: Optional[Table] = None,
) -> pa.Table:
    """
    Query a Table to extract the subtable that correspond to the given key.
    Args:
        table (``datasets.table.Table``): The input Table to query from
        key (``U...

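`query_table` dispatches on the key type (`int`/`slice`/`range`/`str`/iterable). A sketch of that dispatch over a plain dict-of-columns stand-in for a pyarrow Table (the real function returns `pa.Table` subtables and also handles an indices mapping):

```python
def query_table(table, key):
    """Extract the subtable of {column: [values]} selected by `key`:
    int -> one row, str -> one column, slice/range/iterable -> several rows."""
    n = len(next(iter(table.values())))  # number of rows
    if isinstance(key, str):
        if key not in table:
            raise KeyError(key)
        return {key: table[key]}
    if isinstance(key, int):
        if not -n <= key < n:
            raise IndexError(key)
        return {col: [vals[key]] for col, vals in table.items()}
    if isinstance(key, slice):
        idx = list(range(n))[key]
        return {col: [vals[i] for i in idx] for col, vals in table.items()}
    # range or any other iterable of row indices
    idx = list(key)
    return {col: [vals[i] for i in idx] for col, vals in table.items()}
```
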
datasets | 56 | src/datasets/distributed.py | split_dataset_by_node | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_split_dataset_by_node.txt
def split_dataset_by_node(dataset: DatasetType, rank: int, world_size: int) -> DatasetType:
    """
    Split a dataset for the node at rank `rank` in a pool of nodes of size `world_size`.
    For map-style datasets:
    Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.
    T...

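For map-style datasets, each rank receives a contiguous chunk, as the docstring above says. A sketch of the chunk arithmetic; distributing the remainder to the lowest ranks is an assumption of this sketch, the library may balance it differently:

```python
def node_slice(num_rows, rank, world_size):
    """Return the contiguous [start, end) row range assigned to `rank`.

    divmod splits the rows evenly; the first `extra` ranks get one extra
    row each, so every row is assigned to exactly one rank.
    """
    base, extra = divmod(num_rows, world_size)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end
```
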
datasets | 57 | src/datasets/table.py | table_cast | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_table_cast.txt
def table_cast(table: pa.Table, schema: pa.Schema):
    """Improved version of `pa.Table.cast`.
    It supports casting to feature types stored in the schema metadata.
    Args:
        table (`pyarrow.Table`):
            PyArrow table to cast.
        schema (`pyarrow.Schema`):
            Target PyArrow schema.
    ...

datasets | 58 | src/datasets/utils/file_utils.py | xjoin | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_xjoin.txt
def xjoin(a, *p):
    """
    This function extends os.path.join to support the "::" hop separator. It supports both paths and urls.
    A shorthand, particularly useful where you have multiple hops, is to “chain” the URLs with the special separator "::".
    This is used to access files inside a zip file over http fo...

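`xjoin` joins extra path components onto the innermost hop of a "::"-chained URL, e.g. a path inside a zip archive served over HTTP. A sketch of that behavior, assuming POSIX-style joins throughout (the real implementation distinguishes local paths from URLs):

```python
import posixpath

def xjoin(a, *p):
    """Join `p` onto the first ("innermost") hop of a "::"-chained URL,
    e.g. "zip://dir::https://host/archive.zip" + "file.txt"
    -> "zip://dir/file.txt::https://host/archive.zip".
    Plain paths fall back to a POSIX-style join (an assumption of this sketch).
    """
    if "::" not in a:
        return posixpath.join(a, *p)
    first_hop, rest = a.split("::", 1)
    return "::".join([posixpath.join(first_hop, *p), rest])
```
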
pylint | 0 | pylint/lint/pylinter.py | PyLinter.load_plugin_configuration | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_PyLinter.load_plugin_configuration.txt
def load_plugin_configuration(self) -> None:
    """Call the configuration hook for plugins.
    This walks through the list of plugins, grabs the "load_configuration"
    hook, if exposed, and calls it to allow plugins to configure specific
    settings.
    The result of attempting to load t...

pylint | 1 | pylint/extensions/for_any_all.py | _assigned_reassigned_returned | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__assigned_reassigned_returned.txt
def _assigned_reassigned_returned(
    node: nodes.For, if_children: list[nodes.NodeNG], node_after_loop: nodes.NodeNG
) -> bool:
    """Detect boolean-assign, for-loop, re-assign, return pattern:
    Ex:
        def check_lines(lines, max_chars):
            long_line = False
            ...

pylint | 2 | pylint/extensions/code_style.py | _check_consider_using_assignment_expr | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__check_consider_using_assignment_expr.txt
def _check_consider_using_assignment_expr(self, node: nodes.If) -> None:
    """Check if an assignment expression (walrus operator) can be used.
    For example if an assignment is directly followed by an if statement:
    >>> x = 2
    >>> if x:
    >>> ...
    Can be replaced by:
    ...

pylint | 3 | pylint/lint/pylinter.py | _check_file | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__check_file.txt
def _check_file(
    self,
    get_ast: GetAstProtocol,
    check_astroid_module: Callable[[nodes.Module], bool | None],
    file: FileItem,
) -> None:
    """Check a file using the passed utility functions (get_ast and
    check_astroid_module).
    :param callable get_ast: callabl...

pylint | 4 | pylint/checkers/variables.py | _check_loop_finishes_via_except | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__check_loop_finishes_via_except.txt
def _check_loop_finishes_via_except(
    node: nodes.NodeNG,
    other_node_try_except: nodes.Try,
) -> bool:
    """Check for a specific control flow scenario.
    Described in https://github.com/pylint-dev/pylint/issues/5683.
    A scenario where the only non-break exit from a loop consi...

pylint | 5 | pylint/checkers/classes/class_checker.py | _check_protected_attribute_access | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__check_protected_attribute_access.txt
def _check_protected_attribute_access(
    self, node: nodes.Attribute | nodes.AssignAttr
) -> None:
    """Given an attribute access node (set or get), check if attribute
    access is legitimate.
    Call _check_first_attr with node before calling
    this method. Valid cases are:
    ...

pylint | 6 | pylint/checkers/variables.py | _detect_global_scope | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__detect_global_scope.txt
def _detect_global_scope(
    node: nodes.Name,
    frame: nodes.LocalsDictNodeNG,
    defframe: nodes.LocalsDictNodeNG,
) -> bool:
    """Detect that the given frames share a global scope.
    Two frames share a global scope when neither
    of them are hidden under a function scope, as well
    as any parent scope o...

pylint | 7 | pylint/checkers/unicode.py | _determine_codec | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__determine_codec.txt
def _determine_codec(stream: io.BytesIO) -> tuple[str, int]:
    """Determine the codec from the given stream.
    first tries https://www.python.org/dev/peps/pep-0263/
    and if this fails also checks for BOMs of UTF-16 and UTF-32
    to be future-proof.
    Args:
        stream: The byt...

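The BOM fallback described in `_determine_codec` has one subtlety worth showing: UTF-32-LE's byte-order mark begins with UTF-16-LE's, so the 4-byte BOMs must be checked first. A standalone sketch of just the BOM check (not pylint's actual implementation, which first applies the PEP 263 coding declaration):

```python
import codecs

# Order matters: BOM_UTF32_LE (ff fe 00 00) starts with BOM_UTF16_LE (ff fe),
# so the longer 4-byte BOMs are tested before the 2-byte ones.
_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def codec_from_bom(data: bytes, default: str = "utf-8"):
    """Return (codec, bom_length) guessed from a leading byte-order mark."""
    for bom, codec in _BOMS:
        if data.startswith(bom):
            return codec, len(bom)
    return default, 0
```
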
pylint | 8 | pylint/checkers/typecheck.py | _emit_no_member | repository-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__emit_no_member.txt
def _emit_no_member(
    node: nodes.Attribute | nodes.AssignAttr | nodes.DelAttr,
    owner: InferenceResult,
    owner_name: str | None,
    mixin_class_rgx: Pattern[str],
    ignored_mixins: bool = True,
    ignored_none: bool = True,
) -> bool:
    """Try to see if no-member should be emitted for the given owner.
    ...

pylint | 9 | pylint/checkers/symilar.py | _find_common | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__find_common.txt
def _find_common(
    self, lineset1: LineSet, lineset2: LineSet
) -> Generator[Commonality]:
    """Find similarities in the two given linesets.
    This the core of the algorithm. The idea is to compute the hashes of a
    minimal number of successive lines of each lineset and then compare th...

pylint | 10 | pylint/checkers/strings.py | _get_quote_delimiter | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__get_quote_delimiter.txt
def _get_quote_delimiter(string_token: str) -> str:
    """Returns the quote character used to delimit this token string.
    This function checks whether the token is a well-formed string.
    Args:
        string_token: The token to be parsed.
    Returns:
        A string containing solely the first quote delimit...

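Extracting the quote delimiter from a string token can be sketched as: skip an optional prefix, then match a quote, with triple quotes tried before single ones so `'''`/`"""` are not read as `'`/`"`. The prefix character class below is a simplification of Python's real string-prefix grammar:

```python
import re

# Optional 0-2 char prefix (r, b, f, u in any case), then the delimiter.
# Triple quotes come first in the alternation so they win over single quotes.
_QUOTE_RE = re.compile(r"^[rbfuRBFU]{0,2}('''|\"\"\"|'|\")")

def get_quote_delimiter(string_token: str) -> str:
    """Return the opening quote delimiter of a well-formed string token."""
    match = _QUOTE_RE.match(string_token)
    if not match:
        raise ValueError(f"string token {string_token!r} is not a well-formed string")
    return match.group(1)
```
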
pylint | 11 | pylint/checkers/variables.py | _ignore_class_scope | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__ignore_class_scope.txt
def _ignore_class_scope(self, node: nodes.NodeNG) -> bool:
    """Return True if the node is in a local class scope, as an assignment.
    Detect if we are in a local class scope, as an assignment.
    For example, the following is fair game.
    class A:
        b = 1
        c = lambda b=b...

pylint | 12 | pylint/checkers/typecheck.py | _infer_from_metaclass_constructor | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__infer_from_metaclass_constructor.txt
def _infer_from_metaclass_constructor(
    cls: nodes.ClassDef, func: nodes.FunctionDef
) -> InferenceResult | None:
    """Try to infer what the given *func* constructor is building.
    :param astroid.FunctionDef func:
        A metaclass constructor. Metaclass definitions can be
        functions, which should acce...

pylint | 13 | pylint/checkers/strings.py | _is_long_string | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__is_long_string.txt
def _is_long_string(string_token: str) -> bool:
    """Is this string token a "longstring" (is it triple-quoted)?
    Long strings are triple-quoted as defined in
    https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals
    This function only checks characters up through the open quotes...

pylint | 14 | pylint/checkers/base/function_checker.py | _node_fails_contextmanager_cleanup | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests__node_fails_contextmanager_cleanup.txt
def _node_fails_contextmanager_cleanup(
    node: nodes.FunctionDef, yield_nodes: list[nodes.Yield]
) -> bool:
    """Check if a node fails contextmanager cleanup.
    Current checks for a contextmanager:
    - only if the context manager yields a non-constant value
    - only if th...

pylint | 15 | pylint/extensions/docparams.py | check_arguments_in_docstring | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_check_arguments_in_docstring.txt
def check_arguments_in_docstring(
    self,
    doc: Docstring,
    arguments_node: astroid.Arguments,
    warning_node: astroid.NodeNG,
    accept_no_param_doc: bool | None = None,
) -> None:
    """Check that all parameters are consistent with the parameters mentioned
    in the pa...

pylint | 16 | pylint/testutils/utils.py | create_files | self-contained | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_create_files.txt
def create_files(paths: list[str], chroot: str = ".") -> None:
    """Creates directories and files found in <path>.
    :param list paths: list of relative paths to files or directories
    :param str chroot: the root directory in which paths will be created
    >>> from os.path import isdir, isfile
    >>> isdir('/...

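`create_files` can be sketched with plain `os` calls, using the trailing-slash convention to mark directories; this is a self-contained approximation, not pylint's exact implementation:

```python
import os
import tempfile

def create_files(paths, chroot="."):
    """Create the directories and (empty) files listed in `paths` under `chroot`.

    A trailing "/" marks a directory; anything else is a file whose parent
    directories are created as needed.
    """
    for path in paths:
        path = os.path.join(chroot, path)
        if path.endswith("/") or path.endswith(os.sep):
            os.makedirs(path, exist_ok=True)
        else:
            os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
            with open(path, "w", encoding="utf-8"):
                pass  # touch the file
```

Running it inside a temporary directory keeps the sketch side-effect free:

```python
root = tempfile.mkdtemp()
create_files(["a/b/foo.txt", "a/c/"], chroot=root)
```
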
pylint | 17 | pylint/checkers/symilar.py | filter_noncode_lines | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_filter_noncode_lines.txt
def filter_noncode_lines(
    ls_1: LineSet,
    stindex_1: Index,
    ls_2: LineSet,
    stindex_2: Index,
    common_lines_nb: int,
) -> int:
    """Return the effective number of common lines between lineset1
    and lineset2 filtered from non code lines.
    That is to say the number of common successive stripped
    ...

pylint | 18 | pylint/checkers/utils.py | get_argument_from_call | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_get_argument_from_call.txt
def get_argument_from_call(
    call_node: nodes.Call, position: int | None = None, keyword: str | None = None
) -> nodes.Name:
    """Returns the specified argument from a function call.
    :param nodes.Call call_node: Node representing a function call to check.
    :param int position: position of the argument.
    ...

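The position-or-keyword lookup in `get_argument_from_call` can be sketched over plain lists/dicts instead of astroid `Call` nodes (the error class name here mirrors pylint's, but this is an illustration, not its code):

```python
class NoSuchArgumentError(Exception):
    """Raised when neither the position nor the keyword matches an argument."""

def get_argument_from_call(positional_args, keyword_args, position=None, keyword=None):
    """Return the argument at `position` or bound to `keyword`, else raise."""
    if position is None and keyword is None:
        raise ValueError("Must specify at least one of: position or keyword.")
    if position is not None and 0 <= position < len(positional_args):
        return positional_args[position]
    if keyword is not None and keyword in keyword_args:
        return keyword_args[keyword]
    raise NoSuchArgumentError
```
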
pylint | 19 | pylint/checkers/utils.py | get_import_name | self-contained | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_get_import_name.txt
def get_import_name(importnode: ImportNode, modname: str | None) -> str | None:
    """Get a prepared module name from the given import node.
    In the case of relative imports, this will return the
    absolute qualified module name, which might be useful
    for debugging. Otherwise, the initial module name
    is ...

pylint | 20 | pylint/checkers/symilar.py | hash_lineset | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_hash_lineset.txt
def hash_lineset(
    lineset: LineSet, min_common_lines: int = DEFAULT_MIN_SIMILARITY_LINE
) -> tuple[HashToIndex_T, IndexToLines_T]:
    """Return two dicts.
    The first associates the hash of successive stripped lines of a lineset
    to the indices of the starting lines.
    The second dict, associates the index...

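The windowing behind `hash_lineset` can be sketched by keying a dict on tuples of `min_common_lines` successive lines; intersecting two such dicts then yields the candidate common chunks that `_find_common` expands. The real code hashes interned chunk objects rather than raw tuples:

```python
def hash_lineset(stripped_lines, min_common_lines=4):
    """Map each window of `min_common_lines` successive lines to the list of
    indices where that window starts."""
    hash_to_starts = {}
    for start in range(len(stripped_lines) - min_common_lines + 1):
        window = tuple(stripped_lines[start:start + min_common_lines])
        hash_to_starts.setdefault(window, []).append(start)
    return hash_to_starts

def common_windows(lines1, lines2, min_common_lines=4):
    """Start-index pairs (i, j) of windows shared by the two line sequences."""
    h1 = hash_lineset(lines1, min_common_lines)
    h2 = hash_lineset(lines2, min_common_lines)
    return sorted((i, j) for w in h1.keys() & h2.keys()
                  for i in h1[w] for j in h2[w])
```
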
pylint | 21 | pylint/lint/message_state_handler.py | is_message_enabled | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_is_message_enabled.txt
def is_message_enabled(
    self,
    msg_descr: str,
    line: int | None = None,
    confidence: interfaces.Confidence | None = None,
) -> bool:
    """Is this message enabled for the current file ?
    Optionally, is it enabled for this line and confidence level ?
    The curren...

pylint | 22 | pylint/__init__.py | modify_sys_path | self-contained | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_modify_sys_path.txt
def modify_sys_path() -> None:
    """Modify sys path for execution as Python module.
    Strip out the current working directory from sys.path.
    Having the working directory in `sys.path` means that `pylint` might
    inadvertently import user code from modules having the same name as
    stdlib or pylint's own mo...

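The core of `modify_sys_path` is dropping a leading `sys.path` entry that points at the working directory, so user modules cannot shadow stdlib or pylint modules. A pure-function sketch of just that step (the real function mutates `sys.path` in place and also filters `PYTHONPATH` entries):

```python
def strip_cwd_from_path(sys_path, cwd):
    """Return a copy of `sys_path` without a leading entry that refers to the
    current working directory ("" and "." both mean the cwd at import time)."""
    path = list(sys_path)
    if path and path[0] in ("", ".", cwd):
        del path[0]
    return path
```
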
pylint | 23 | pylint/checkers/symilar.py | remove_successive | file-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_remove_successive.txt
def remove_successive(all_couples: CplIndexToCplLines_T) -> None:
    """Removes all successive entries in the dictionary in argument.
    :param all_couples: collection that has to be cleaned up from successive entries.
        The keys are couples of indices that mark the beginning of common entries
        ...

pylint | 24 | pylint/lint/pylinter.py | should_analyze_file | self-contained | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_should_analyze_file.txt
def should_analyze_file(modname: str, path: str, is_argument: bool = False) -> bool:
    """Returns whether a module should be checked.
    This implementation returns True for all python source files (.py and .pyi),
    indicating that all files should be linted.
    Subclasses may override this ...

pylint | 25 | pylint/checkers/symilar.py | stripped_lines | file-level | external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_stripped_lines.txt
def stripped_lines(
    lines: Iterable[str],
    ignore_comments: bool,
    ignore_docstrings: bool,
    ignore_imports: bool,
    ignore_signatures: bool,
    line_enabled_callback: Callable[[str, int], bool] | None = None,
) -> list[LineSpecifs]:
    """Return tuples of line/line number/line type with leading/traili...

sympy | 0 | sympy/physics/continuum_mechanics/beam.py | Beam.apply_load | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_Beam.apply_load.txt
def apply_load(self, value, start, order, end=None):
    """
    This method adds up the loads given to a particular beam object.
    Parameters
    ==========
    value : Sympifyable
        The value inserted should have the units [Force/(Distance**(n+1)]
        where n is the order ...

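Beam loads of this kind superpose as singularity functions, `value * <x - start>**order`, which are zero to the left of `start`. A numeric sketch of evaluating such a load list for `order >= 0` (point loads and moments, `order < 0`, are distributions and are omitted here):

```python
def load_at(loads, x):
    """Evaluate the superposed load distribution w(x) for loads given as
    (value, start, order) triples with order >= 0, using singularity
    functions: value * <x - start>**order, zero for x < start."""
    total = 0.0
    for value, start, order in loads:
        if x >= start:
            total += value * (x - start) ** order
    return total
```
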
sympy | 1 | sympy/physics/continuum_mechanics/beam.py | Beam.apply_rotation_hinge | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_Beam.apply_rotation_hinge.txt
def apply_rotation_hinge(self, loc):
    """
    This method applies a rotation hinge at a single location on the beam.
    Parameters
    ----------
    loc : Sympifyable
        Location of point at which hinge is applied.
    Returns
    =======
    Symbol
        The un...

sympy | 2 | sympy/physics/continuum_mechanics/beam.py | Beam.apply_support | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_Beam.apply_support.txt
def apply_support(self, loc, type="fixed"):
    """
    This method applies support to a particular beam object and returns
    the symbol of the unknown reaction load(s).
    Parameters
    ==========
    loc : Sympifyable
        Location of point at which support is applied.
    ...

sympy | 3 | sympy/matrices/expressions/blockmatrix.py | BlockMatrix.schur | repository-level | non_external
relavent_test_path: /usr/src/app/target_test_cases/failed_tests_BlockMatrix.schur.txt
def schur(self, mat = 'A', generalized = False):
    """Return the Schur Complement of the 2x2 BlockMatrix
    Parameters
    ==========
    mat : String, optional
        The matrix with respect to which the
        Schur Complement is calculated. 'A' is
        used by default
    ...

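For a 2x2 block matrix [[A, B], [C, D]], the Schur complement with respect to A is D - C*A^{-1}*B, and with respect to D it is A - B*D^{-1}*C. A scalar-block sketch of that formula (real `BlockMatrix` blocks are matrices, so the reciprocals below become matrix inverses):

```python
def schur_complement(A, B, C, D, mat="A"):
    """Schur complement of [[A, B], [C, D]] with scalar blocks:
    M/A = D - C * A**-1 * B, or M/D = A - B * D**-1 * C."""
    if mat == "A":
        return D - C * (1 / A) * B
    if mat == "D":
        return A - B * (1 / D) * C
    raise ValueError("mat must be 'A' or 'D'")
```
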
sympy | 4 | sympy/physics/mechanics/body.py | def apply_force(self, force, point=None, reaction_body=None, reaction_point=None):
"""Add force to the body(s).
Explanation
===========
Applies the force on self or equal and opposite forces on
self and other body if both are given on the desired point on the bodies.
... | /usr/src/app/target_test_cases/failed_tests_Body.apply_force.txt | def apply_force(self, force, point=None, reaction_body=None, reaction_point=None):
"""Add force to the body(s).
Explanation
===========
Applies the force on self or equal and opposite forces on
self and other body if both are given on the desired point on the bodies.
... | Body.apply_force | repository-level | non_external |
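A sketch for the `Body.apply_force` row above. Note that `Body` is deprecated in recent SymPy in favor of `RigidBody`/`Particle`, but remains importable; with no point given, the force is applied at the body's mass center.

```python
from sympy import symbols
from sympy.physics.mechanics import Body

F = symbols('F')
parent = Body('parent')

# With no point given, the force acts at the body's mass center.
parent.apply_force(F * parent.frame.x)
print(parent.loads)
```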
sympy | 5 | sympy/physics/mechanics/body.py | def apply_torque(self, torque, reaction_body=None):
"""Add torque to the body(s).
Explanation
===========
Applies the torque on self or equal and opposite torques on
self and other body if both are given.
The torque applied on other body is taken opposite of self,
... | /usr/src/app/target_test_cases/failed_tests_Body.apply_torque.txt | def apply_torque(self, torque, reaction_body=None):
"""Add torque to the body(s).
Explanation
===========
Applies the torque on self or equal and opposite torques on
self and other body if both are given.
The torque applied on other body is taken opposite of self,
... | Body.apply_torque | repository-level | non_external |
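Likewise for `Body.apply_torque`: a torque is stored against the body's reference frame rather than a point. Again a sketch against the (deprecated but importable) `Body` class.

```python
from sympy import symbols
from sympy.physics.mechanics import Body

T = symbols('T')
b = Body('b')

# A torque about the body's z axis, recorded as (frame, vector).
b.apply_torque(T * b.frame.z)
print(b.loads)
```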
sympy | 6 | sympy/physics/continuum_mechanics/cable.py | def apply_load(self, order, load):
"""
This method adds load to the cable.
Parameters
==========
order : Integer
The order of the applied load.
- For point loads, order = -1
- For distributed load, order = 0
load : tuple... | /usr/src/app/target_test_cases/failed_tests_Cable.apply_load.txt | def apply_load(self, order, load):
"""
This method adds load to the cable.
Parameters
==========
order : Integer
The order of the applied load.
- For point loads, order = -1
- For distributed load, order = 0
load : tuple... | Cable.apply_load | repository-level | non_external |
sympy | 7 | sympy/vector/coordsysrect.py | def orient_new_body(self, name, angle1, angle2, angle3,
rotation_order, location=None,
vector_names=None, variable_names=None):
"""
Body orientation takes this coordinate system through three
successive simple rotations.
Body fixed rot... | /usr/src/app/target_test_cases/failed_tests_CoordSys3D.orient_new_body.txt | def orient_new_body(self, name, angle1, angle2, angle3,
rotation_order, location=None,
vector_names=None, variable_names=None):
"""
Body orientation takes this coordinate system through three
successive simple rotations.
Body fixed rot... | CoordSys3D.orient_new_body | repository-level | non_external |
sympy | 8 | sympy/stats/stochastic_process_types.py | def canonical_form(self) -> tTuple[tList[Basic], ImmutableMatrix]:
"""
Reorders the one-step transition matrix
so that recurrent states appear first and transient
states appear last. Other representations include inserting
transient states first and recurrent states last.
... | /usr/src/app/target_test_cases/failed_tests_DiscreteMarkovChain.canonical_form.txt | def canonical_form(self) -> tTuple[tList[Basic], ImmutableMatrix]:
"""
Reorders the one-step transition matrix
so that recurrent states appear first and transient
states appear last. Other representations include inserting
transient states first and recurrent states last.
... | DiscreteMarkovChain.canonical_form | repository-level | external |
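A sketch for `DiscreteMarkovChain.canonical_form`, matching the return type in the row's signature (`(states, matrix)`): recurrent states are listed first. In the two-state chain below, state 1 is absorbing (recurrent) and state 0 is transient, so the reordering puts state 1 first.

```python
from sympy import Matrix, S
from sympy.stats import DiscreteMarkovChain

# State 1 is absorbing (recurrent); state 0 is transient.
T = Matrix([[S.Half, S.Half],
            [0,      1     ]])
X = DiscreteMarkovChain('X', [0, 1], T)

states, P = X.canonical_form()
print(states)  # recurrent states listed first
print(P)
```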
sympy | 9 | sympy/stats/stochastic_process_types.py | def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]:
"""
Returns the list of communication classes that partition
the states of the markov chain.
A communication class is defined to be a set of states
such that every state in that set is reachabl... | /usr/src/app/target_test_cases/failed_tests_DiscreteMarkovChain.communication_classes.txt | def communication_classes(self) -> tList[tTuple[tList[Basic], Boolean, Integer]]:
"""
Returns the list of communication classes that partition
the states of the markov chain.
A communication class is defined to be a set of states
such that every state in that set is reachabl... | DiscreteMarkovChain.communication_classes | repository-level | external |
sympy | 10 | sympy/stats/stochastic_process_types.py | def decompose(self) -> tTuple[tList[Basic], ImmutableMatrix, ImmutableMatrix, ImmutableMatrix]:
"""
Decomposes the transition matrix into submatrices with
special properties.
The transition matrix can be decomposed into 4 submatrices:
- A - the submatrix from recurrent state... | /usr/src/app/target_test_cases/failed_tests_DiscreteMarkovChain.decompose.txt | def decompose(self) -> tTuple[tList[Basic], ImmutableMatrix, ImmutableMatrix, ImmutableMatrix]:
"""
Decomposes the transition matrix into submatrices with
special properties.
The transition matrix can be decomposed into 4 submatrices:
- A - the submatrix from recurrent state... | DiscreteMarkovChain.decompose | repository-level | external |
sympy | 11 | sympy/polys/matrices/domainmatrix.py | def inv_den(self, method=None):
"""
Return the inverse as a :class:`DomainMatrix` with denominator.
Returns
=======
(inv, den) : (:class:`DomainMatrix`, :class:`~.DomainElement`)
The inverse matrix and its denominator.
This is more or less equivalent to... | /usr/src/app/target_test_cases/failed_tests_DomainMatrix.inv_den.txt | def inv_den(self, method=None):
"""
Return the inverse as a :class:`DomainMatrix` with denominator.
Returns
=======
(inv, den) : (:class:`DomainMatrix`, :class:`~.DomainElement`)
The inverse matrix and its denominator.
This is more or less equivalent to... | DomainMatrix.inv_den | file-level | non_external |
sympy | 12 | sympy/polys/matrices/domainmatrix.py | def scc(self):
"""Compute the strongly connected components of a DomainMatrix
Explanation
===========
A square matrix can be considered as the adjacency matrix for a
directed graph where the row and column indices are the vertices. In
this graph if there is an edge ... | /usr/src/app/target_test_cases/failed_tests_DomainMatrix.scc.txt | def scc(self):
"""Compute the strongly connected components of a DomainMatrix
Explanation
===========
A square matrix can be considered as the adjacency matrix for a
directed graph where the row and column indices are the vertices. In
this graph if there is an edge ... | DomainMatrix.scc | repository-level | non_external |
sympy | 13 | sympy/polys/matrices/domainmatrix.py | def solve_den(self, b, method=None):
"""
Solve matrix equation $Ax = b$ without fractions in the ground domain.
Examples
========
Solve a matrix equation over the integers:
>>> from sympy import ZZ
>>> from sympy.polys.matrices import DM
>>> A = DM(... | /usr/src/app/target_test_cases/failed_tests_DomainMatrix.solve_den.txt | def solve_den(self, b, method=None):
"""
Solve matrix equation $Ax = b$ without fractions in the ground domain.
Examples
========
Solve a matrix equation over the integers:
>>> from sympy import ZZ
>>> from sympy.polys.matrices import DM
>>> A = DM(... | DomainMatrix.solve_den | repository-level | non_external |
sympy | 14 | sympy/polys/matrices/domainmatrix.py | def solve_den_charpoly(self, b, cp=None, check=True):
"""
Solve matrix equation $Ax = b$ using the characteristic polynomial.
This method solves the square matrix equation $Ax = b$ for $x$ using
the characteristic polynomial without any division or fractions in the
ground do... | /usr/src/app/target_test_cases/failed_tests_DomainMatrix.solve_den_charpoly.txt | def solve_den_charpoly(self, b, cp=None, check=True):
"""
Solve matrix equation $Ax = b$ using the characteristic polynomial.
This method solves the square matrix equation $Ax = b$ for $x$ using
the characteristic polynomial without any division or fractions in the
ground do... | DomainMatrix.solve_den_charpoly | repository-level | non_external |
sympy | 15 | sympy/physics/control/lti.py | def doit(self, cancel=False, expand=False, **hints):
"""
Returns the resultant transfer function or state space obtained by
feedback connection of transfer functions or state space objects.
Examples
========
>>> from sympy.abc import s
>>> from sympy import ... | /usr/src/app/target_test_cases/failed_tests_Feedback.doit.txt | def doit(self, cancel=False, expand=False, **hints):
"""
Returns the resultant transfer function or state space obtained by
feedback connection of transfer functions or state space objects.
Examples
========
>>> from sympy.abc import s
>>> from sympy import ... | Feedback.doit | repository-level | non_external |
sympy | 16 | sympy/physics/mechanics/actuator.py | def to_loads(self):
"""Loads required by the equations of motion method classes.
Explanation
===========
``KanesMethod`` requires a list of ``Point``-``Vector`` tuples to be
passed to the ``loads`` parameters of its ``kanes_equations`` method
when constructing the e... | /usr/src/app/target_test_cases/failed_tests_ForceActuator.to_loads.txt | def to_loads(self):
"""Loads required by the equations of motion method classes.
Explanation
===========
``KanesMethod`` requires a list of ``Point``-``Vector`` tuples to be
passed to the ``loads`` parameters of its ``kanes_equations`` method
when constructing the e... | ForceActuator.to_loads | file-level | non_external |