| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def _get_cache_file_names(self,
cache_directory: str,
fingerprints: Union[str, List[str]] = None,
extension='.arrow'):
"""
Get all cache files in the dataset cache directory with fingerprints,
which end wi... |
Get all cache files in the dataset cache directory with fingerprints,
which end with the specified extension.
:param cache_directory: dataset cache directory.
:param fingerprints: fingerprints of cache files. String or List are
accepted. If `None`, we will find all cache files... | _get_cache_file_names | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def compress(self,
prev_ds: Dataset,
this_ds: Dataset = None,
num_proc: int = 1):
"""
Compress cache files with fingerprint in dataset cache directory.
:param prev_ds: previous dataset whose cache files need to be
compressed here.
... |
Compress cache files with fingerprint in dataset cache directory.
:param prev_ds: previous dataset whose cache files need to be
compressed here.
:param this_ds: Current dataset that is computed from the previous
dataset. There might be overlaps between cache files of th... | compress | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def decompress(self,
ds: Dataset,
fingerprints: Union[str, List[str]] = None,
num_proc: int = 1):
"""
Decompress compressed cache files with fingerprint in
dataset cache directory.
:param ds: input dataset.
:param fingerpr... |
Decompress compressed cache files with fingerprint in
dataset cache directory.
:param ds: input dataset.
:param fingerprints: fingerprints of cache files. String or List are
accepted. If `None`, we will find all cache files which start with
`cache-` and end wi... | decompress | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def format_cache_file_name(
self, cache_file_name: Optional[str]) -> Optional[str]:
"""
Use `*` to replace the sub rank in a cache file name.
:param cache_file_name: a cache file name.
"""
if not cache_file_name:
return cache_file_name
cache_file... |
Use `*` to replace the sub rank in a cache file name.
:param cache_file_name: a cache file name.
| format_cache_file_name | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def cleanup_cache_files(self, ds):
"""
Clean up all compressed cache files in the dataset cache directory,
which start with `cache-` and end with the compression format.
:param ds: input dataset.
"""
cache_directory = self._get_cache_directory(ds)
if cache_directory is N... |
Clean up all compressed cache files in the dataset cache directory,
which start with `cache-` and end with the compression format.
:param ds: input dataset.
| cleanup_cache_files | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def __enter__(self):
"""
Record the original cache compression method and turn it off.
"""
from . import cache_utils
self.original_cache_compress = cache_utils.CACHE_COMPRESS
cache_utils.CACHE_COMPRESS = None |
Record the original cache compression method and turn it off.
| __enter__ | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Restore the original cache compression method.
"""
from . import cache_utils
cache_utils.CACHE_COMPRESS = self.original_cache_compress |
Restore the original cache compression method.
| __exit__ | python | modelscope/data-juicer | data_juicer/utils/compress.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/compress.py | Apache-2.0 |
async def follow_read(
logfile_path: str,
skip_existing_content: bool = False,
) -> AsyncGenerator:
"""Read a file in an online and iterative manner.
Args:
logfile_path (`str`):
The file path to be read.
skip_existing_content (`bool`, defaults to `False`):
If True, re... | Read a file in an online and iterative manner.
Args:
logfile_path (`str`):
The file path to be read.
skip_existing_content (`bool`, defaults to `False`):
If True, read from the end, otherwise read from the beginning.
Returns:
One line string of the file content.
| follow_read | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
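The `tail -f`-style behavior described above can be sketched with a simple polling loop; the original's waiting strategy is not visible in the truncated source, and `read_first_n` is a hypothetical helper used only for the demo:

```python
import asyncio
import os
from typing import AsyncGenerator


async def follow_read(logfile_path: str,
                      skip_existing_content: bool = False,
                      poll_interval: float = 0.05) -> AsyncGenerator[str, None]:
    """Yield lines of a file as they appear, like `tail -f` (a sketch)."""
    with open(logfile_path) as f:
        if skip_existing_content:
            f.seek(0, os.SEEK_END)  # start at the end, ignore old lines
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                await asyncio.sleep(poll_interval)  # wait for new content


async def read_first_n(path: str, n: int):
    """Demo helper: collect the first n lines, then stop following."""
    lines = []
    async for line in follow_read(path):
        lines.append(line)
        if len(lines) == n:
            break
    return lines
```

Because the generator never terminates on its own, callers are expected to break out of the `async for` loop themselves, as `read_first_n` does.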
def find_files_with_suffix(
path: Union[str, Path],
suffixes: Union[str, List[str], None] = None) -> Dict[str, List[str]]:
"""
Traverse a path to find all files with the specified suffixes.
:param path: source path (str/Path)
:param suffixes: specified file suffixes, '.txt' or ['.... |
Traverse a path to find all files with the specified suffixes.
:param path: source path (str/Path)
:param suffixes: specified file suffixes, '.txt' or ['.txt', '.md']
etc
:return: list of all files with the specified suffixes
| find_files_with_suffix | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
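A minimal sketch of the traversal described above; the exact return layout of the original (keyed by suffix here) is an assumption:

```python
import os
from pathlib import Path
from typing import Dict, List, Union


def find_files_with_suffix(
        path: Union[str, Path],
        suffixes: Union[str, List[str], None] = None) -> Dict[str, List[str]]:
    """Traverse `path` and group matching files by their suffix.

    A single file is handled directly, a directory is walked recursively,
    and `suffixes=None` matches every file.
    """
    if isinstance(suffixes, str):
        suffixes = [suffixes]
    path = Path(path)
    candidates = [path] if path.is_file() else [
        Path(root) / name
        for root, _, names in os.walk(path) for name in names
    ]
    files: Dict[str, List[str]] = {}
    for fp in candidates:
        if suffixes is None or fp.suffix in suffixes:
            files.setdefault(fp.suffix, []).append(str(fp))
    return files
```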
def add_suffix_to_filename(filename, suffix):
"""
Add a suffix to the filename. Only regard the content after the last dot
as the file extension.
E.g.
1. abc.jpg + "_resized" --> abc_resized.jpg
2. edf.xyz.csv + "_processed" --> edf.xyz_processed.csv
3. /path/to/file.json + "_suf" --> /path/... |
Add a suffix to the filename. Only regard the content after the last dot
as the file extension.
E.g.
1. abc.jpg + "_resized" --> abc_resized.jpg
2. edf.xyz.csv + "_processed" --> edf.xyz_processed.csv
3. /path/to/file.json + "_suf" --> /path/to/file_suf.json
4. ds.tar.gz + "_whoops" --> ds.... | add_suffix_to_filename | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
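The last-dot rule above is simple enough to sketch directly; this is a re-implementation from the docstring examples, not the original source:

```python
import os


def add_suffix_to_filename(filename: str, suffix: str) -> str:
    """Insert `suffix` right before the file extension.

    Only the content after the *last* dot is treated as the extension,
    matching the examples listed in the docstring above.
    """
    dirname, basename = os.path.split(filename)
    stem, dot, ext = basename.rpartition('.')
    if dot:  # a dot exists: put the suffix before the last extension
        new_basename = f'{stem}{suffix}.{ext}'
    else:    # no extension at all: just append the suffix
        new_basename = f'{basename}{suffix}'
    return os.path.join(dirname, new_basename) if dirname else new_basename


print(add_suffix_to_filename('abc.jpg', '_resized'))   # abc_resized.jpg
print(add_suffix_to_filename('ds.tar.gz', '_whoops'))  # ds.tar_whoops.gz
```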
def create_directory_if_not_exists(directory_path):
"""
Create a directory if it does not exist; this function is process-safe.
:param directory_path: directory path to be created
"""
directory_path = os.path.abspath(directory_path)
try:
os.makedirs(directory_path, exist_ok=True)
exc... |
Create a directory if it does not exist; this function is process-safe.
:param directory_path: directory path to be created
| create_directory_if_not_exists | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
def transfer_data_dir(original_dir, op_name):
"""
Transfer the original multimodal data dir to a new dir to store the newly
generated multimodal data. The pattern is
`{original_dir}/__dj__produced_data__/{op_name}`
"""
new_dir = os.path.join(original_dir,
f'{Fields.mul... |
Transfer the original multimodal data dir to a new dir to store the newly
generated multimodal data. The pattern is
`{original_dir}/__dj__produced_data__/{op_name}`
| transfer_data_dir | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
def transfer_filename(original_filepath: Union[str, Path], op_name,
**op_kwargs):
"""
According to the op, hashing its parameters 'op_kwargs' together with
the process id and current time as the 'hash_val', map the
original_filepath to another unique file path. E.g.
... |
According to the op, hashing its parameters 'op_kwargs' together with
the process id and current time as the 'hash_val', map the
original_filepath to another unique file path. E.g.
1. abc.jpg -->
__dj__produced_data__/{op_name}/
abc__dj_hash_#{hash_... | transfer_filename | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
def copy_data(from_dir, to_dir, data_path):
"""
Copy data from from_dir/data_path to to_dir/data_path.
Return True if success.
"""
from_path = os.path.join(from_dir, data_path)
to_path = os.path.join(to_dir, data_path)
if not os.path.exists(from_path):
return False
parent... |
Copy data from from_dir/data_path to to_dir/data_path.
Return True if success.
| copy_data | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
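The copy behavior can be sketched with stdlib calls; creating the destination's parent directories is an assumption inferred from the truncated `parent...` line:

```python
import os
import shutil


def copy_data(from_dir: str, to_dir: str, data_path: str) -> bool:
    """Copy `from_dir/data_path` to `to_dir/data_path`.

    Returns False when the source is missing, creates the destination's
    parent directories if needed, and returns True on success.
    """
    from_path = os.path.join(from_dir, data_path)
    to_path = os.path.join(to_dir, data_path)
    if not os.path.exists(from_path):
        return False
    parent_dir = os.path.dirname(to_path)
    if parent_dir:
        os.makedirs(parent_dir, exist_ok=True)
    shutil.copyfile(from_path, to_path)
    return True
```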
def single_partition_write_with_filename(
df: pd.DataFrame,
output_file_dir: str,
keep_filename_column: bool = False,
output_type: str = 'jsonl',
) -> pd.Series:
"""
This function processes a DataFrame and writes it to disk
Args:
df: A DataFrame.
output_file_dir: The output ... |
This function processes a DataFrame and writes it to disk
Args:
df: A DataFrame.
output_file_dir: The output file path.
keep_filename_column: Whether to keep or drop the "filename" column, if it exists.
output_type="jsonl": The type of output file to write.
Returns:
... | single_partition_write_with_filename | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
def read_single_partition(
files,
filetype='jsonl',
add_filename=False,
input_meta: Union[str, dict] = None,
columns: Optional[List[str]] = None,
**kwargs,
) -> pd.DataFrame:
"""
This function reads a file with pandas, sorts the columns of the DataFrame
and adds a "filename" column.
... |
This function reads a file with pandas, sorts the columns of the DataFrame
and adds a "filename" column.
Args:
files: The path to the jsonl files to read.
add_filename: Whether to add a "filename" column to the DataFrame.
input_meta: A dictionary or a string formatted as a dictionary... | read_single_partition | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
def get_all_files_paths_under(root,
recurse_subdirectories=True,
followlinks=False):
"""
This function returns a list of all the files under a specified directory.
Args:
root: The path to the directory to read.
recurse_subdirectories:... |
This function returns a list of all the files under a specified directory.
Args:
root: The path to the directory to read.
recurse_subdirectories: Whether to recurse into subdirectories.
Please note that this can be slow for large
number ... | get_all_files_paths_under | python | modelscope/data-juicer | data_juicer/utils/file_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/file_utils.py | Apache-2.0 |
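A sketch of the directory listing described above; returning the paths sorted is an assumption added for deterministic output:

```python
import os


def get_all_files_paths_under(root,
                              recurse_subdirectories=True,
                              followlinks=False):
    """Return a sorted list of file paths under `root`.

    With `recurse_subdirectories=False` only the top level is listed;
    note the docstring's warning that walking huge trees can be slow.
    """
    if recurse_subdirectories:
        return sorted(
            os.path.join(r, f)
            for r, _, files in os.walk(root, followlinks=followlinks)
            for f in files)
    return sorted(
        entry.path for entry in os.scandir(root) if entry.is_file())
```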
def update_fingerprint(fingerprint, transform, transform_args):
"""
Combine various objects to update the fingerprint.
"""
hasher = Hasher()
hasher.update(fingerprint)
try:
hasher.update(transform)
except: # noqa various errors might raise here from pickle or dill
if _CAC... |
Combine various objects to update the fingerprint.
| update_fingerprint | python | modelscope/data-juicer | data_juicer/utils/fingerprint_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/fingerprint_utils.py | Apache-2.0 |
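The idea can be sketched with hashlib and pickle standing in for the `Hasher` used by the `datasets` library; the original handles unpicklable transforms specially (visible in the truncated `except` branch), whereas this sketch simply skips them:

```python
import hashlib
import pickle


def update_fingerprint(fingerprint, transform, transform_args):
    """Fold a transform and its arguments into an existing fingerprint.

    A sketch: hashlib/pickle stand in for datasets' Hasher, and the
    fallback for unpicklable transforms is simplified.
    """
    hasher = hashlib.sha256()
    hasher.update(str(fingerprint).encode())
    try:
        hasher.update(pickle.dumps(transform))
    except Exception:
        pass  # unpicklable transform: skipped in this sketch
    for key in sorted(transform_args):  # stable order -> stable hash
        hasher.update(key.encode())
        hasher.update(pickle.dumps(transform_args[key]))
    return hasher.hexdigest()[:16]
```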
def generate_fingerprint(ds, *args, **kwargs):
"""
Generate new fingerprints by using various kwargs of the dataset.
"""
if args:
args = list(args)
dataset_kwargs = {'shard': ds, 'function': args[0]}
else:
dataset_kwargs = {'shard': ds}
dataset_kwargs.update(kwargs)
... |
Generate new fingerprints by using various kwargs of the dataset.
| generate_fingerprint | python | modelscope/data-juicer | data_juicer/utils/fingerprint_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/fingerprint_utils.py | Apache-2.0 |
def get_toml_file_path():
"""Get the path to pyproject.toml file."""
try:
# First try to find it in the installed package data
with importlib.resources.path('py_data_juicer',
'pyproject.toml') as toml_path:
return toml_path
except (ImportErro... | Get the path to pyproject.toml file. | get_toml_file_path | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def get_uv_lock_path():
"""Get the path to uv.lock file."""
try:
# First try to find it in the installed package data
with importlib.resources.path('py_data_juicer',
'uv.lock') as lock_path:
return lock_path
except (ImportError, FileNotFoundE... | Get the path to uv.lock file. | get_uv_lock_path | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def get_package_name(cls, module_name: str) -> str:
"""Convert a module name to its corresponding package name.
Args:
module_name: The name of the module (e.g., 'cv2', 'PIL')
Returns:
str: The corresponding package name (e.g., 'opencv-python', 'Pillow')
"""
... | Convert a module name to its corresponding package name.
Args:
module_name: The name of the module (e.g., 'cv2', 'PIL')
Returns:
str: The corresponding package name (e.g., 'opencv-python', 'Pillow')
| get_package_name | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
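The module-to-package mapping can be sketched as a lookup table; the entries below are a few well-known mismatches chosen for illustration, and the real table in data-juicer may differ:

```python
# Hypothetical mapping of import names to pip package names.
_MODULE_TO_PACKAGE = {
    'cv2': 'opencv-python',
    'PIL': 'Pillow',
    'sklearn': 'scikit-learn',
    'yaml': 'PyYAML',
}


def get_package_name(module_name: str) -> str:
    """Map a module name to its pip package name, defaulting to itself."""
    base = module_name.split('.')[0]  # 'ray.data' -> 'ray'
    return _MODULE_TO_PACKAGE.get(base, base)


print(get_package_name('cv2'))       # opencv-python
print(get_package_name('ray.data'))  # ray
```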
def get_all_dependencies(cls):
"""
Get all dependencies, prioritizing uv.lock if available.
Falls back to pyproject.toml if uv.lock is not found or fails to parse.
Returns:
dict: A dictionary mapping module names to their full package specifications
e.g. {'n... |
Get all dependencies, prioritizing uv.lock if available.
Falls back to pyproject.toml if uv.lock is not found or fails to parse.
Returns:
dict: A dictionary mapping module names to their full package specifications
e.g. {'numpy': 'numpy>=1.26.4,<2.0.0', 'pandas': '... | get_all_dependencies | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def check_packages(cls, package_specs, pip_args=None):
"""
Check if packages are installed and install them if needed.
Args:
package_specs: A list of package specifications to check/install.
Can be package names or URLs (e.g., 'torch' or 'git+https://github... |
Check if packages are installed and install them if needed.
Args:
package_specs: A list of package specifications to check/install.
Can be package names or URLs (e.g., 'torch' or 'git+https://github.com/...')
pip_args: Optional list of additional argum... | check_packages | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def _is_package_installed(package_name):
"""Check if a package is installed by attempting to import it."""
if '@' in package_name:
package_name = package_name.split('@')[0]
if '[' in package_name:
package_name = package_name.split('[')[0]
i... | Check if a package is installed by attempting to import it. | _is_package_installed | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def __init__(self,
module_name: str,
package_name: str = None,
package_url: str = None,
auto_install: bool = True):
"""
Initialize the LazyLoader.
Args:
module_name: The name of the module to import (e.g., 'cv2', 'r... |
Initialize the LazyLoader.
Args:
module_name: The name of the module to import (e.g., 'cv2', 'ray.data', 'torchvision.models')
package_name: The name of the pip package to install (e.g., 'opencv-python', 'ray', 'torchvision')
If None, will use the base m... | __init__ | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def _install_package(cls, package_spec, pip_args=None):
"""Install a package using uv if available, otherwise pip."""
# Print trace information for package installation
logger.debug(f'Installing package: {package_spec}')
# Get last 3 frames of the stack trace
stack = traceback.ex... | Install a package using uv if available, otherwise pip. | _install_package | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def _load(self):
"""Load the module and handle any missing dependencies."""
logger.debug(f'Loading {self._module_name}...')
if self._module is not None:
return self._module
try:
# Try to import the module directly first
self._module = importlib.impor... | Load the module and handle any missing dependencies. | _load | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
def __getattr__(self, item):
"""Handle attribute access, including submodule imports."""
if self._module is None:
self._load()
# Try to get the attribute directly
try:
return getattr(self._module, item)
except AttributeError:
# If not found, t... | Handle attribute access, including submodule imports. | __getattr__ | python | modelscope/data-juicer | data_juicer/utils/lazy_loader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/lazy_loader.py | Apache-2.0 |
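The deferred-import pattern behind `_load` and `__getattr__` can be sketched in a few lines; the automatic installation of missing packages that data-juicer adds on top is omitted here:

```python
import importlib


class LazyLoader:
    """Minimal sketch of lazy module loading via __getattr__.

    The import is deferred until the first attribute access; unknown
    attributes are then retried as submodule imports.
    """

    def __init__(self, module_name: str):
        self._module_name = module_name
        self._module = None

    def _load(self):
        if self._module is None:
            self._module = importlib.import_module(self._module_name)
        return self._module

    def __getattr__(self, item):
        module = self._load()
        try:
            return getattr(module, item)
        except AttributeError:
            # Fall back to importing `module.item` as a submodule.
            submodule = importlib.import_module(
                f'{self._module_name}.{item}')
            setattr(module, item, submodule)
            return submodule


lazy_math = LazyLoader('math')  # nothing imported yet
print(lazy_math.sqrt(16.0))     # import happens on first access -> 4.0
```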
def get_caller_name(depth=0):
"""
Get caller name by depth.
:param depth: depth of caller context, use 0 for caller depth.
:return: module name of the caller
"""
# the following logic is a little bit faster than inspect.stack() logic
frame = inspect.currentframe().f_back
for _ in range(... |
Get caller name by depth.
:param depth: depth of caller context, use 0 for caller depth.
:return: module name of the caller
| get_caller_name | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
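The frame-walking trick mentioned above ("a little bit faster than inspect.stack()") can be sketched as follows; `library_helper` is a hypothetical function used only for the demo:

```python
import inspect


def get_caller_name(depth: int = 0) -> str:
    """Return the module __name__ `depth` frames above the direct caller.

    Walks frame objects directly, which avoids building the full
    inspect.stack() list.
    """
    frame = inspect.currentframe().f_back  # frame of whoever called us
    for _ in range(depth):
        frame = frame.f_back
    return frame.f_globals.get('__name__', '<unknown>')


def library_helper():
    # Reports the module of the code that invoked this helper.
    return get_caller_name()
```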
def __init__(self, level='INFO', caller_names=('datasets', 'logging')):
"""
Initialization method.
:param level: log level string of loguru. Default value: "INFO".
:param caller_names: caller names of redirected module.
Default value: ('datasets', 'logging').
"""... |
Initialization method.
:param level: log level string of loguru. Default value: "INFO".
:param caller_names: caller names of redirected module.
Default value: ('datasets', 'logging').
| __init__ | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def redirect_sys_output(log_level='INFO'):
"""
Redirect stdout/stderr to loguru with log level.
:param log_level: log level string of loguru. Default value: "INFO".
"""
redirect_logger = StreamToLoguru(level=log_level)
sys.stderr = redirect_logger
sys.stdout = redirect_logger |
Redirect stdout/stderr to loguru with log level.
:param log_level: log level string of loguru. Default value: "INFO".
| redirect_sys_output | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def get_log_file_path():
"""
Get the path to the location of the log file.
:return: a location of log file.
"""
for _, handler in logger._core.handlers.items():
if isinstance(handler._sink, FileSink):
return handler._sink._file.name |
Get the path to the location of the log file.
:return: a location of log file.
| get_log_file_path | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def setup_logger(save_dir,
distributed_rank=0,
filename='log.txt',
mode='o',
level='INFO',
redirect=True):
"""
Setup logger for training and testing.
:param save_dir: location to save log file
:param distributed_rank: ... |
Setup logger for training and testing.
:param save_dir: location to save log file
:param distributed_rank: device rank when multi-gpu environment
:param filename: log file name to save
:param mode: log file write mode, `append` or `override`. Default is `o`.
:param level: log severity level. I... | setup_logger | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def __enter__(self):
"""
Store the original standard output and redirect the standard output to
null when entering this range.
"""
self._original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w') |
Store the original standard output and redirect the standard output to
null when entering this range.
| __enter__ | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Close the redirected standard output and restore it when exiting from
this range.
"""
sys.stdout.close()
sys.stdout = self._original_stdout |
Close the redirected standard output and restore it when exiting from
this range.
| __exit__ | python | modelscope/data-juicer | data_juicer/utils/logger_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/logger_utils.py | Apache-2.0 |
def load_data_with_context(sample, context, loaded_data_keys, load_func):
"""
The unified loading function with contexts for multimodal data.
"""
data = {}
for loaded_data_key in loaded_data_keys:
if context and loaded_data_key in sample[Fields.context]:
# load from context
... |
The unified loading function with contexts for multimodal data.
| load_data_with_context | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def calculate_resized_dimensions(
original_size: Tuple[PositiveInt, PositiveInt],
target_size: Union[PositiveInt, Tuple[PositiveInt, PositiveInt]],
max_length: Optional[int] = None,
divisible: PositiveInt = 1) -> Tuple[int, int]:
"""
Resize dimensions based on specified constrain... |
Resize dimensions based on specified constraints.
:param original_size: The original dimensions as (height, width).
:param target_size: Desired target size; can be a single integer
(short edge) or a tuple (height, width).
:param max_length: Maximum allowed length for the longer edge.
:para... | calculate_resized_dimensions | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
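The resize constraints can be sketched from the docstring alone; the rounding details (flooring to a multiple of `divisible`) are assumptions, not taken from the truncated source:

```python
from typing import Optional, Tuple, Union


def calculate_resized_dimensions(
        original_size: Tuple[int, int],
        target_size: Union[int, Tuple[int, int]],
        max_length: Optional[int] = None,
        divisible: int = 1) -> Tuple[int, int]:
    """Compute resized (height, width) under the described constraints.

    An int target scales the short edge while keeping the aspect ratio,
    max_length caps the longer edge, and both dims are floored to a
    multiple of `divisible`.
    """
    height, width = original_size
    if isinstance(target_size, int):
        scale = target_size / min(height, width)
        new_h, new_w = height * scale, width * scale
    else:
        new_h, new_w = target_size
    if max_length is not None:
        longer = max(new_h, new_w)
        if longer > max_length:
            factor = max_length / longer
            new_h, new_w = new_h * factor, new_w * factor
    # snap both dimensions down to a multiple of `divisible`
    new_h = max(int(new_h) // divisible * divisible, divisible)
    new_w = max(int(new_w) // divisible * divisible, divisible)
    return new_h, new_w


print(calculate_resized_dimensions((480, 640), 240))  # (240, 320)
```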
def load_video(path, mode='r'):
"""
Load a video using its path.
:param path: the path to this video.
:param mode: the loading mode. It's "r" by default.
:return: a container object from the PyAv library, which contains all streams
in this video (video/audio/...) and can be used to decode these... |
Load a video using its path.
:param path: the path to this video.
:param mode: the loading mode. It's "r" by default.
:return: a container object from the PyAv library, which contains all streams
in this video (video/audio/...) and can be used to decode these streams
to frames.
| load_video | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def get_video_duration(input_video: Union[str, av.container.InputContainer],
video_stream_index: int = 0):
"""
Get the video's duration from the container
:param input_video: the container object from the PyAv library, which
contains all streams in this video (video/audio/...) an... |
Get the video's duration from the container
:param input_video: the container object from the PyAv library, which
contains all streams in this video (video/audio/...) and can be used
to decode these streams to frames.
:param video_stream_index: the video stream index to decode,
default... | get_video_duration | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def get_decoded_frames_from_video(
input_video: Union[str, av.container.InputContainer],
video_stream_index: int = 0):
"""
Get the video's frames from the container
:param input_video: the container object from the PyAv library, which
contains all streams in this video (video/audio/...)... |
Get the video's frames from the container
:param input_video: the container object from the PyAv library, which
contains all streams in this video (video/audio/...) and can be used
to decode these streams to frames.
:param video_stream_index: the video stream index to decode,
default s... | get_decoded_frames_from_video | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def cut_video_by_seconds(
input_video: Union[str, av.container.InputContainer],
output_video: str,
start_seconds: float,
end_seconds: Optional[float] = None,
):
"""
Cut a video into several segments by times in seconds.
:param input_video: the path to input video or the video container.
... |
Cut a video into several segments by times in seconds.
:param input_video: the path to input video or the video container.
:param output_video: the path to output video.
:param start_seconds: the start time in second.
:param end_seconds: the end time in second. If it's None, this function
w... | cut_video_by_seconds | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def process_each_frame(input_video: Union[str, av.container.InputContainer],
output_video: str, frame_func):
"""
Process each frame in video by replacing each frame by
`frame_func(frame)`.
:param input_video: the path to input video or the video container.
:param output_video... |
Process each frame in video by replacing each frame by
`frame_func(frame)`.
:param input_video: the path to input video or the video container.
:param output_video: the path to output video.
:param frame_func: a function which inputs a frame and outputs another
frame.
| process_each_frame | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def extract_key_frames_by_seconds(
input_video: Union[str, av.container.InputContainer],
duration: float = 1):
"""Extract key frames by seconds.
:param input_video: input video path or av.container.InputContainer.
:param duration: duration of each video split in seconds.
"""
... | Extract key frames by seconds.
:param input_video: input video path or av.container.InputContainer.
:param duration: duration of each video split in seconds.
| extract_key_frames_by_seconds | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def extract_key_frames(input_video: Union[str, av.container.InputContainer]):
"""
Extract key frames from the input video. If there are no key frames in the
video, return the first frame.
:param input_video: input video path or container.
:return: a list of key frames.
"""
# load the input vi... |
Extract key frames from the input video. If there are no key frames in the
video, return the first frame.
:param input_video: input video path or container.
:return: a list of key frames.
| extract_key_frames | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def get_key_frame_seconds(input_video: Union[str,
av.container.InputContainer]):
"""
Get seconds of key frames in the input video.
"""
key_frames = extract_key_frames(input_video)
ts = [float(f.pts * f.time_base) for f in key_frames]
ts.sort()
ret... |
Get seconds of key frames in the input video.
| get_key_frame_seconds | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def extract_video_frames_uniformly_by_seconds(
input_video: Union[str, av.container.InputContainer],
frame_num: PositiveInt,
duration: float = 1):
"""Extract video frames uniformly by seconds.
:param input_video: input video path or av.container.InputContainer.
:param frame_n... | Extract video frames uniformly by seconds.
:param input_video: input video path or av.container.InputContainer.
:param frame_num: the number of frames to be extracted uniformly from
each video split by duration.
:param duration: duration of each video split in seconds.
| extract_video_frames_uniformly_by_seconds | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def extract_video_frames_uniformly(
input_video: Union[str, av.container.InputContainer],
frame_num: PositiveInt,
):
"""
Extract a number of video frames uniformly within the video duration.
:param input_video: input video path or container.
:param frame_num: The number of frames to be extracte... |
Extract a number of video frames uniformly within the video duration.
:param input_video: input video path or container.
:param frame_num: The number of frames to be extracted. If it's 1, only the
middle frame will be extracted. If it's 2, only the first and the last
frames will be extract... | extract_video_frames_uniformly | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def extract_audio_from_video(
input_video: Union[str, av.container.InputContainer],
output_audio: Optional[str] = None,
start_seconds: int = 0,
end_seconds: Optional[int] = None,
stream_indexes: Union[int, List[int], None] = None,
):
"""
Extract audio data for the given video.
:param in... |
Extract audio data for the given video.
:param input_video: input video. Can be a video path or an
av.container.InputContainer.
:param output_audio: output audio path. If it's None, the audio data won't
be written to file. If stream_indexes is not None, it will output
multiple audi... | extract_audio_from_video | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
def timecode_string_to_seconds(timecode: str):
"""
Convert a timecode string to the float seconds.
:param timecode: the input timecode string. Must be in "HH:MM:SS.fff(fff)"
format.
"""
# parse the timecode string
dt = datetime.datetime.strptime(timecode, '%H:%M:%S.%f')
# compute the ... |
Convert a timecode string to the float seconds.
:param timecode: the input timecode string. Must be in "HH:MM:SS.fff(fff)"
format.
| timecode_string_to_seconds | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
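The conversion can be sketched directly from the truncated body, which parses the timecode with `datetime.strptime` and sums the components:

```python
import datetime


def timecode_string_to_seconds(timecode: str) -> float:
    """Convert an "HH:MM:SS.fff" timecode string to float seconds."""
    dt = datetime.datetime.strptime(timecode, '%H:%M:%S.%f')
    return (dt.hour * 3600 + dt.minute * 60 + dt.second
            + dt.microsecond / 1e6)


print(timecode_string_to_seconds('00:01:30.500'))  # 90.5
```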
def parse_string_to_roi(roi_string, roi_type='pixel'):
"""
Convert a roi string to four numbers x1, y1, x2, y2 standing for the region.
When the type is 'pixel', (x1, y1), (x2, y2) are the locations of pixels
in the top left corner and the bottom right corner respectively. If the
roi_type is 'ratio', th... |
Convert a roi string to four numbers x1, y1, x2, y2 standing for the region.
When the type is 'pixel', (x1, y1), (x2, y2) are the locations of pixels
in the top left corner and the bottom right corner respectively. If the
roi_type is 'ratio', the coordinates are normalized by widths and
heights.
:... | parse_string_to_roi | python | modelscope/data-juicer | data_juicer/utils/mm_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/mm_utils.py | Apache-2.0 |
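The accepted string syntax is cut off in the record above, so the exact format is an assumption here: this sketch takes a comma-separated `"x1, y1, x2, y2"` string, optionally wrapped in parentheses or brackets, and validates the `'ratio'` case against the [0, 1] range described in the docstring.

```python
import re


def parse_string_to_roi(roi_string, roi_type='pixel'):
    # Hypothetical format: "x1, y1, x2, y2", optionally wrapped in () or [].
    pattern = (r'^\s*[\(\[]?\s*([\d.]+)\s*,\s*([\d.]+)\s*,'
               r'\s*([\d.]+)\s*,\s*([\d.]+)\s*[\)\]]?\s*$')
    match = re.match(pattern, roi_string)
    if not match:
        return None
    x1, y1, x2, y2 = (float(g) for g in match.groups())
    if roi_type == 'ratio':
        # Ratio coordinates are normalized by width/height, so they
        # must fall inside [0, 1].
        if not all(0 <= v <= 1 for v in (x1, y1, x2, y2)):
            return None
    return x1, y1, x2, y2


print(parse_string_to_roi('10, 20, 110, 220'))  # (10.0, 20.0, 110.0, 220.0)
```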
def check_model(model_name, force=False):
"""
Check whether a model exists in DATA_JUICER_MODELS_CACHE.
If exists, return its full path.
Else, download it from cached models links.
:param model_name: a specified model name
:param force: Whether to download the model forcefully or not. Sometimes
... |
Check whether a model exists in DATA_JUICER_MODELS_CACHE.
If exists, return its full path.
Else, download it from cached models links.
:param model_name: a specified model name
:param force: Whether to download the model forcefully or not. Sometimes
the model file may be incomplete for some rea... | check_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def filter_arguments(func, args_dict):
"""
Filters and returns only the valid arguments for a given function
signature.
:param func: The function or callable to inspect.
:param args_dict: A dictionary of argument names and values to filter.
:return: A dictionary containing only the arguments th... |
Filters and returns only the valid arguments for a given function
signature.
:param func: The function or callable to inspect.
:param args_dict: A dictionary of argument names and values to filter.
:return: A dictionary containing only the arguments that match the
function's signat... | filter_arguments | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
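The `filter_arguments` docstring describes a signature-based filter. A sketch of that behavior with `inspect.signature` (the `**kwargs` pass-through is an assumption on my part, since the record's body is truncated) might be:

```python
import inspect


def filter_arguments(func, args_dict):
    # Keep only the keys that appear as named parameters of func.
    params = inspect.signature(func).parameters
    # If func accepts **kwargs, every argument is valid as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(args_dict)
    return {k: v for k, v in args_dict.items() if k in params}


def demo(a, b=1):
    return a + b


print(filter_arguments(demo, {'a': 2, 'b': 3, 'c': 4}))  # {'a': 2, 'b': 3}
```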
def __init__(self, model, endpoint=None, response_path=None, **kwargs):
"""
Initializes an instance of the APIModel class.
:param model: The name of the model to be used for making API
calls. This should correspond to a valid model identifier
recognized by the API server... |
Initializes an instance of the APIModel class.
:param model: The name of the model to be used for making API
calls. This should correspond to a valid model identifier
recognized by the API server.
:param endpoint: The URL endpoint for the API. If provided as a
... | __init__ | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def __call__(self, messages, **kwargs):
"""
Sends messages to the configured API model and returns the parsed
response content.
:param messages: A list of message dictionaries to send to the API.
Each message should have a 'role' (e.g., 'user',
... |
Sends messages to the configured API model and returns the parsed
response content.
:param messages: A list of message dictionaries to send to the API.
Each message should have a 'role' (e.g., 'user',
'assistant') and 'content' (the message tex... | __call__ | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def __init__(self, model, endpoint=None, response_path=None, **kwargs):
"""
Initializes an instance specialized for embedding APIs.
:param model: The model identifier for embedding API calls.
:param endpoint: API endpoint URL. Defaults to '/embeddings'.
:param response_path: Pat... |
Initializes an instance specialized for embedding APIs.
:param model: The model identifier for embedding API calls.
:param endpoint: API endpoint URL. Defaults to '/embeddings'.
:param response_path: Path to extract embeddings from response.
Defaults to 'data.0.embedding'.
... | __init__ | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def __call__(self, input, **kwargs):
"""
Processes input text and returns embeddings.
:param input: Input text or list of texts to embed.
:param kwargs: Additional API parameters.
:return: Extracted embeddings or empty list on error.
"""
body = {
'mod... |
Processes input text and returns embeddings.
:param input: Input text or list of texts to embed.
:param kwargs: Additional API parameters.
:return: Extracted embeddings or empty list on error.
| __call__ | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
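Both API wrappers above extract a value from the JSON response via a dotted `response_path` such as `'data.0.embedding'`. A small sketch of such an extractor — the function name is illustrative — walks the path, indexing lists with numeric parts and dicts with string parts:

```python
def extract_by_path(response, path='data.0.embedding'):
    # Walk a dotted path through nested dicts/lists;
    # numeric path parts index into lists.
    node = response
    for part in path.split('.'):
        if isinstance(node, list):
            node = node[int(part)]
        else:
            node = node[part]
    return node


resp = {'data': [{'embedding': [0.1, 0.2, 0.3]}]}
print(extract_by_path(resp))  # [0.1, 0.2, 0.3]
```

The same helper covers chat responses, e.g. `'choices.0.message.content'`.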
def prepare_api_model(model,
*,
endpoint=None,
response_path=None,
return_processor=False,
processor_config=None,
**model_params):
"""Creates a callable API model for interacting with ... | Creates a callable API model for interacting with OpenAI-compatible API.
The callable supports custom response parsing and works with proxy servers
that may be incompatible.
:param model: The name of the model to interact with.
:param endpoint: The URL endpoint for the API. If provided as a relative
... | prepare_api_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_diffusion_model(pretrained_model_name_or_path, diffusion_type,
**model_params):
"""
Prepare and load a Diffusion model from HuggingFace.
:param pretrained_model_name_or_path: input Diffusion model name
or local path to the model
:param diffusion_type: th... |
Prepare and load a Diffusion model from HuggingFace.
:param pretrained_model_name_or_path: input Diffusion model name
or local path to the model
:param diffusion_type: the use of the diffusion model. It can be
'image2image', 'text2image', 'inpainting'
:return: a Diffusion model.
| prepare_diffusion_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_fasttext_model(model_name='lid.176.bin', **model_params):
"""
Prepare and load a fasttext model.
:param model_name: input model name
:return: model instance.
"""
logger.info('Loading fasttext language identification model...')
try:
# Suppress FastText warnings by redirec... |
Prepare and load a fasttext model.
:param model_name: input model name
:return: model instance.
| prepare_fasttext_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_huggingface_model(pretrained_model_name_or_path,
*,
return_model=True,
return_pipe=False,
pipe_task='text-generation',
**model_params):
"""
Prepare an... |
Prepare and load a huggingface model.
:param pretrained_model_name_or_path: model name or path
:param return_model: return model or not
:param return_pipe: return pipeline or not
:param pipe_task: task for pipeline
:return: a tuple (model, processor) if `return_model` is True;
otherwis... | prepare_huggingface_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_kenlm_model(lang, name_pattern='{}.arpa.bin', **model_params):
"""
Prepare and load a kenlm model.
:param name_pattern: the model name pattern in formatting syntax.
:param lang: language to render model name
:return: model instance.
"""
model_params.pop('device', None)
model_name =... |
Prepare and load a kenlm model.
:param name_pattern: the model name pattern in formatting syntax.
:param lang: language to render model name
:return: model instance.
| prepare_kenlm_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_nltk_model(lang, name_pattern='punkt.{}.pickle', **model_params):
"""
Prepare and load a nltk punkt model with enhanced resource handling.
:param name_pattern: the model name pattern in formatting syntax
:param lang: language to render model name
:return: model instance.
"""
model_param... |
Prepare and load a nltk punkt model with enhanced resource handling.
:param name_pattern: the model name pattern in formatting syntax
:param lang: language to render model name
:return: model instance.
| prepare_nltk_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_nltk_pos_tagger(**model_params):
"""
Prepare and load NLTK's part-of-speech tagger with enhanced resource
handling.
:return: The POS tagger model
"""
model_params.pop('device', None)
# Ensure pickle security is patched
patch_nltk_pickle_security()
logger.info('Loadin... |
Prepare and load NLTK's part-of-speech tagger with enhanced resource
handling.
:return: The POS tagger model
| prepare_nltk_pos_tagger | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_recognizeAnything_model(
pretrained_model_name_or_path='ram_plus_swin_large_14m.pth',
input_size=384,
**model_params):
"""
Prepare and load recognizeAnything model.
:param pretrained_model_name_or_path: input model name or checkpoint path.
:param input_size: the input size of the model.
"""
logge... |
Prepare and load recognizeAnything model.
:param pretrained_model_name_or_path: input model name or checkpoint path.
:param input_size: the input size of the model.
| prepare_recognizeAnything_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_sentencepiece_model(model_path, **model_params):
"""
Prepare and load a sentencepiece model.
:param model_path: input model path
:return: model instance
"""
logger.info('Loading sentencepiece model...')
sentencepiece_model = sentencepiece.SentencePieceProcessor()
try:
... |
Prepare and load a sentencepiece model.
:param model_path: input model path
:return: model instance
| prepare_sentencepiece_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_sentencepiece_for_lang(lang,
name_pattern='{}.sp.model',
**model_params):
"""
Prepare and load a sentencepiece model for a specific language.
:param lang: language to render model name
:param name_pattern: pattern to render... |
Prepare and load a sentencepiece model for a specific language.
:param lang: language to render model name
:param name_pattern: pattern to render the model name
:return: model instance.
| prepare_sentencepiece_for_lang | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_simple_aesthetics_model(pretrained_model_name_or_path,
*,
return_model=True,
**model_params):
"""
Prepare and load a simple aesthetics model.
:param pretrained_model_name_or_path: model n... |
Prepare and load a simple aesthetics model.
:param pretrained_model_name_or_path: model name or path
:param return_model: return model or not
:return: a tuple (model, input processor) if `return_model` is True;
otherwise, only the processor is returned.
| prepare_simple_aesthetics_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_spacy_model(lang,
name_pattern='{}_core_web_md-3.7.0',
**model_params):
"""
Prepare a spacy model for a specific language.
:param lang: language of the spacy model. Should be one of ["zh",
"en"]
:return: corresponding spacy model
"""
i... |
Prepare a spacy model for a specific language.
:param lang: language of the spacy model. Should be one of ["zh",
"en"]
:return: corresponding spacy model
| prepare_spacy_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_video_blip_model(pretrained_model_name_or_path,
*,
return_model=True,
**model_params):
"""
Prepare and load a video-blip model with the corresponding processor.
:param pretrained_model_name_or_path: model nam... |
Prepare and load a video-blip model with the corresponding processor.
:param pretrained_model_name_or_path: model name or path
:param return_model: return model or not
:param trust_remote_code: passed to transformers
:return: a tuple (model, input processor) if `return_model` is True;
othe... | prepare_video_blip_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
interpolate_pos_encoding: bool = False,
) -> Uni... | Flatten `pixel_values` along the batch and time dimension,
pass it through the original vision model,
then unflatten it back.
:param pixel_values: a tensor of shape
(batch, channel, time, height, width)
:returns:
last_hidden_state: a tensor o... | forward | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_vllm_model(pretrained_model_name_or_path, **model_params):
"""
Prepare and load a vLLM model with the corresponding tokenizer.
:param pretrained_model_name_or_path: model name or path
:param model_params: LLM initialization parameters.
:return: a tuple of (model, tokenizer)
"... |
Prepare and load a vLLM model with the corresponding tokenizer.
:param pretrained_model_name_or_path: model name or path
:param model_params: LLM initialization parameters.
:return: a tuple of (model, tokenizer)
| prepare_vllm_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def prepare_embedding_model(model_path, **model_params):
"""
Prepare and load an embedding model using transformers.
:param model_path: Path to the embedding model.
:param model_params: Optional model parameters.
:return: Model with encode() returning embedding list.
"""
logger.info('Loadin... |
Prepare and load an embedding model using transformers.
:param model_path: Path to the embedding model.
:param model_params: Optional model parameters.
:return: Model with encode() returning embedding list.
| prepare_embedding_model | python | modelscope/data-juicer | data_juicer/utils/model_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/model_utils.py | Apache-2.0 |
def ensure_nltk_resource(resource_path, fallback_package=None):
"""Ensure a specific NLTK resource is available and accessible.
This function attempts to find and load a resource, and if it fails,
downloads the specified fallback package.
Args:
resource_path: The path to the resource to check
... | Ensure a specific NLTK resource is available and accessible.
This function attempts to find and load a resource, and if it fails,
downloads the specified fallback package.
Args:
resource_path: The path to the resource to check
fallback_package: The package to download if the resource isn't... | ensure_nltk_resource | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
def clean_nltk_cache(packages=None, complete_reset=False):
"""Clean NLTK model cache.
Args:
packages (list, optional): List of package names to clean.
If None, cleans all package caches.
complete_reset (bool, optional): If True, deletes all NLTK data.
Default is False.
... | Clean NLTK model cache.
Args:
packages (list, optional): List of package names to clean.
If None, cleans all package caches.
complete_reset (bool, optional): If True, deletes all NLTK data.
Default is False.
| clean_nltk_cache | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
def patch_nltk_pickle_security():
"""Patch NLTK's pickle security restrictions to allow loading models.
NLTK 3.9+ introduced strict pickle security that prevents loading some
models. This function patches NLTK to bypass those restrictions while
maintaining security.
This should be called once duri... | Patch NLTK's pickle security restrictions to allow loading models.
NLTK 3.9+ introduced strict pickle security that prevents loading some
models. This function patches NLTK to bypass those restrictions while
maintaining security.
This should be called once during initialization before any NLTK
fun... | patch_nltk_pickle_security | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
def unrestricted_pickle_load(file_obj):
"""Modified pickle loader that allows our model classes."""
# Handle both file-like objects and byte strings
if hasattr(file_obj, 'read') and hasattr(file_obj, 'readline'):
# It's already a file-like object
... | Modified pickle loader that allows our model classes. | unrestricted_pickle_load | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
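The loader above dispatches on whether it received a file-like object or raw bytes. This sketch reproduces only that input-handling part (not the NLTK security bypass itself); the helper name is illustrative:

```python
import io
import pickle


def load_pickle_flexible(obj):
    # Accept either a file-like object or raw bytes; wrap bytes in a
    # BytesIO so pickle.load always sees a readable stream.
    if hasattr(obj, 'read') and hasattr(obj, 'readline'):
        return pickle.load(obj)
    return pickle.load(io.BytesIO(obj))


data = pickle.dumps({'lang': 'en'})
print(load_pickle_flexible(data))            # {'lang': 'en'}
print(load_pickle_flexible(io.BytesIO(data)))  # {'lang': 'en'}
```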
def create_physical_resource_alias(source_path, alias_path):
"""Create a physical file alias for NLTK resources.
This function creates a hard link, symlink, or copy of a source resource
to a target alias path. This is useful for problematic resources that
might be requested with a path that doesn't mat... | Create a physical file alias for NLTK resources.
This function creates a hard link, symlink, or copy of a source resource
to a target alias path. This is useful for problematic resources that
might be requested with a path that doesn't match NLTK's structure.
Args:
source_path: The full path t... | create_physical_resource_alias | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
def setup_resource_aliases():
"""Create physical file aliases for common problematic NLTK resources.
This function creates aliases/copies of resources that have known
problematic paths to ensure they can be found regardless of how
they're requested.
"""
try:
import nltk
nltk_dat... | Create physical file aliases for common problematic NLTK resources.
This function creates aliases/copies of resources that have known
problematic paths to ensure they can be found regardless of how
they're requested.
| setup_resource_aliases | python | modelscope/data-juicer | data_juicer/utils/nltk_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/nltk_utils.py | Apache-2.0 |
def calculate_np(name,
mem_required,
cpu_required,
num_proc=None,
use_cuda=False):
"""Calculate the optimum number of processes for the given OP"""
eps = 1e-9 # about 1 byte
if use_cuda:
auto_num_proc = None
cuda_mem_avail... | Calculate the optimum number of processes for the given OP | calculate_np | python | modelscope/data-juicer | data_juicer/utils/process_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/process_utils.py | Apache-2.0 |
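The core idea of `calculate_np` — cap the process count by available memory and CPUs — can be sketched with a simplified signature. The real function queries system memory and GPU state itself; here those values are passed in explicitly for illustration:

```python
def calculate_np(num_proc, mem_required_gb, cpu_required,
                 mem_available_gb, cpu_count):
    # Hypothetical sketch: the process count is bounded by how many
    # copies of the OP fit in memory and on the available CPUs.
    eps = 1e-9  # fallback when an OP declares no requirement
    by_mem = int(mem_available_gb / (mem_required_gb or eps))
    by_cpu = int(cpu_count / (cpu_required or eps))
    auto = max(1, min(by_mem, by_cpu))
    # Never exceed what the user asked for explicitly.
    return min(num_proc, auto) if num_proc else auto


print(calculate_np(8, mem_required_gb=4, cpu_required=1,
                   mem_available_gb=16, cpu_count=8))  # 4
```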
def __init__(self, name: str):
"""
Initialization method.
:param name: a registry repo name
"""
self._name = name
self._modules = {} |
Initialization method.
:param name: a registry repo name
| __init__ | python | modelscope/data-juicer | data_juicer/utils/registry.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/registry.py | Apache-2.0 |
def _register_module(self, module_name=None, module_cls=None, force=False):
"""
Register module to registry.
:param module_name: module name
:param module_cls: module class object
:param force: Whether to override an existing class with the
same name. Default: False.... |
Register module to registry.
:param module_name: module name
:param module_cls: module class object
:param force: Whether to override an existing class with the
same name. Default: False.
| _register_module | python | modelscope/data-juicer | data_juicer/utils/registry.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/registry.py | Apache-2.0 |
def register_module(self,
module_name: str = None,
module_cls: type = None,
force=False):
"""
Register module class object to registry with the specified modulename.
:param module_name: module name
:param module_cls... |
Register module class object to registry with the specified modulename.
:param module_name: module name
:param module_cls: module class object
:param force: Whether to override an existing class with
the same name. Default: False.
Example:
>>> regis... | register_module | python | modelscope/data-juicer | data_juicer/utils/registry.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/registry.py | Apache-2.0 |
def _register(module_cls):
"""
Register module class object to registry.
:param module_cls: module class object
:return: module class object.
"""
self._register_module(module_name=module_name,
module_cls=module_cl... |
Register module class object to registry.
:param module_cls: module class object
:return: module class object.
| _register | python | modelscope/data-juicer | data_juicer/utils/registry.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/registry.py | Apache-2.0 |
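The `Registry` records above describe a name-to-class mapping with a decorator interface. A minimal sketch of that pattern (registry and class names here are illustrative) could be:

```python
class Registry:
    """Minimal sketch of a name -> class registry with a decorator API."""

    def __init__(self, name):
        self._name = name
        self._modules = {}

    def register_module(self, module_name=None, force=False):
        def _register(module_cls):
            # Fall back to the class name when no explicit name is given.
            name = module_name or module_cls.__name__
            if name in self._modules and not force:
                raise KeyError(f'{name} already registered in {self._name}')
            self._modules[name] = module_cls
            return module_cls
        return _register

    def get(self, module_name):
        return self._modules.get(module_name)


OPERATORS = Registry('ops')


@OPERATORS.register_module('text_length_filter')
class TextLengthFilter:
    pass


print(OPERATORS.get('text_length_filter') is TextLengthFilter)  # True
```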
def random_sample(dataset, weight=1.0, sample_number=0, seed=None):
"""
Randomly sample a subset from a dataset by weight or by number;
if the sample number is greater than 0, it is used instead of
the weight.
:param dataset: a HuggingFace dataset
:param weight: sample ratio of dataset
... |
Randomly sample a subset from a dataset by weight or by number;
if the sample number is greater than 0, it is used instead of
the weight.
:param dataset: a HuggingFace dataset
:param weight: sample ratio of dataset
:param sample_number: sample number of dataset
:param seed: random samp... | random_sample | python | modelscope/data-juicer | data_juicer/utils/sample.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/sample.py | Apache-2.0 |
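The weight-versus-number precedence described above can be sketched over a plain Python list (the real function operates on a HuggingFace dataset; the list version here is just for illustration):

```python
import random


def random_sample(dataset, weight=1.0, sample_number=0, seed=None):
    # A positive sample_number wins over the weight ratio;
    # otherwise take len(dataset) * weight items.
    if sample_number > 0:
        n = min(sample_number, len(dataset))
    else:
        n = int(len(dataset) * weight)
    rng = random.Random(seed)
    return rng.sample(dataset, n)


subset = random_sample(list(range(100)), weight=0.1, seed=42)
print(len(subset))  # 10
```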
def TEST_TAG(*tags):
"""Tags for test case.
Currently, `standalone`, `ray` are supported.
"""
def decorator(func):
setattr(func, '__test_tags__', tags)
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
# Save the original current_tag if it exists
... | Tags for test case.
Currently, `standalone`, `ray` are supported.
| TEST_TAG | python | modelscope/data-juicer | data_juicer/utils/unittest_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/unittest_utils.py | Apache-2.0 |
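The `TEST_TAG` record shows a decorator that attaches tags and saves/restores `current_tag` around the test body. A sketch of that mechanism — the exact run-per-tag loop is an assumption, since the record is truncated:

```python
import functools


def TEST_TAG(*tags):
    # Attach the tags to the test function and run it once per tag,
    # restoring any pre-existing current_tag afterwards.
    def decorator(func):
        func.__test_tags__ = tags

        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            original = getattr(self, 'current_tag', None)
            try:
                for tag in tags:
                    self.current_tag = tag
                    func(self, *args, **kwargs)
            finally:
                if original is not None:
                    self.current_tag = original
        return wrapper
    return decorator


seen = []


class DummyCase:
    @TEST_TAG('standalone', 'ray')
    def test_op(self):
        seen.append(self.current_tag)


DummyCase().test_op()
print(seen)  # ['standalone', 'ray']
```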
def generate_dataset(self, data) -> DJDataset:
"""Generate dataset for a specific executor.
Args:
type (str, optional): "standalone" or "ray".
Defaults to "standalone".
"""
current_tag = getattr(self, 'current_tag', 'standalone')
if current_tag.startswith... | Generate dataset for a specific executor.
Args:
type (str, optional): "standalone" or "ray".
Defaults to "standalone".
| generate_dataset | python | modelscope/data-juicer | data_juicer/utils/unittest_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/unittest_utils.py | Apache-2.0 |
def run_single_op(self, dataset: DJDataset, op, column_names):
"""Run operator in the specific executor."""
current_tag = getattr(self, 'current_tag', 'standalone')
dataset = dataset.process(op)
if current_tag.startswith('standalone'):
dataset = dataset.select_columns(column_... | Run operator in the specific executor. | run_single_op | python | modelscope/data-juicer | data_juicer/utils/unittest_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/unittest_utils.py | Apache-2.0 |
def get_diff_files(prefix_filter=['data_juicer/', 'tests/']):
"""Get git diff files in target dirs except the __init__.py files"""
changed_files = subprocess.check_output(
['git', 'diff', '--name-only', '--diff-filter=ACMRT', 'origin/main'],
universal_newlines=True,
).strip().split('\n')
... | Get git diff files in target dirs except the __init__.py files | get_diff_files | python | modelscope/data-juicer | data_juicer/utils/unittest_utils.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/utils/unittest_utils.py | Apache-2.0 |
def init_config(dataset_path: str, op_name: str, **op_args):
"""
Initialize Data-Juicer config with operator `op_name`.
Args:
dataset_path (`str`):
The input dataset path.
op_name: name of the operator.
op_args: arguments of the operator.
"""
with open(DJ_CONFIG_... |
Initialize Data-Juicer config with operator `op_name`.
Args:
dataset_path (`str`):
The input dataset path.
op_name: name of the operator.
op_args: arguments of the operator.
| init_config | python | modelscope/data-juicer | demos/api_service/utils.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/utils.py | Apache-2.0 |
def execute_analyzer(dj_config: dict):
"""
Execute data-juicer analyzer.
Args:
dj_config: configs of data-juicer
"""
logger.chat(Msg(name='system', content='Analyzing data...', role='system'))
url_path = '/data_juicer/core/Analyzer/run'
try:
res = call_data_juicer_api(url_pa... |
Execute data-juicer analyzer.
Args:
dj_config: configs of data-juicer
| execute_analyzer | python | modelscope/data-juicer | demos/api_service/utils.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/utils.py | Apache-2.0 |
def show_analyzed_results(analyzed_result_path: str,
require_min=True,
require_max=True):
"""
Show the analyzed results to the users and get the specified thresholds.
Args:
analyzed_result_path (`str`):
The analyzed result path.
""... |
Show the analyzed results to the users and get the specified thresholds.
Args:
analyzed_result_path (`str`):
The analyzed result path.
| show_analyzed_results | python | modelscope/data-juicer | demos/api_service/utils.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/utils.py | Apache-2.0 |
def execute_config(dj_config: Dict):
"""
Execute data-juicer data process.
Args:
dj_config: configs of data-juicer
"""
logger.chat(Msg(name='system', content='Processing data...',
role='system'))
url_path = '/data_juicer/core/Executor/run'
try:
res = call... |
Execute data-juicer data process.
Args:
dj_config: configs of data-juicer
| execute_config | python | modelscope/data-juicer | demos/api_service/utils.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/utils.py | Apache-2.0 |
def execute_alphabet_or_numeric_filter(dataset_path: str) -> ServiceResponse:
"""
Filter text with alphabet/numeric ratio out of a specific range.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path, 'alphanumeric_filter')
... |
Filter text with alphabet/numeric ratio out of a specific range.
Args:
dataset_path (`str`):
The input dataset path.
| execute_alphabet_or_numeric_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_text_length_filter(dataset_path: str) -> ServiceResponse:
"""
Filter text with length out of a specific range.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path, 'text_length_filter')
export_path = execute... |
Filter text with length out of a specific range.
Args:
dataset_path (`str`):
The input dataset path.
| execute_text_length_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_image_aesthetics_filter(dataset_path: str) -> ServiceResponse:
"""
Filter samples according to the aesthetic score of images.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'i... |
Filter samples according to the aesthetic score of images.
Args:
dataset_path (`str`):
The input dataset path.
| execute_image_aesthetics_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_video_aesthetics_filter(dataset_path: str) -> ServiceResponse:
"""
Filter samples according to the aesthetic scores of videos.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'... |
Filter samples according to the aesthetic scores of videos.
Args:
dataset_path (`str`):
The input dataset path.
| execute_video_aesthetics_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_image_nsfw_filter(dataset_path: str) -> ServiceResponse:
"""
Filter samples according to the nsfw scores of images.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'image_nsfw_... |
Filter samples according to the nsfw scores of images.
Args:
dataset_path (`str`):
The input dataset path.
| execute_image_nsfw_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_video_nsfw_filter(dataset_path: str) -> ServiceResponse:
"""
Filter samples according to the nsfw scores of videos.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'video_nsfw_... |
Filter samples according to the nsfw scores of videos.
Args:
dataset_path (`str`):
The input dataset path.
| execute_video_nsfw_filter | python | modelscope/data-juicer | demos/api_service/wrapped_filters.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_filters.py | Apache-2.0 |
def execute_image_caption_mapper(dataset_path: str) -> ServiceResponse:
"""
Produce captions for each image in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'image_captionin... |
Produce captions for each image in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
| execute_image_caption_mapper | python | modelscope/data-juicer | demos/api_service/wrapped_mappers.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_mappers.py | Apache-2.0 |