| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def export_compute_stats(self, dataset, export_path):
"""
Export method for saving the compute stats produced by filters
"""
keep_stats_in_res_ds = self.keep_stats_in_res_ds
self.keep_stats_in_res_ds = True
self._export_impl(dataset,
export_path,
... |
Export method for saving the compute stats produced by filters
| export_compute_stats | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |
def to_json(dataset, export_path, num_proc=1, **kwargs):
"""
Export method for json target files.
:param dataset: the dataset to export.
:param export_path: the path to store the exported dataset.
:param num_proc: the number of processes used to export the dataset.
:para... |
Export method for json target files.
:param dataset: the dataset to export.
:param export_path: the path to store the exported dataset.
:param num_proc: the number of processes used to export the dataset.
:param kwargs: extra arguments.
:return:
| to_json | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |
def _router():
"""
A router from different suffixes to corresponding export methods.
:return: A dict router.
"""
return {
'jsonl': Exporter.to_jsonl,
'json': Exporter.to_json,
'parquet': Exporter.to_parquet,
} |
A router from different suffixes to corresponding export methods.
:return: A dict router.
| _router | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |
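The `_router` entry above maps file suffixes to export methods. A minimal sketch of the same dispatch pattern follows; the writer functions here are hypothetical stand-ins, not the actual `Exporter.to_*` methods:

```python
# Sketch of a suffix-to-exporter router; the writer functions are
# hypothetical stand-ins for Exporter.to_jsonl / to_json / to_parquet.
def to_jsonl(ds, path):
    return f'jsonl -> {path}'

def to_json(ds, path):
    return f'json -> {path}'

def to_parquet(ds, path):
    return f'parquet -> {path}'

ROUTER = {
    'jsonl': to_jsonl,
    'json': to_json,
    'parquet': to_parquet,
}

def export(ds, export_path):
    # Dispatch on the file suffix, e.g. "out.jsonl" -> "jsonl".
    suffix = export_path.rsplit('.', 1)[-1]
    if suffix not in ROUTER:
        raise NotImplementedError(f'unsupported suffix: {suffix}')
    return ROUTER[suffix](ds, export_path)
```

Keeping the mapping in one dict makes adding a new target format a one-line change.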
def monitor_current_resources():
"""
Detect the resource utilization of the current environment/machine.
    All "util." values are ratios in the range [0.0, 1.0]; all "mem."
    values are in MB.
"""
resource_dict = dict()
# current time
resource_dict['timesta... |
Detect the resource utilization of the current environment/machine.
All "util." values are ratios in the range [0.0, 1.0]; all "mem."
values are in MB.
| monitor_current_resources | python | modelscope/data-juicer | data_juicer/core/monitor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/monitor.py | Apache-2.0 |
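The `monitor_current_resources` entry above builds a snapshot dict of the machine's utilization. A minimal stdlib-only sketch of that shape is below; in the real implementation the utilization and memory figures would come from a probing library such as psutil (e.g. `psutil.cpu_percent()`, `psutil.virtual_memory()`), so they are passed in here as assumed parameters:

```python
import os
import time

def monitor_current_resources(cpu_util=None, mem_total_mb=None):
    # Sketch of a resource snapshot dict. Utilization values are
    # ratios in [0.0, 1.0]; memory values are in MB, matching the
    # docstring above. cpu_util / mem_total_mb are stand-ins for
    # values a library like psutil would provide.
    resource_dict = {
        'timestamp': time.time(),
        'CPU count': os.cpu_count(),
        'CPU util.': cpu_util,       # e.g. psutil.cpu_percent() / 100.0
        'mem. total': mem_total_mb,  # e.g. psutil.virtual_memory().total >> 20
    }
    return resource_dict
```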
def analyze_resource_util_list(resource_util_list):
"""
Analyze the resource utilization for a given resource util list.
Compute {'max', 'min', 'avg'} of resource metrics for each dict item.
"""
res_list = []
for item in resource_util_list:
res_list.append(Mon... |
Analyze the resource utilization for a given resource util list.
Compute {'max', 'min', 'avg'} of resource metrics for each dict item.
| analyze_resource_util_list | python | modelscope/data-juicer | data_juicer/core/monitor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/monitor.py | Apache-2.0 |
def analyze_single_resource_util(resource_util_dict):
"""
Analyze the resource utilization for a single resource util dict.
Compute {'max', 'min', 'avg'} of each resource metrics.
"""
analysis_res = {}
record_list = {}
for record in resource_util_dict['resource']:... |
Analyze the resource utilization for a single resource util dict.
Compute {'max', 'min', 'avg'} of each resource metrics.
| analyze_single_resource_util | python | modelscope/data-juicer | data_juicer/core/monitor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/monitor.py | Apache-2.0 |
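The `analyze_single_resource_util` entry above computes `{'max', 'min', 'avg'}` per metric across sampled records. A minimal sketch of that aggregation, assuming records are flat dicts of numeric metrics:

```python
def analyze_single_resource_util(resource_util_dict):
    # Sketch: gather each numeric metric across sampled records,
    # then compute max/min/avg per metric as the docstring describes.
    record_list = {}
    for record in resource_util_dict['resource']:
        for key, value in record.items():
            if isinstance(value, (int, float)):
                record_list.setdefault(key, []).append(value)
    analysis_res = {}
    for key, values in record_list.items():
        analysis_res[key] = {
            'max': max(values),
            'min': min(values),
            'avg': sum(values) / len(values),
        }
    resource_util_dict['resource_analysis'] = analysis_res
    return resource_util_dict
```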
def monitor_func(func, args=None, sample_interval=0.5):
"""
Process the input dataset and probe related information for each OP in
the specified operator list.
For now, we support the following targets to probe:
"resource": resource utilization for each OP.
"speed": aver... |
Process the input dataset and probe related information for each OP in
the specified operator list.
For now, we support the following targets to probe:
"resource": resource utilization for each OP.
"speed": average processing speed for each OP.
The probe result is a li... | monitor_func | python | modelscope/data-juicer | data_juicer/core/monitor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/monitor.py | Apache-2.0 |
def __init__(self, work_dir, show_num=10):
"""
Initialization method.
:param work_dir: the work directory to store the comparison
results
:param show_num: the maximum number of samples to show in the
comparison result files.
"""
self.work_dir = os... |
Initialization method.
:param work_dir: the work directory to store the comparison
results
:param show_num: the maximum number of samples to show in the
comparison result files.
| __init__ | python | modelscope/data-juicer | data_juicer/core/tracer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/tracer.py | Apache-2.0 |
def trace_mapper(self, op_name: str, previous_ds: Dataset,
processed_ds: Dataset, text_key: str):
"""
Compare datasets before and after a Mapper.
This will mainly show the sample pairs that differ due to
modification by the Mapper.
:param op_name: the op n... |
Compare datasets before and after a Mapper.
This will mainly show the sample pairs that differ due to
modification by the Mapper.
:param op_name: the op name of mapper
:param previous_ds: dataset before the mapper process
:param processed_ds: dataset processed by the ... | trace_mapper | python | modelscope/data-juicer | data_juicer/core/tracer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/tracer.py | Apache-2.0 |
def trace_batch_mapper(self, op_name: str, previous_ds: Dataset,
processed_ds: Dataset, text_key: str):
"""
Compare datasets before and after a BatchMapper.
This will mainly show the new samples augmented by the BatchMapper
:param op_name: the op name of mapp... |
Compare datasets before and after a BatchMapper.
This will mainly show the new samples augmented by the BatchMapper
:param op_name: the op name of mapper
:param previous_ds: dataset before the mapper process
:param processed_ds: dataset processed by the mapper
:param t... | trace_batch_mapper | python | modelscope/data-juicer | data_juicer/core/tracer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/tracer.py | Apache-2.0 |
def trace_filter(self, op_name: str, previous_ds: Dataset,
processed_ds: Dataset):
"""
Compare datasets before and after a Filter.
This will mainly show the filtered samples by the Filter
:param op_name: the op name of filter
:param previous_ds: dataset bef... |
Compare datasets before and after a Filter.
This will mainly show the filtered samples by the Filter
:param op_name: the op name of filter
:param previous_ds: dataset before the filter process
:param processed_ds: dataset processed by the filter
:return:
| trace_filter | python | modelscope/data-juicer | data_juicer/core/tracer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/tracer.py | Apache-2.0 |
def trace_deduplicator(self, op_name: str, dup_pairs: list):
"""
Compare datasets before and after a Deduplicator.
This will mainly show the near-duplicate sample pairs extracted
by the Deduplicator. Different from the other two trace methods,
the trace process for deduplicator ... |
Compare datasets before and after a Deduplicator.
This will mainly show the near-duplicate sample pairs extracted
by the Deduplicator. Different from the other two trace methods,
the trace process for deduplicator is embedded into the process
method of deduplicator, but the oth... | trace_deduplicator | python | modelscope/data-juicer | data_juicer/core/tracer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/tracer.py | Apache-2.0 |
def validate_config(self, ds_config: Dict) -> None:
"""
Validate the configuration dictionary.
Args:
ds_config: Configuration dictionary to validate
Raises:
ValidationError: If validation fails
"""
# Check required fields
missing_fields =... |
Validate the configuration dictionary.
Args:
ds_config: Configuration dictionary to validate
Raises:
ValidationError: If validation fails
| validate_config | python | modelscope/data-juicer | data_juicer/core/data/config_validator.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/config_validator.py | Apache-2.0 |
def rewrite_cli_datapath(dataset_path, max_sample_num=None) -> List:
"""
Rewrite the dataset_path from the CLI into a proper dataset config format
compatible with the YAML config style, retrofitting CLI input
of local files and HuggingFace paths
:param dataset_path: a dataset file or a dataset dir or ... |
Rewrite the dataset_path from the CLI into a proper dataset config format
compatible with the YAML config style, retrofitting CLI input
of local files and HuggingFace paths
:param dataset_path: a dataset file or a dataset dir or a list of
them, e.g. `<w1> ds1.jsonl <w2> ds2_dir <w3> ds3_file.json... | rewrite_cli_datapath | python | modelscope/data-juicer | data_juicer/core/data/dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dataset_builder.py | Apache-2.0 |
def parse_cli_datapath(dataset_path) -> Tuple[List[str], List[float]]:
"""
Split every dataset path and its weight.
:param dataset_path: a dataset file or a dataset dir or a list of
them, e.g. `<w1> ds1.jsonl <w2> ds2_dir <w3> ds3_file.json`
:return: list of dataset path and list of weights
... |
Split every dataset path and its weight.
:param dataset_path: a dataset file or a dataset dir or a list of
them, e.g. `<w1> ds1.jsonl <w2> ds2_dir <w3> ds3_file.json`
:return: list of dataset path and list of weights
| parse_cli_datapath | python | modelscope/data-juicer | data_juicer/core/data/dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dataset_builder.py | Apache-2.0 |
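The `parse_cli_datapath` entry above splits a CLI string like `<w1> ds1.jsonl <w2> ds2_dir` into paths and weights. A minimal sketch under the assumption that a token counts as a weight when it parses as a number and as a path otherwise, with a default weight of 1.0 (the real implementation may handle edge cases such as numeric-looking paths differently):

```python
def parse_cli_datapath(dataset_path):
    # Sketch: split "0.5 ds1.jsonl ds2_dir 2 ds3.json" into parallel
    # lists of paths and weights. A numeric token is held as the
    # weight for the next path; paths without a preceding weight
    # default to 1.0.
    prefixes, weights = [], []
    pending_weight = None
    for token in dataset_path.split():
        try:
            pending_weight = float(token)
        except ValueError:
            prefixes.append(token)
            weights.append(pending_weight if pending_weight is not None else 1.0)
            pending_weight = None
    return prefixes, weights
```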
def validate(self, dataset: DJDataset) -> None:
"""
Validate dataset content
Args:
dataset: The dataset to validate
Raises:
DataValidationError: If validation fails
"""
if not isinstance(dataset, DJDataset):
raise DataValidationError(... |
Validate dataset content
Args:
dataset: The dataset to validate
Raises:
DataValidationError: If validation fails
| validate | python | modelscope/data-juicer | data_juicer/core/data/data_validator.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/data_validator.py | Apache-2.0 |
def validate(self, dataset: DJDataset) -> None:
"""Base validation for all conversation formats"""
super().validate(dataset)
for item in dataset.get(self.sample_size):
self.validate_conversation(item) | Base validation for all conversation formats | validate | python | modelscope/data-juicer | data_juicer/core/data/data_validator.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/data_validator.py | Apache-2.0 |
def __init__(self, config: Dict):
"""
Initialize validator with config
Args:
config: Dict containing:
- required_fields: List of field names that must exist
- field_types: Optional map of field names to expected types
- allow_missing: ... |
Initialize validator with config
Args:
config: Dict containing:
- required_fields: List of field names that must exist
- field_types: Optional map of field names to expected types
- allow_missing: Optional float for max ratio missing allowed
... | __init__ | python | modelscope/data-juicer | data_juicer/core/data/data_validator.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/data_validator.py | Apache-2.0 |
def validate(self, dataset: DJDataset) -> None:
"""
Validate dataset has required fields with correct types
Args:
dataset: NestedDataset or RayDataset to validate
Raises:
DataValidationError: If validation fails
"""
super().validate(dataset)
... |
Validate dataset has required fields with correct types
Args:
dataset: NestedDataset or RayDataset to validate
Raises:
DataValidationError: If validation fails
| validate | python | modelscope/data-juicer | data_juicer/core/data/data_validator.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/data_validator.py | Apache-2.0 |
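The validator entries above check that samples carry the required fields with the expected types, driven by the `required_fields` / `field_types` config keys. A minimal sketch of that check over plain sample dicts (function and exception names here mirror the docstrings but are a simplified stand-in, not the repo's class hierarchy):

```python
class DataValidationError(Exception):
    """Raised when dataset content fails validation."""

def validate_fields(samples, required_fields, field_types=None):
    # Sketch: fail if any sample lacks a required field, or a field
    # present in field_types has an unexpected type.
    field_types = field_types or {}
    for sample in samples:
        missing = [f for f in required_fields if f not in sample]
        if missing:
            raise DataValidationError(f'missing fields: {missing}')
        for field, expected in field_types.items():
            if field in sample and not isinstance(sample[field], expected):
                raise DataValidationError(
                    f'field {field!r} should be {expected.__name__}')
```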
def process(
self,
operators, # TODO: add type hint
*,
exporter=None,
checkpointer=None,
tracer=None) -> DJDataset:
"""process a list of operators on the dataset."""
pass | process a list of operators on the dataset. | process | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def wrap_func_with_nested_access(f):
"""
Before conducting actual function `f`, wrap its args and kargs into nested
ones.
:param f: function to be wrapped.
:return: wrapped function
"""
def wrap_nested_structure(*args, **kargs):
wrapped_args = [nested_obj_factory(arg) for arg in ar... |
Before conducting actual function `f`, wrap its args and kargs into nested
ones.
:param f: function to be wrapped.
:return: wrapped function
| wrap_func_with_nested_access | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def nested_obj_factory(obj):
"""
Use nested classes to wrap the input object.
:param obj: object to be nested.
:return: nested object
"""
if isinstance(obj, Dataset):
return NestedDataset(obj)
elif isinstance(obj, DatasetDict):
return NestedDatasetDict(obj)
elif isinstan... |
Use nested classes to wrap the input object.
:param obj: object to be nested.
:return: nested object
| nested_obj_factory | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def map(self, **args):
"""Override the map func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner."""
if 'function' not in args or args['function'] is None:
args['function'] = lambda x: nested_obj_factory(x)
else:
... | Override the map func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner. | map | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def get_column(self, column: str, k: Optional[int] = None) -> List[Any]:
"""Get column values from HuggingFace dataset.
Args:
column: Name of the column to retrieve
k: Optional number of rows to return. If None, returns all rows
Returns:
List of values from ... | Get column values from HuggingFace dataset.
Args:
column: Name of the column to retrieve
k: Optional number of rows to return. If None, returns all rows
Returns:
List of values from the specified column
Raises:
KeyError: If column doesn't exist
... | get_column | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def map(self, *args, **kargs):
"""Override the map func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner."""
args, kargs = self.update_args(args, kargs)
if cache_utils.CACHE_COMPRESS:
decompress(self, kargs['new_fi... | Override the map func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner. | map | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def filter(self, *args, **kargs):
"""Override the filter func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner."""
args, kargs = self.update_args(args, kargs, is_filter=True)
# For filter, it involves a map and a filter operati... | Override the filter func, which is called by most common operations,
such that the processed samples can be accessed in a nested manner. | filter | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def nested_query(root_obj: Union[NestedDatasetDict, NestedDataset,
NestedQueryDict], key):
"""
Find an item in a given object by first checking the flat layer, then
the nested layers.
:param root_obj: the object
:param key: the stored item to be queried, e.g., "... |
Find an item in a given object by first checking the flat layer, then
the nested layers.
:param root_obj: the object
:param key: the stored item to be queried, e.g., "meta" or
"meta.date"
:return:
| nested_query | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
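The `nested_query` entry above looks up a key like `"meta.date"` by trying the flat key first and then walking nested layers. A minimal dict-only sketch of that lookup order (the real function also handles the dataset wrapper classes):

```python
def nested_query(root_obj, key):
    # Sketch: prefer a literal flat key (e.g. a column actually named
    # "meta.date"), otherwise descend one nested level per dot.
    if key in root_obj:
        return root_obj[key]
    current = root_obj
    for part in key.split('.'):
        if isinstance(current, dict) and part in current:
            current = current[part]
        else:
            return None
    return current
```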
def add_same_content_to_new_column(sample,
new_column_name,
initial_value=None):
"""
A helper function to speed up the add_column function. Apply map on this
function in parallel instead of using add_column.
:param sample: a single sample... |
A helper function to speed up add_column function. Apply map on this
function in parallel instead of using add_column.
:param sample: a single sample to add this new column/field.
:param new_column_name: the name of this new column/field.
:param initial_value: the initial value of this new column/f... | add_same_content_to_new_column | python | modelscope/data-juicer | data_juicer/core/data/dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/dj_dataset.py | Apache-2.0 |
def matches(self, other: 'StrategyKey') -> bool:
"""
Check if this key matches another key with wildcard support
Supports Unix-style wildcards:
- '*' matches any string
- '?' matches any single character
- '[seq]' matches any character in seq
- '[!seq]' matches a... |
Check if this key matches another key with wildcard support
Supports Unix-style wildcards:
- '*' matches any string
- '?' matches any single character
- '[seq]' matches any character in seq
- '[!seq]' matches any character not in seq
| matches | python | modelscope/data-juicer | data_juicer/core/data/load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/load_strategy.py | Apache-2.0 |
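The `matches` entry above compares strategy keys with Unix-style wildcards. The stdlib `fnmatch` module supports exactly the patterns the docstring lists (`*`, `?`, `[seq]`, `[!seq]`), so a minimal sketch can lean on it:

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass(frozen=True)
class StrategyKey:
    executor_type: str
    data_type: str
    data_source: str

    def matches(self, other: 'StrategyKey') -> bool:
        # Sketch: every component of self may contain wildcards and
        # must match the corresponding concrete component of `other`.
        return (fnmatch(other.executor_type, self.executor_type)
                and fnmatch(other.data_type, self.data_type)
                and fnmatch(other.data_source, self.data_source))
```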
def get_strategy_class(
cls, executor_type: str, data_type: str,
data_source: str) -> Optional[Type[DataLoadStrategy]]:
"""
Retrieve the most specific matching strategy
Matching priority:
1. Exact match
2. Wildcard matches from most specific to most gener... |
Retrieve the most specific matching strategy
Matching priority:
1. Exact match
2. Wildcard matches from most specific to most general
| get_strategy_class | python | modelscope/data-juicer | data_juicer/core/data/load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/load_strategy.py | Apache-2.0 |
def specificity_score(key: StrategyKey) -> int:
"""
Calculate specificity score (lower is more specific)
Exact match: 0
One wildcard: 1
Two wildcards: 2
All wildcards: 3
"""
return sum(1 for p... |
Calculate specificity score (lower is more specific)
Exact match: 0
One wildcard: 1
Two wildcards: 2
All wildcards: 3
| specificity_score | python | modelscope/data-juicer | data_juicer/core/data/load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/load_strategy.py | Apache-2.0 |
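The `specificity_score` entry above ranks candidate keys so that exact matches (score 0) beat partially (1-2) or fully (3) wildcarded ones. A minimal sketch, treating a key as a plain `(executor_type, data_type, data_source)` tuple for illustration:

```python
def specificity_score(key):
    # Sketch: count wildcard components; lower is more specific, so
    # sorting candidates by this score puts exact matches first.
    return sum(1 for part in key if part == '*')
```

Sorting matched candidates with `sorted(candidates, key=specificity_score)` then picks the most specific strategy.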
def register(cls, executor_type: str, data_type: str, data_source: str):
"""
Decorator for registering data load strategies with wildcard support
:param executor_type: Type of executor (e.g., 'default', 'ray')
:param data_type: Type of data (e.g., 'local', 'remote')
:param data_... |
Decorator for registering data load strategies with wildcard support
:param executor_type: Type of executor (e.g., 'default', 'ray')
:param data_type: Type of data (e.g., 'local', 'remote')
:param data_source: Specific data source (e.g., 'arxiv', 's3')
:return: Decorator functi... | register | python | modelscope/data-juicer | data_juicer/core/data/load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/load_strategy.py | Apache-2.0 |
def decorator(strategy_class: Type[DataLoadStrategy]):
"""
Register the strategy class for the given key
:param strategy_class: Strategy class to register
:return: Original strategy class
"""
key = StrategyKey(executor_type, data_type, data_source... |
Register the strategy class for the given key
:param strategy_class: Strategy class to register
:return: Original strategy class
| decorator | python | modelscope/data-juicer | data_juicer/core/data/load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/load_strategy.py | Apache-2.0 |
def set_dataset_to_absolute_path(dataset, dataset_path, cfg):
"""
Set all paths in the input data to absolute paths.
Checks dataset_dir and project_dir for valid paths.
"""
path_keys = []
columns = dataset.columns()
for key in [cfg.video_key, cfg.image_key, cfg.audio_key]:
if key in co... |
Set all paths in the input data to absolute paths.
Checks dataset_dir and project_dir for valid paths.
| set_dataset_to_absolute_path | python | modelscope/data-juicer | data_juicer/core/data/ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/ray_dataset.py | Apache-2.0 |
def schema(self) -> Schema:
"""Get dataset schema.
Returns:
Schema: Dataset schema containing column names and types
"""
if self.data is None or self.data.columns() is None:
raise ValueError('Dataset is empty or not initialized')
# Get schema from Ray da... | Get dataset schema.
Returns:
Schema: Dataset schema containing column names and types
| schema | python | modelscope/data-juicer | data_juicer/core/data/ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/ray_dataset.py | Apache-2.0 |
def get_column(self, column: str, k: Optional[int] = None) -> List[Any]:
"""Get column values from Ray dataset.
Args:
column: Name of the column to retrieve
k: Optional number of rows to return. If None, returns all rows
Returns:
List of values from the spec... | Get column values from Ray dataset.
Args:
column: Name of the column to retrieve
k: Optional number of rows to return. If None, returns all rows
Returns:
List of values from the specified column
Raises:
KeyError: If column doesn't exist
... | get_column | python | modelscope/data-juicer | data_juicer/core/data/ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/ray_dataset.py | Apache-2.0 |
def map_hf_type_to_python(cls, feature):
"""Map HuggingFace feature type to Python type.
Recursively maps nested types (e.g., List[str], Dict[str, int]).
Examples:
Value('string') -> str
Sequence(Value('int32')) -> List[int]
Dict({'text': Value('string')}) -... | Map HuggingFace feature type to Python type.
Recursively maps nested types (e.g., List[str], Dict[str, int]).
Examples:
Value('string') -> str
Sequence(Value('int32')) -> List[int]
Dict({'text': Value('string')}) -> Dict[str, Any]
Args:
feature:... | map_hf_type_to_python | python | modelscope/data-juicer | data_juicer/core/data/schema.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/schema.py | Apache-2.0 |
def map_ray_type_to_python(cls, ray_type: pa.DataType) -> type:
"""Map Ray/Arrow data type to Python type.
Args:
ray_type: PyArrow DataType
Returns:
Corresponding Python type
"""
# String types
if pa.types.is_string(ray_type):
return... | Map Ray/Arrow data type to Python type.
Args:
ray_type: PyArrow DataType
Returns:
Corresponding Python type
| map_ray_type_to_python | python | modelscope/data-juicer | data_juicer/core/data/schema.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/schema.py | Apache-2.0 |
def __str__(self) -> str:
"""Return formatted string representation of schema"""
lines = ['Dataset Schema:']
lines.append('-' * 40)
for col in self.columns:
lines.append(f'{col}: {self.column_types[col]}')
return '\n'.join(lines) | Return formatted string representation of schema | __str__ | python | modelscope/data-juicer | data_juicer/core/data/schema.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/data/schema.py | Apache-2.0 |
def __init__(self, cfg: Optional[Namespace] = None):
"""
Initialization method.
:param cfg: optional jsonargparse Namespace.
"""
super().__init__(cfg)
self.executor_type = 'default'
self.work_dir = self.cfg.work_dir
self.tracer = None
self.ckpt_m... |
Initialization method.
:param cfg: optional jsonargparse Namespace.
| __init__ | python | modelscope/data-juicer | data_juicer/core/executor/default_executor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/executor/default_executor.py | Apache-2.0 |
def run(self,
dataset: Union[Dataset, NestedDataset] = None,
load_data_np: Optional[PositiveInt] = None,
skip_return=False):
"""
Running the dataset process pipeline.
:param dataset: a Dataset object to be executed.
:param load_data_np: number of work... |
Running the dataset process pipeline.
:param dataset: a Dataset object to be executed.
:param load_data_np: number of workers when loading the dataset.
:param skip_return: skip return for API called.
:return: processed dataset.
| run | python | modelscope/data-juicer | data_juicer/core/executor/default_executor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/executor/default_executor.py | Apache-2.0 |
def sample_data(self,
dataset_to_sample: Dataset = None,
load_data_np=None,
sample_ratio: float = 1.0,
sample_algo: str = 'uniform',
**kwargs):
"""
Sample a subset from the given dataset.
TODO add... |
Sample a subset from the given dataset.
TODO add support other than LocalExecutor
:param dataset_to_sample: Dataset to sample from. If None, will use
the formatter linked by the executor. Default is None.
:param load_data_np: number of workers when loading the dataset.
... | sample_data | python | modelscope/data-juicer | data_juicer/core/executor/default_executor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/executor/default_executor.py | Apache-2.0 |
def __init__(self, cfg: Optional[Namespace] = None):
"""
Initialization method.
:param cfg: optional config dict.
"""
super().__init__(cfg)
self.executor_type = 'ray'
self.work_dir = self.cfg.work_dir
self.adapter = Adapter(self.cfg)
# init ray
... |
Initialization method.
:param cfg: optional config dict.
| __init__ | python | modelscope/data-juicer | data_juicer/core/executor/ray_executor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/executor/ray_executor.py | Apache-2.0 |
def run(self,
load_data_np: Optional[PositiveInt] = None,
skip_return=False):
"""
Running the dataset process pipeline
:param load_data_np: number of workers when loading the dataset.
:param skip_return: skip return for API called.
:return: processed data... |
Running the dataset process pipeline
:param load_data_np: number of workers when loading the dataset.
:param skip_return: skip return for API called.
:return: processed dataset.
| run | python | modelscope/data-juicer | data_juicer/core/executor/ray_executor.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/executor/ray_executor.py | Apache-2.0 |
def __init__(self, job_cfg, watcher, *args, **kwargs):
"""
Initialize the hook for refining the recipe via K Sigma
:param job_cfg: the job configs
:param watcher: for watching the result
"""
super(RefineRecipeViaKSigmaHook,
self).__init__(job_cfg, watcher, ... |
Initialize the hook for refining the recipe via K Sigma
:param job_cfg: the job configs
:param watcher: for watching the result
| __init__ | python | modelscope/data-juicer | data_juicer/core/sandbox/hooks.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/hooks.py | Apache-2.0 |
def __init__(self, job_cfg, watcher, *args, **kwargs):
"""
Initialize the hook for refining the recipe via Model Feedback
:param job_cfg: the job configs
:param watcher: for watching the result
"""
super(RefineRecipeViaModelFeedbackHook,
self).__init__(job_... |
Initialize the hook for refining the recipe via Model Feedback
:param job_cfg: the job configs
:param watcher: for watching the result
| __init__ | python | modelscope/data-juicer | data_juicer/core/sandbox/hooks.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/hooks.py | Apache-2.0 |
async def run(self, run_type, run_obj=None, **kwargs):
"""
Conduct model-related execution tasks
for the specified run_type and run_obj
"""
watch_task = asyncio.create_task(
self.watch_run(run_type, run_obj, **kwargs))
if self.watcher is None:
... |
Conduct model-related execution tasks
for the specified run_type and run_obj
| run | python | modelscope/data-juicer | data_juicer/core/sandbox/model_executors.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/model_executors.py | Apache-2.0 |
async def watch_run(self, run_type, run_obj=None, **kwargs):
"""
watch the running process in an online manner, and
return the summarized results
"""
met_eof = False
while not met_eof:
if os.path.exists(self.watcher.model_exe_log_file):
asy... |
watch the running process in an online manner, and
return the summarized results
| watch_run | python | modelscope/data-juicer | data_juicer/core/sandbox/model_executors.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/model_executors.py | Apache-2.0 |
def __init__(self, sandbox_cfg):
"""
Initialize the watcher with a reference to an executor instance.
"""
# the web-ui and experiment versioning is based on WandB
project_name = sandbox_cfg.project_name
experiment_name = sandbox_cfg.experiment_name
hpo_config = s... |
Initialize the watcher with a reference to an executor instance.
| __init__ | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def watch(self, res, meta_name: str = ''):
"""
Flatten the result in dot structure and log it into WandB.
"""
if isinstance(res, dict):
for key, value in res.items():
# getting the left nodes of the given res dictionary.
if isinstance(value, di... |
Flatten the result in dot structure and log it into WandB.
| watch | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
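The `watch` entry above flattens a nested result dict into dot-structured keys before logging. A minimal sketch of that flattening, separated from any logging backend (the real method hands the flat pairs to WandB):

```python
def flatten_dot(res, prefix=''):
    # Sketch: leaf values keep a dotted path built from their nesting,
    # e.g. {'eval': {'acc': 0.9}} -> {'eval.acc': 0.9}, ready to be
    # logged to a tracker such as WandB.
    flat = {}
    for key, value in res.items():
        name = f'{prefix}.{key}' if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_dot(value, name))
        else:
            flat[name] = value
    return flat
```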
def setup_sweep(self, hpo_config: dict = None, project_name: str = None):
"""
Setup and start a new WandB sweep.
"""
if hpo_config is None:
hpo_config = self.sandbox_cfg.hpo_config
if project_name is None:
project_name = self.sandbox_cfg.project_name
... |
Setup and start a new WandB sweep.
| setup_sweep | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def watch_cfgs(self, cfgs: List[tuple] = None):
"""
Watch the configuration of the experiment.
"""
merged_cfgs = {}
if cfgs is not None:
for cfg, cfg_prefix in cfgs:
# skip empty configs
if cfg is None:
continue
... |
Watch the configuration of the experiment.
| watch_cfgs | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def __init__(
self,
cfg=None,
):
"""
Initialization method.
:param cfg: configuration of sandbox.
"""
self.cfg = cfg
self.watcher = SandBoxWatcher(self.cfg)
self.watcher.watch_cfgs([(cfg, 'sandbox')])
# jobs to probe, refine_recipe,... |
Initialization method.
:param cfg: configuration of sandbox.
| __init__ | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def run(self):
"""
Running the sandbox pipeline at once or in HPO style.
"""
if self.cfg.hpo_config is not None:
# execute_hpo_wandb contains running one_trail with HPO scheduler
self.execute_hpo_wandb()
else:
self.one_trial() |
Running the sandbox pipeline at once or in HPO style.
| run | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def one_trial(self):
"""
Running the sandbox pipeline at once.
Users can flexibly conduct some steps of the whole sandbox pipeline
according to their own need and configuration. The watcher will
automatically track the results in terms of data, model and specified
... |
Running the sandbox pipeline at once.
Users can flexibly conduct some steps of the whole sandbox pipeline
according to their own need and configuration. The watcher will
automatically track the results in terms of data, model and specified
evaluation metrics to the watche... | one_trial | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def execute_hpo_wandb(self):
"""
Running the sandbox pipeline in HPO style.
Users can flexibly conduct some steps of the whole sandbox pipeline
according to their own need and configuration. The watcher will
automatically track the results in terms of data, model and specifi... |
Running the sandbox pipeline in HPO style.
Users can flexibly conduct some steps of the whole sandbox pipeline
according to their own need and configuration. The watcher will
automatically track the results in terms of data, model and specified
evaluation metrics to the w... | execute_hpo_wandb | python | modelscope/data-juicer | data_juicer/core/sandbox/pipelines.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/sandbox/pipelines.py | Apache-2.0 |
def _tex_proj_loader(self, file_or_dir_path):
r"""function to load the tex files from a tar file or a gzip file. The
function will return a tuple containing a list of tex files and the
timestamp of the project.
@param file_or_dir_path: path to the tar file or the gzip file
@ret... | function to load the tex files from a tar file or a gzip file. The
function will return a tuple containing a list of tex files and the
timestamp of the project.
@param file_or_dir_path: path to the tar file or the gzip file
@return: tuple containing a list of tex files and the timestam... | _tex_proj_loader | python | modelscope/data-juicer | data_juicer/download/arxiv.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/arxiv.py | Apache-2.0 |
def _format_arxiv_id(self, arxiv_id):
r"""this function brings the raw arxiv-id into a format compliant with the
specification from arxiv. This is used to create the url to the arxiv
abstract page.
- Format prior to March 2007:
<archive>/YYMMNNN where N is a 3-digit number
... | this function brings the raw arxiv-id into a format compliant with the
specification from arxiv. This is used to create the url to the arxiv
abstract page.
- Format prior to March 2007:
<archive>/YYMMNNN where N is a 3-digit number
- Format after March 2007: <archive>/YYMM.N... | _format_arxiv_id | python | modelscope/data-juicer | data_juicer/download/arxiv.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/arxiv.py | Apache-2.0 |
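The `_format_arxiv_id` row above distinguishes the pre- and post-March-2007 arXiv id formats. A hypothetical, self-contained re-implementation of that normalization (only the format rules come from the docstring; the function body and name are assumptions):

```python
def format_arxiv_id(arxiv_id):
    """Normalize a raw arXiv id for use in an abstract-page URL.

    Old-style ids (before March 2007) look like 'math0312059' or already
    'math/0312059'; new-style ids look like '2106.01345'. This is a
    hypothetical sketch, not data-juicer's exact implementation.
    """
    # new-style ids already contain a dot; ids with a slash are already split
    if '.' in arxiv_id or '/' in arxiv_id:
        return arxiv_id
    # old-style ids fuse archive name and number: split at the first digit,
    # e.g. 'math0312059' -> 'math/0312059'
    for i, ch in enumerate(arxiv_id):
        if ch.isdigit():
            return f'{arxiv_id[:i]}/{arxiv_id[i:]}'
    raise ValueError(f'cannot parse arxiv id: {arxiv_id!r}')
```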
def _clean_tex_file(self, file_content, arg_macros, non_arg_macros):
r"""function takes a tex file as input and returns a cleaned version. The
cleaned version is a concatenation of the tex files with the
following modifications:
- remove all comments (i.e. all lines starting with %)
... | function takes a tex file as input and returns a cleaned version. The
cleaned version is a concatenation of the tex files with the
following modifications:
- remove all comments (i.e. all lines starting with %)
- remove everything before the first section-like header
- remove e... | _clean_tex_file | python | modelscope/data-juicer | data_juicer/download/arxiv.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/arxiv.py | Apache-2.0 |
def _build_non_arg_macros_dict(self, file_content):
r"""function takes the content of a tex file and returns a dictionary
that contains the definitions of all macros that do not use arguments.
The dictionary is of the form {macro_name: macro_value}.
@param file_content: the content of t... | function takes the content of a tex file and returns a dictionary
that contains the definitions of all macros that do not use arguments.
The dictionary is of the form {macro_name: macro_value}.
@param file_content: the content of the tex file as a string.
@return: dict
| _build_non_arg_macros_dict | python | modelscope/data-juicer | data_juicer/download/arxiv.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/arxiv.py | Apache-2.0 |
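The `_build_non_arg_macros_dict` row above describes collecting `{macro_name: macro_value}` pairs for parameterless macros. A minimal sketch of that idea using simple regexes (the patterns are assumptions and only handle unnested, single-brace definitions; the real implementation may differ):

```python
import re

def build_non_arg_macros_dict(file_content):
    """Collect LaTeX macro definitions that take no arguments.

    Returns a dict {macro_name: macro_value} for \\newcommand and \\def
    definitions without #1-style parameters.
    """
    macros = {}
    # \newcommand{\name}{value} definitions
    for name, value in re.findall(
            r'\\newcommand\{\\(\w+)\}\{([^}]*)\}', file_content):
        if '#' not in value:  # skip macros that take arguments
            macros['\\' + name] = value
    # \def\name{value} definitions
    for name, value in re.findall(
            r'\\def\\(\w+)\{([^}]*)\}', file_content):
        if '#' not in value:
            macros['\\' + name] = value
    return macros
```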
def get_wikipedia_urls(
language='en',
wikidumps_index_prefix='https://dumps.wikimedia.org',
dump_date: Optional[str] = None,
) -> List[str]:
"""
Retrieves all urls pointing to the latest Wikipedia dumps
Args:
language: Desired language of the Wikipedia dump.
wikidumps_index_pre... |
Retrieves all urls pointing to the latest Wikipedia dumps
Args:
language: Desired language of the Wikipedia dump.
wikidumps_index_prefix: The base url for all wikipedia dumps
dump_date: A string formatted as "YYYYMMDD" for the wikipedia dump to use.
If None, latest dump is us... | get_wikipedia_urls | python | modelscope/data-juicer | data_juicer/download/downloader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/downloader.py | Apache-2.0 |
def validate_snapshot_format(snapshot: Optional[str]) -> None:
"""
Validate snapshot format 'YYYY-WW'.
Args:
snapshot: Snapshot string in format 'YYYY-WW' or None
Raises:
ValueError: If format is invalid
"""
if snapshot is None:
return
# Check basic format with reg... |
Validate snapshot format 'YYYY-WW'.
Args:
snapshot: Snapshot string in format 'YYYY-WW' or None
Raises:
ValueError: If format is invalid
| validate_snapshot_format | python | modelscope/data-juicer | data_juicer/download/downloader.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/download/downloader.py | Apache-2.0 |
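The `validate_snapshot_format` docstring above specifies the 'YYYY-WW' contract. A self-contained sketch of such a validator (the regex and the 1–53 week bound are assumptions beyond what the docstring states):

```python
import re

def validate_snapshot_format(snapshot=None):
    """Validate a snapshot string in 'YYYY-WW' format.

    None is allowed (meaning: use the latest dump); raises ValueError
    when the format is invalid.
    """
    if snapshot is None:
        return
    # basic shape check: four-digit year, dash, two-digit week
    if not re.fullmatch(r'\d{4}-\d{2}', snapshot):
        raise ValueError(
            f"Invalid snapshot format: {snapshot!r}, expected 'YYYY-WW'")
    # ISO weeks run from 01 to at most 53
    week = int(snapshot.split('-')[1])
    if not 1 <= week <= 53:
        raise ValueError(f'Invalid week number in snapshot: {snapshot!r}')
```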
def __init__(self, dataset_path, suffixes=None, **kwargs):
"""
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
"""
super().__init__(
... |
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
| __init__ | python | modelscope/data-juicer | data_juicer/format/csv_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/csv_formatter.py | Apache-2.0 |
def __init__(self, length, feature_keys: List[str] = [], *args, **kwargs):
"""
Initialization method.
:param length: The empty dataset length.
:param feature_keys: feature key name list.
"""
self.length = length
self.feature_keys = feature_keys
if isinsta... |
Initialization method.
:param length: The empty dataset length.
:param feature_keys: feature key name list.
| __init__ | python | modelscope/data-juicer | data_juicer/format/empty_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/empty_formatter.py | Apache-2.0 |
def __init__(self, length, feature_keys: List[str] = [], *args, **kwargs):
"""
Initialization method.
:param length: The empty dataset length.
:param feature_keys: feature key name list.
"""
self.length = length
self.feature_keys = feature_keys
if isinsta... |
Initialization method.
:param length: The empty dataset length.
:param feature_keys: feature key name list.
| __init__ | python | modelscope/data-juicer | data_juicer/format/empty_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/empty_formatter.py | Apache-2.0 |
def __init__(
self,
dataset_path: str,
type: str,
suffixes: Union[str, List[str], None] = None,
text_keys: List[str] = None,
add_suffix=False,
**kwargs,
):
"""
Initialization method.
:param dataset_path: path to a dataset file or a dat... |
Initialization method.
:param dataset_path: path to a dataset file or a dataset
directory
:param type: a packaged dataset module type (json, csv, etc.)
:param suffixes: files with specified suffixes to be processed
:param text_keys: key names of field that stores sa... | __init__ | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
def load_dataset(self, num_proc: int = 1, global_cfg=None) -> Dataset:
"""
Load a dataset from dataset file or dataset directory, and unify its
format.
:param num_proc: number of processes when loading the dataset
:param global_cfg: global cfg used in consequent processes,
... |
Load a dataset from dataset file or dataset directory, and unify its
format.
:param num_proc: number of processes when loading the dataset
:param global_cfg: global cfg used in consequent processes,
:return: formatted dataset
| load_dataset | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
def __init__(self,
dataset_path: str,
text_keys: List[str] = None,
**kwargs):
"""
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param text_keys: key names of field that stores sample
text... |
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param text_keys: key names of field that stores sample
text.
:param kwargs: extra args
| __init__ | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
def load_dataset(self, num_proc: int = 1, global_cfg=None) -> Dataset:
"""
Load a dataset from HuggingFace, and unify its format.
:param num_proc: number of processes when loading the dataset
:param global_cfg: the global cfg used in consequent processes,
:return: formatted data... |
Load a dataset from HuggingFace, and unify its format.
:param num_proc: number of processes when loading the dataset
:param global_cfg: the global cfg used in consequent processes,
:return: formatted dataset
| load_dataset | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
def add_suffixes(datasets: DatasetDict, num_proc: int = 1) -> Dataset:
"""
Add suffix field to datasets.
:param datasets: a DatasetDict object
:param num_proc: number of processes to add suffixes
:return: datasets with suffix features.
"""
logger.info('Add suffix column for dataset')
fr... |
Add suffix field to datasets.
:param datasets: a DatasetDict object
:param num_proc: number of processes to add suffixes
:return: datasets with suffix features.
| add_suffixes | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
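The `add_suffixes` row above attaches a suffix feature to every sample of a per-suffix dataset dict. A datasets-free sketch of the same idea over plain dicts (an illustration, not the HuggingFace `DatasetDict` code):

```python
def add_suffix_column(samples_by_suffix):
    """Attach a 'suffix' field to every sample, grouped per file suffix.

    `samples_by_suffix` maps a suffix (e.g. 'jsonl') to a list of sample
    dicts; returns one merged list with the suffix recorded per sample.
    """
    merged = []
    for suffix, samples in samples_by_suffix.items():
        for sample in samples:
            row = dict(sample)          # copy, do not mutate the input
            row['suffix'] = suffix
            merged.append(row)
    return merged
```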
def unify_format(
dataset: Dataset,
text_keys: Union[List[str], str] = 'text',
num_proc: int = 1,
global_cfg=None,
) -> Dataset:
"""
Get a unified internal format, conducting the following modifications.
1. check keys of dataset
2. filter out those samples with empty or None text
:p... |
Get a unified internal format, conducting the following modifications.
1. check keys of dataset
2. filter out those samples with empty or None text
:param dataset: input dataset
:param text_keys: original text key(s) of dataset.
:param num_proc: number of processes for mapping
:param globa... | unify_format | python | modelscope/data-juicer | data_juicer/format/formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/formatter.py | Apache-2.0 |
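The `unify_format` docstring above lists two steps: check the text keys, then drop samples with empty or None text. A minimal sketch of those two steps over a list of dicts (the real function operates on a HuggingFace `Dataset`; this is only an illustration):

```python
def unify_format(samples, text_keys='text'):
    """Check text keys and drop samples whose text is empty or None."""
    if isinstance(text_keys, str):
        text_keys = [text_keys]
    # 1. check keys of dataset
    for key in text_keys:
        if samples and key not in samples[0]:
            raise ValueError(f'text key {key!r} not found in dataset')
    # 2. filter out those samples with empty or None text
    def non_empty(sample):
        return all(sample.get(k) not in (None, '') for k in text_keys)
    return [s for s in samples if non_empty(s)]
```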
def __init__(self, dataset_path, suffixes=None, **kwargs):
"""
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
"""
super().__init__(
... |
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
| __init__ | python | modelscope/data-juicer | data_juicer/format/json_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/json_formatter.py | Apache-2.0 |
def load_formatter(dataset_path,
text_keys=None,
suffixes=None,
add_suffix=False,
**kwargs) -> BaseFormatter:
"""
Load the appropriate formatter for different types of data formats.
:param dataset_path: Path to dataset file or data... |
Load the appropriate formatter for different types of data formats.
:param dataset_path: Path to dataset file or dataset directory
:param text_keys: key names of field that stores sample text.
Default: None
:param suffixes: the suffix of files that will be read.
Default: None
:para... | load_formatter | python | modelscope/data-juicer | data_juicer/format/load.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/load.py | Apache-2.0 |
def __init__(self, dataset_path, suffixes=None, **kwargs):
"""
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
"""
super().__init__(
... |
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param kwargs: extra args
| __init__ | python | modelscope/data-juicer | data_juicer/format/parquet_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/parquet_formatter.py | Apache-2.0 |
def extract_txt_from_docx(fn, tgt_path):
"""
Extract text from a docx file and save to target path.
:param fn: path to input docx file
:param tgt_path: path to save text file.
"""
doc = Document(fn)
text = [para.text for para in doc.paragraphs if para.text.strip()]
base_fn = os.path.base... |
Extract text from a docx file and save to target path.
:param fn: path to input docx file
:param tgt_path: path to save text file.
| extract_txt_from_docx | python | modelscope/data-juicer | data_juicer/format/text_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/text_formatter.py | Apache-2.0 |
def extract_txt_from_pdf(fn, tgt_path):
"""
Extract text from a pdf file and save to target path.
:param fn: path to input pdf file
:param tgt_path: path to save text file.
"""
with pdfplumber.open(fn) as pdf:
text = []
for page in pdf.pages:
# remove tables from eac... |
Extract text from a pdf file and save to target path.
:param fn: path to input pdf file
:param tgt_path: path to save text file.
| extract_txt_from_pdf | python | modelscope/data-juicer | data_juicer/format/text_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/text_formatter.py | Apache-2.0 |
def __init__(self,
dataset_path,
suffixes=None,
add_suffix=False,
**kwargs):
"""
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be pro... |
Initialization method.
:param dataset_path: a dataset file or a dataset directory
:param suffixes: files with specified suffixes to be processed
:param add_suffix: Whether to add file suffix to dataset meta
info
:param kwargs: extra args
| __init__ | python | modelscope/data-juicer | data_juicer/format/text_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/text_formatter.py | Apache-2.0 |
def load_dataset(self, num_proc: int = 1, global_cfg=None) -> Dataset:
"""
Load a dataset from local text-type files.
:param num_proc: number of processes when loading the dataset
:param global_cfg: the global cfg used in consequent processes,
:return: unified_format_dataset.
... |
Load a dataset from local text-type files.
:param num_proc: number of processes when loading the dataset
:param global_cfg: the global cfg used in consequent processes,
:return: unified_format_dataset.
| load_dataset | python | modelscope/data-juicer | data_juicer/format/text_formatter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/format/text_formatter.py | Apache-2.0 |
def catch_map_batches_exception(method, skip_op_error=False, op_name=None):
"""
For batched-map sample-level fault tolerance.
"""
if op_name is None:
op_name = method.__name__
@wraps(method)
@convert_arrow_to_python
def wrapper(samples, *args, **kwargs):
try:
re... |
For batched-map sample-level fault tolerance.
| catch_map_batches_exception | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def catch_map_single_exception(method,
return_sample=True,
skip_op_error=False,
op_name=None):
"""
For single-map sample-level fault tolerance.
The input sample is expected to have batch_size = 1.
"""
if op_name is... |
For single-map sample-level fault tolerance.
The input sample is expected to have batch_size = 1.
| catch_map_single_exception | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
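The `catch_map_single_exception` row above wraps a single-sample map function so one bad sample cannot crash the run. A simplified sketch of that fault-tolerance wrapper (logging is replaced by a collected error list; the real version uses loguru and Arrow conversion):

```python
from functools import wraps

def catch_map_single_exception(method, return_sample=True):
    """Wrap a single-sample map function for sample-level fault tolerance."""
    errors = []

    @wraps(method)
    def wrapper(sample, *args, **kwargs):
        try:
            return method(sample, *args, **kwargs)
        except Exception as e:
            # record the failure and keep going instead of raising
            errors.append((sample, repr(e)))
            return sample if return_sample else None

    wrapper.errors = errors  # expose collected failures for inspection
    return wrapper
```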
def __init__(self, *args, **kwargs):
"""
Base class of operators.
:param text_key: the key name of field that stores sample texts
to be processed.
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the ... |
Base class of operators.
:param text_key: the key name of field that stores sample texts
to be processed.
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the key name of field that stores sample audio list
... | __init__ | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def remove_extra_parameters(self, param_dict, keys=None):
"""
at the beginning of the init of the mapper op, call
self.remove_extra_parameters(locals())
to get the init parameter dict of the op for convenience
"""
if keys is None:
param_dict = {
... |
at the beginning of the init of the mapper op, call
self.remove_extra_parameters(locals())
to get the init parameter dict of the op for convenience
| remove_extra_parameters | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def add_parameters(self, init_parameter_dict, **extra_param_dict):
"""
add parameters for each sample, need to keep extra_param_dict
and init_parameter_dict unchanged.
"""
related_parameters = copy.deepcopy(init_parameter_dict)
related_parameters.update(extra_para... |
add parameters for each sample, need to keep extra_param_dict
and init_parameter_dict unchanged.
| add_parameters | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def __init__(self, *args, **kwargs):
"""
Base class that conducts data editing.
:param text_key: the key name of field that stores sample texts
to be processed.
:param image_key: the key name of field that stores sample image list
to be processed
:param a... |
Base class that conducts data editing.
:param text_key: the key name of field that stores sample texts
to be processed.
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the key name of field that stores samp... | __init__ | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def __init__(self, *args, **kwargs):
"""
Base class that removes specific info.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param au... |
Base class that removes specific info.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the key name of field that stores sampl... | __init__ | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def __init__(self, *args, **kwargs):
"""
Base class that conducts deduplication.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param a... |
Base class that conducts deduplication.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the key name of field that stores samp... | __init__ | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def __init__(self, *args, **kwargs):
"""
Base class that groups samples.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key:... |
Base class that groups samples.
:param text_key: the key name of field that stores sample texts
to be processed
:param image_key: the key name of field that stores sample image list
to be processed
:param audio_key: the key name of field that stores sample audio ... | __init__ | python | modelscope/data-juicer | data_juicer/ops/base_op.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/base_op.py | Apache-2.0 |
def load_ops(process_list):
"""
Load op list according to the process list from config file.
:param process_list: A process list. Each item is an op name and its
arguments.
:return: The op instance list.
"""
ops = []
new_process_list = []
for process in process_list:
op... |
Load op list according to the process list from config file.
:param process_list: A process list. Each item is an op name and its
arguments.
:return: The op instance list.
| load_ops | python | modelscope/data-juicer | data_juicer/ops/load.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/load.py | Apache-2.0 |
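The `load_ops` row above instantiates operators from a config-style process list where each item maps one op name to its arguments. A toy sketch against a hypothetical registry (the registry and decorator names are assumptions; data-juicer uses its own `OPERATORS` registry):

```python
OP_REGISTRY = {}  # hypothetical name -> class registry

def register_op(name):
    """Decorator that records an op class under a config name."""
    def deco(cls):
        OP_REGISTRY[name] = cls
        return cls
    return deco

def load_ops(process_list):
    """Instantiate ops from a process list like [{'op_name': {...args}}]."""
    ops = []
    for process in process_list:
        # each item holds exactly one op name and its argument dict
        (op_name, op_args), = process.items()
        ops.append(OP_REGISTRY[op_name](**(op_args or {})))
    return ops
```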
def register_event_handler(self, event_type: str, handler: Callable):
"""Register a handler for a specific event type.
Args:
event_type: Type of event to handle
handler: Callback function to handle the event
"""
if event_type not in self.event_handlers:
... | Register a handler for a specific event type.
Args:
event_type: Type of event to handle
handler: Callback function to handle the event
| register_event_handler | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def trigger_event(self, event_type: str, data: Dict):
"""Trigger an event and call all registered handlers.
Args:
event_type: Type of event to trigger
data: Event data to pass to handlers
"""
if event_type in self.event_handlers:
for handler in self.e... | Trigger an event and call all registered handlers.
Args:
event_type: Type of event to trigger
data: Event data to pass to handlers
| trigger_event | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
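The `register_event_handler` / `trigger_event` pair above implements a small publish-subscribe mechanism. A minimal sketch of that mixin behaviour (class name and details are assumptions, not data-juicer's actual mixin):

```python
class EventMixin:
    """Register handlers per event type and fire them on trigger."""

    def __init__(self):
        self.event_handlers = {}

    def register_event_handler(self, event_type, handler):
        # multiple handlers may subscribe to the same event type
        self.event_handlers.setdefault(event_type, []).append(handler)

    def trigger_event(self, event_type, data):
        # call every registered handler; unknown events are a no-op
        for handler in self.event_handlers.get(event_type, []):
            handler(data)
```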
def start_polling(self,
event_type: str,
poll_func: Callable,
interval: int = 60):
"""Start polling for a specific event type.
Args:
event_type: Type of event to poll for
poll_func: Function to call for polling
... | Start polling for a specific event type.
Args:
event_type: Type of event to poll for
poll_func: Function to call for polling
interval: Polling interval in seconds
| start_polling | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def stop_polling(self, event_type: str):
"""Stop polling for a specific event type.
Args:
event_type: Type of event to stop polling for
"""
if event_type in self.polling_threads:
self.stop_polling_flags[event_type] = True
self.polling_threads[event_ty... | Stop polling for a specific event type.
Args:
event_type: Type of event to stop polling for
| stop_polling | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def wait_for_completion(self,
condition_func: Callable[[], bool],
timeout: int = 3600,
poll_interval: int = 10,
error_message: str = 'Operation timed out'):
"""Wait for a condition to be met.
... | Wait for a condition to be met.
Args:
condition_func: Function that returns True when condition is met
timeout: Maximum time to wait in seconds
poll_interval: Polling interval in seconds
error_message: Error message to raise on timeout
Raises:
... | wait_for_completion | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
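The `wait_for_completion` docstring above describes polling a condition with a timeout. A self-contained sketch of that loop (using `time.monotonic` for the deadline is an implementation choice, not necessarily the original's):

```python
import time

def wait_for_completion(condition_func, timeout=3600, poll_interval=10,
                        error_message='Operation timed out'):
    """Poll condition_func until it returns True or the timeout expires.

    Raises TimeoutError with error_message on expiry.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition_func():
            return
        time.sleep(poll_interval)
    raise TimeoutError(error_message)
```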
def send_notification(self,
message: str,
notification_type: str = None,
**kwargs):
"""Send a notification message.
Args:
message: The message to send
notification_type: The type of notification to sen... | Send a notification message.
Args:
message: The message to send
notification_type: The type of notification to send.
Email, Slack, DingTalk.
If None, send nothing
**kwargs: Additional arguments to pass to the no... | send_notification | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def _send_email_notification(self, message: str, **kwargs):
"""Send an email notification.
Args:
message: The message to send
**kwargs: Additional parameters for email configuration
(recipients, subject, etc.)
Returns:
bool: Whether the... | Send an email notification.
Args:
message: The message to send
**kwargs: Additional parameters for email configuration
(recipients, subject, etc.)
Returns:
bool: Whether the email was sent successfully
| _send_email_notification | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def _send_slack_notification(self, message: str, **kwargs):
"""Send a Slack notification.
Args:
message: The message to send
**kwargs: Additional parameters for Slack configuration
(webhook_url, channel, etc.)
Returns:
bool: Whether the... | Send a Slack notification.
Args:
message: The message to send
**kwargs: Additional parameters for Slack configuration
(webhook_url, channel, etc.)
Returns:
bool: Whether the notification was sent successfully
| _send_slack_notification | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def _send_dingtalk_notification(self, message: str, **kwargs):
"""Send a DingTalk notification.
Args:
message: The message to send
**kwargs: Additional parameters for DingTalk configuration
(access_token, secret, etc.)
Returns:
bool: Wh... | Send a DingTalk notification.
Args:
message: The message to send
**kwargs: Additional parameters for DingTalk configuration
(access_token, secret, etc.)
Returns:
bool: Whether the notification was sent successfully
| _send_dingtalk_notification | python | modelscope/data-juicer | data_juicer/ops/mixins.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/mixins.py | Apache-2.0 |
def fuse_operators(ops, probe_res=None):
"""
Fuse the input ops list and return the fused ops list.
:param ops: the corresponding list of op objects.
:param probe_res: the probed speed for each OP from Monitor.
:return: a list of fused op objects.
"""
if probe_res is None:
probe_res... |
Fuse the input ops list and return the fused ops list.
:param ops: the corresponding list of op objects.
:param probe_res: the probed speed for each OP from Monitor.
:return: a list of fused op objects.
| fuse_operators | python | modelscope/data-juicer | data_juicer/ops/op_fusion.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/op_fusion.py | Apache-2.0 |
def fuse_filter_group(original_filter_group):
"""
Fuse a single filter group and return the fused filter group.
:param original_filter_group: the original filter group, including op
definitions and objects.
:return: the fused definitions and objects of the input filter group.
"""
fused_gr... |
Fuse a single filter group and return the fused filter group.
:param original_filter_group: the original filter group, including op
definitions and objects.
:return: the fused definitions and objects of the input filter group.
| fuse_filter_group | python | modelscope/data-juicer | data_juicer/ops/op_fusion.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/op_fusion.py | Apache-2.0 |
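The `fuse_filter_group` / `FusedFilter` rows above fuse several filters so their stats are computed in one pass over each sample. A simplified sketch of that fusion idea (the sub-filter interface shown here — `compute_stats`/`keep` — and the `'__dj__stats__'` field name are assumptions for illustration):

```python
class FusedFilter:
    """Run several filters' stats computation and keep-decisions together."""

    def __init__(self, name, fused_filters):
        self._name = name
        self.fused_filters = fused_filters

    def compute_stats(self, sample):
        # accumulate every sub-filter's stats into one shared stats field
        sample.setdefault('__dj__stats__', {})
        for f in self.fused_filters:
            sample['__dj__stats__'].update(f.compute_stats(sample))
        return sample

    def process(self, sample):
        # a sample survives only if every fused filter keeps it
        return all(f.keep(sample) for f in self.fused_filters)
```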
def __init__(self, name: str, fused_filters: List):
"""
Initialization method.
:param fused_filters: a list of filters to be fused.
"""
self._name = name
super().__init__()
self.fused_filters = fused_filters
# set accelerator to 'cuda' if there exists any... |
Initialization method.
:param fused_filters: a list of filters to be fused.
| __init__ | python | modelscope/data-juicer | data_juicer/ops/op_fusion.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/ops/op_fusion.py | Apache-2.0 |