| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def execute_image_diffusion_mapper(dataset_path: str) -> ServiceResponse:
"""
Produce images according to each text in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'image_d... |
Produce images according to each text in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
| execute_image_diffusion_mapper | python | modelscope/data-juicer | demos/api_service/wrapped_mappers.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_mappers.py | Apache-2.0 |
def execute_image_face_blur_mapper(dataset_path: str) -> ServiceResponse:
"""
    Detect and blur face areas for each image in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path, 'image_face_blur_mapper')
r... |
Detect and blur face areas for each image in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
| execute_image_face_blur_mapper | python | modelscope/data-juicer | demos/api_service/wrapped_mappers.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_mappers.py | Apache-2.0 |
def execute_video_caption_mapper(dataset_path: str) -> ServiceResponse:
"""
Produce captions for each video in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path,
'video_captionin... |
Produce captions for each video in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
| execute_video_caption_mapper | python | modelscope/data-juicer | demos/api_service/wrapped_mappers.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_mappers.py | Apache-2.0 |
def execute_video_face_blur_mapper(dataset_path: str) -> ServiceResponse:
"""
Detect and blur face areas for each video in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
"""
try:
dj_config = init_config(dataset_path, 'video_face_blur_mapper')
re... |
Detect and blur face areas for each video in the dataset.
Args:
dataset_path (`str`):
The input dataset path.
| execute_video_face_blur_mapper | python | modelscope/data-juicer | demos/api_service/wrapped_mappers.py | https://github.com/modelscope/data-juicer/blob/master/demos/api_service/wrapped_mappers.py | Apache-2.0 |
def keep_by_lang(sample, lang):
"""
Keep samples with the specified language.
:param sample: a sample in dataset
:param lang: the specified language
:return: True to keep, False to discard
"""
if sample[Fields.stats][StatsKeys.lang] == lang:
return True
return False |
Keep samples with the specified language.
:param sample: a sample in dataset
:param lang: the specified language
:return: True to keep, False to discard
| keep_by_lang | python | modelscope/data-juicer | demos/tool_dataset_splitting_by_language/dataset_splitting_by_language.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_dataset_splitting_by_language/dataset_splitting_by_language.py | Apache-2.0 |
def main(src_dir, target_dir, text_key=None, suffixes=[], num_proc=1):
"""
Load dataset from the source directory, then apply language identification
using the operation filter called `LanguageIDScoreFilter`,
finally, split the dataset by language and save it.
:param src_dir: path to store the datas... |
Load dataset from the source directory, then apply language identification
using the operation filter called `LanguageIDScoreFilter`,
finally, split the dataset by language and save it.
:param src_dir: path to store the dataset.
    :param target_dir: path to store subset files (`jsonl` format)
:par... | main | python | modelscope/data-juicer | demos/tool_dataset_splitting_by_language/dataset_splitting_by_language.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_dataset_splitting_by_language/dataset_splitting_by_language.py | Apache-2.0 |
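The split-and-save step that `main` performs after language identification can be sketched without Spark or the `LanguageIDScoreFilter` op: group samples by their language tag and write one `jsonl` file per language (a simplification of the real pipeline):

```python
import json
import os
import tempfile
from collections import defaultdict

def split_by_language(samples, target_dir, lang_key='lang'):
    # Group samples by detected language, then write one jsonl file per
    # language into target_dir; returns the sorted language codes.
    groups = defaultdict(list)
    for sample in samples:
        groups[sample[lang_key]].append(sample)
    os.makedirs(target_dir, exist_ok=True)
    for lang, rows in groups.items():
        with open(os.path.join(target_dir, f'{lang}.jsonl'), 'w') as f:
            for row in rows:
                f.write(json.dumps(row) + '\n')
    return sorted(groups)

out_dir = tempfile.mkdtemp()
langs = split_by_language(
    [{'text': 'hi', 'lang': 'en'}, {'text': 'hola', 'lang': 'es'}], out_dir)
print(langs)  # -> ['en', 'es']
```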
def main(positive_datasets=None,
negative_datasets=None,
model='my_quality_model',
tokenizer=None,
text_key='text'):
"""
:param positive_datasets: the paths to the positive datasets. It could be a
string for a single dataset, e.g. 'pos.parquet', or a list of strings
... |
:param positive_datasets: the paths to the positive datasets. It could be a
string for a single dataset, e.g. 'pos.parquet', or a list of strings
for several datasets, e.g. '["pos1.parquet", "pos2.parquet"]'.
:param negative_datasets: the paths to the negative datasets. It could be a
s... | main | python | modelscope/data-juicer | demos/tool_quality_classifier/quality_classifier/eval.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_quality_classifier/quality_classifier/eval.py | Apache-2.0 |
def main(dataset_path,
result_path,
model='gpt3',
tokenizer=None,
keep_method='gpt3',
text_key='text',
overall_statistics=False):
"""
Apply quality classifier for your dataset.
:param dataset_path: the path to the dataset you want to predict for.
:pa... |
Apply quality classifier for your dataset.
:param dataset_path: the path to the dataset you want to predict for.
:param result_path: the path to store the predicted result dataset.
    :param model: quality classifier name to apply. It's "gpt3" by default. You
can use one of ["gpt3", "chinese", "co... | main | python | modelscope/data-juicer | demos/tool_quality_classifier/quality_classifier/predict.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_quality_classifier/quality_classifier/predict.py | Apache-2.0 |
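The `keep_method='gpt3'` option refers to a GPT-3-style Pareto sampling rule for deciding which scored documents to keep. A hedged sketch of that rule (the exact formulation and alpha used by data-juicer may differ):

```python
import random

def should_keep_pareto(score, alpha=9, rng=None):
    # GPT-3-style keep rule (sketch): keep a document when a Pareto-
    # distributed sample beats 1 - score, so high-scoring docs are kept
    # almost always while low-scoring docs still survive occasionally.
    rng = rng or random.Random(0)
    pareto_sample = rng.paretovariate(alpha) - 1  # shift support to [0, inf)
    return pareto_sample >= 1 - score

print(should_keep_pareto(1.0))  # a perfect score is always kept
```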
def init_spark():
"""
Initialize a spark session. You can set parameters such as memory, number
of partitions, timeout and so on here.
:return: A spark session instance.
"""
spark = (SparkSession.builder.config('spark.driver.memory', '64g').config(
'spark.executor.memory',
'64g')... |
Initialize a spark session. You can set parameters such as memory, number
of partitions, timeout and so on here.
:return: A spark session instance.
| init_spark | python | modelscope/data-juicer | demos/tool_quality_classifier/quality_classifier/qc_utils.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_quality_classifier/quality_classifier/qc_utils.py | Apache-2.0 |
def main(positive_datasets,
negative_datasets,
output_model_path='my_quality_model',
num_training_samples=0,
train_test_split_ratio=0.8,
tokenizer=None,
evaluation=True,
text_key='text'):
"""
Train a quality classifier using your own pos/neg dataset... |
Train a quality classifier using your own pos/neg datasets.
:param positive_datasets: the paths to the positive datasets. It could be a
string for a single dataset, e.g. 'pos.parquet', or a list of strings
for several datasets, e.g. '["pos1.parquet", "pos2.parquet"]'.
:param negative_datase... | main | python | modelscope/data-juicer | demos/tool_quality_classifier/quality_classifier/train.py | https://github.com/modelscope/data-juicer/blob/master/demos/tool_quality_classifier/quality_classifier/train.py | Apache-2.0 |
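The `train_test_split_ratio=0.8` parameter above controls how pos/neg samples are partitioned before training. Its role can be sketched with plain lists (the real tool performs the split on a Spark DataFrame):

```python
import random

def split_train_test(samples, ratio=0.8, seed=42):
    # Shuffle with a fixed seed, then split at ratio * len(samples),
    # mirroring what train_test_split_ratio selects above.
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_train_test(list(range(10)), ratio=0.8)
print(len(train), len(test))  # -> 8 2
```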
def get_lang_link(language, pagename, lang_code, non_zh_pages=[], current_version=""):
"""Generate language specific links for documentation pages"""
base_path = "../../" if current_version else "../"
def norm_pagename(pagename):
return os.path.normpath(pagename)
norm_non_zh_pages = set(map(no... | Generate language specific links for documentation pages | get_lang_link | python | modelscope/data-juicer | docs/sphinx_doc/source/conf.py | https://github.com/modelscope/data-juicer/blob/master/docs/sphinx_doc/source/conf.py | Apache-2.0 |
def find_zh_exclusions(app, config):
"""
Find Chinese translation files to exclude when building English documentation
"""
non_zh_pages = set()
zh_exclusions = []
for root, dirs, files in os.walk(app.srcdir):
for file in files:
# Check for files with English base names and c... |
Find Chinese translation files to exclude when building English documentation
| find_zh_exclusions | python | modelscope/data-juicer | docs/sphinx_doc/source/conf.py | https://github.com/modelscope/data-juicer/blob/master/docs/sphinx_doc/source/conf.py | Apache-2.0 |
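The exclusion walk in `find_zh_exclusions` pairs English pages with their Chinese translations by filename. Assuming the repo's `*_ZH.md` naming convention (e.g. `README.md` vs `README_ZH.md`), the core check can be sketched as:

```python
import os

def find_zh_exclusions(filenames):
    # Collect files named like "README_ZH.md" -- Chinese translations of
    # an English base page -- so English builds can exclude them.
    exclusions = []
    for name in filenames:
        base, ext = os.path.splitext(name)
        if ext == '.md' and base.endswith('_ZH'):
            exclusions.append(name)
    return exclusions

print(find_zh_exclusions(['README.md', 'README_ZH.md', 'conf.py']))
# -> ['README_ZH.md']
```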
def create_symlinks(source_dir):
"""Create symbolic links for markdown files in the documentation"""
# Use app.srcdir to get the current version of the document source directory
project_root = source_dir.parent.parent.parent
for md_file in project_root.rglob("*.md"):
exclude_paths = ["outputs",... | Create symbolic links for markdown files in the documentation | create_symlinks | python | modelscope/data-juicer | docs/sphinx_doc/source/conf.py | https://github.com/modelscope/data-juicer/blob/master/docs/sphinx_doc/source/conf.py | Apache-2.0 |
def skip(app, what, name, obj, would_skip, options):
"""Control which members to skip in documentation"""
if name == "__init__":
return False
return would_skip | Control which members to skip in documentation | skip | python | modelscope/data-juicer | docs/sphinx_doc/source/conf.py | https://github.com/modelscope/data-juicer/blob/master/docs/sphinx_doc/source/conf.py | Apache-2.0 |
def copy_sphinx_doc_to_build(app, config):
"""Copies the entire project directory to the Sphinx build directory."""
source_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
dest_dir = app.srcdir.parent.parent.parent / "docs/sphinx_doc"
try:
shutil.copytree(source_dir, dest_di... | Copies the entire project directory to the Sphinx build directory. | copy_sphinx_doc_to_build | python | modelscope/data-juicer | docs/sphinx_doc/source/conf.py | https://github.com/modelscope/data-juicer/blob/master/docs/sphinx_doc/source/conf.py | Apache-2.0 |
def test_get_default_cfg(self):
"""Test getting default configuration from config_all.yaml"""
# Get default config
cfg = get_default_cfg()
# Verify basic default values
self.assertIsInstance(cfg, Namespace)
# Test essential defaults
self.assertEq... | Test getting default configuration from config_all.yaml | test_get_default_cfg | python | modelscope/data-juicer | tests/config/test_config.py | https://github.com/modelscope/data-juicer/blob/master/tests/config/test_config.py | Apache-2.0 |
def test_cli_override(self):
"""Test that command line arguments correctly override YAML config values."""
out = StringIO()
with redirect_stdout(out):
# Test with multiple operators and nested parameters
cfg = init_configs(args=[
'--config', test_yaml_path... | Test that command line arguments correctly override YAML config values. | test_cli_override | python | modelscope/data-juicer | tests/config/test_config.py | https://github.com/modelscope/data-juicer/blob/master/tests/config/test_config.py | Apache-2.0 |
def test_cli_override_with_equals(self):
"""Test command line overrides using equals sign syntax."""
out = StringIO()
with redirect_stdout(out):
cfg = init_configs(args=[
'--config', test_yaml_path,
'--language_id_score_filter.lang=en',
... | Test command line overrides using equals sign syntax. | test_cli_override_with_equals | python | modelscope/data-juicer | tests/config/test_config.py | https://github.com/modelscope/data-juicer/blob/master/tests/config/test_config.py | Apache-2.0 |
def test_cli_override_invalid_value(self):
"""Test that invalid command line override values are properly caught."""
out = StringIO()
with redirect_stdout(out), redirect_stderr(out):
with self.assertRaises(SystemExit) as cm:
init_configs(args=[
'--... | Test that invalid command line override values are properly caught. | test_cli_override_invalid_value | python | modelscope/data-juicer | tests/config/test_config.py | https://github.com/modelscope/data-juicer/blob/master/tests/config/test_config.py | Apache-2.0 |
def test_builder_single_dataset_config(self):
"""Test handling of single dataset configuration"""
# Setup single dataset config
self.base_cfg.dataset = {
'configs': [
{
'type': 'local',
'path': 'test.jsonl'
}
... | Test handling of single dataset configuration | test_builder_single_dataset_config | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_multiple_dataset_config(self):
"""Test handling of multiple dataset configurations"""
# Setup multiple dataset config
self.base_cfg.dataset = {
'configs': [
{
'type': 'local',
'path': 'test1.jsonl'
... | Test handling of multiple dataset configurations | test_builder_multiple_dataset_config | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_none_dataset_config(self):
"""Test handling when both dataset and dataset_path are None"""
self.base_cfg.dataset = None
with self.assertRaises(ValueError) as context:
builder = DatasetBuilder(self.base_cfg, self.executor_type)
builder.load_datase... | Test handling when both dataset and dataset_path are None | test_builder_none_dataset_config | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_mixed_dataset_types(self):
"""Test validation of mixed dataset types"""
self.base_cfg.dataset = {
'configs': [
{
'type': 'local',
'path': 'test1.jsonl'
},
{
'type': 'r... | Test validation of mixed dataset types | test_builder_mixed_dataset_types | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_multiple_remote_datasets(self):
"""Test validation of multiple remote datasets"""
self.base_cfg.dataset = {
'configs': [
{
'type': 'remote',
'source': 'source1'
},
{
'... | Test validation of multiple remote datasets | test_builder_multiple_remote_datasets | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_empty_dataset_config(self):
"""Test handling of empty dataset configuration"""
self.base_cfg.dataset = {
'configs': []
}
with self.assertRaises(ConfigValidationError) as context:
DatasetBuilder(self.base_cfg, self.executor_type)
... | Test handling of empty dataset configuration | test_builder_empty_dataset_config | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_invalid_dataset_config_type(self):
"""Test handling of invalid dataset configuration type"""
self.base_cfg.dataset = "invalid_string_config"
with self.assertRaises(ConfigValidationError) as context:
DatasetBuilder(self.base_cfg, self.executor_type)
... | Test handling of invalid dataset configuration type | test_builder_invalid_dataset_config_type | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_mixed_dataset_configs(self):
"""Test handling of mixed dataset configurations"""
self.base_cfg.dataset = {
'configs': [
{
'type': 'local',
'path': 'test1.jsonl',
'weight': 1.0
},
... | Test handling of mixed dataset configurations | test_mixed_dataset_configs | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_builder_ray_config(self):
"""Test loading Ray configuration from YAML"""
test_config_file = os.path.join(WORK_DIR, 'test_data', 'test_config_ray.yaml')
cfg = init_configs(args=f'--config {test_config_file}'.split())
# Verify basic config
self.assertIsInstance(c... | Test loading Ray configuration from YAML | test_builder_ray_config | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_dataset_path_and_dataset_priority(self):
"""Test priority between dataset_path and dataset configuration"""
# Create test files
with tempfile.TemporaryDirectory() as tmp_dir:
# Create two different test files
path1 = os.path.join(tmp_dir, 'test1.jsonl')
... | Test priority between dataset_path and dataset configuration | test_dataset_path_and_dataset_priority | python | modelscope/data-juicer | tests/core/data/test_dataset_builder.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dataset_builder.py | Apache-2.0 |
def test_invalid_dataset_type(self):
"""Test validation with unsupported dataset type"""
config = {
'required_fields': ['text']
}
validator = RequiredFieldsValidator(config)
with self.assertRaises(DataValidationError) as exc:
validator.validate([1... | Test validation with unsupported dataset type | test_invalid_dataset_type | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_empty_required_fields(self):
"""Test validation with empty required fields"""
config = {
'required_fields': []
}
validator = RequiredFieldsValidator(config)
# Should pass as no fields are required
validator.validate(self.dataset) | Test validation with empty required fields | test_empty_required_fields | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
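The two tests above pin down `RequiredFieldsValidator`'s contract: reject unsupported dataset types, and pass trivially when no fields are required. A stand-in illustrating both behaviors (the real validator works on `NestedDataset`, not plain lists; this is an assumed simplification):

```python
class DataValidationError(Exception):
    pass

class RequiredFieldsValidator:
    # Sketch of the validator's contract: reject non-dataset inputs,
    # then require every configured field to be present in each sample.
    def __init__(self, config):
        self.required_fields = config.get('required_fields', [])

    def validate(self, dataset):
        if not isinstance(dataset, list):
            raise DataValidationError(
                f'unsupported dataset type: {type(dataset).__name__}')
        for field in self.required_fields:
            if any(field not in sample for sample in dataset):
                raise DataValidationError(f'missing required field: {field}')

# Empty requirements pass on any list-shaped dataset.
RequiredFieldsValidator({'required_fields': []}).validate([{'text': 'ok'}])

# A non-dataset input is rejected, as in test_invalid_dataset_type.
try:
    RequiredFieldsValidator({'required_fields': ['text']}).validate(123)
except DataValidationError as e:
    print('rejected:', e)
```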
def test_valid_conversation_with_system(self):
"""Test valid conversation with system message"""
data = {
'messages': [
{'role': 'system', 'content': 'Be helpful'},
{'role': 'user', 'content': 'Hello'},
{'role': 'assistant', 'content': 'Hi ther... | Test valid conversation with system message | test_valid_conversation_with_system | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_valid_conversation_without_system(self):
"""Test valid conversation without system message"""
data = {
'messages': [
{'role': 'user', 'content': 'Hello'},
{'role': 'assistant', 'content': 'Hi there'}
]
}
dataset = NestedDat... | Test valid conversation without system message | test_valid_conversation_without_system | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_missing_messages(self):
"""Test conversation with missing messages field"""
data = {'random': 'random_value'}
dataset = NestedDataset(datasets.Dataset.from_list([data]))
with self.assertRaises(DataValidationError) as exc:
self.validator.validate(dataset)
self... | Test conversation with missing messages field | test_missing_messages | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_invalid_messages_type(self):
"""Test conversation with non-array messages"""
data = {'messages': 'not an array'}
dataset = NestedDataset(datasets.Dataset.from_list([data]))
with self.assertRaises(DataValidationError) as exc:
self.validator.validate(dataset)
s... | Test conversation with non-array messages | test_invalid_messages_type | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_non_string_content(self):
"""Test message with non-string content"""
data = {
'messages': [
{'role': 'user', 'content': 123},
{'role': 'assistant', 'content': 'Hi'}
]
}
with self.assertRaises(Exception) as exc:
... | Test message with non-string content | test_non_string_content | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_valid_conversation_with_system(self):
"""Test valid conversation with system message"""
data = {
'system': 'Be helpful',
'instruction': 'Help me with this',
'query': 'How do I code?',
'response': 'Here is how...',
'history': [
... | Test valid conversation with system message | test_valid_conversation_with_system | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_valid_conversation_without_system(self):
"""Test valid conversation without optional fields"""
data = {
'instruction': 'Help me',
'query': 'How do I code?',
'response': 'Here is how...'
}
dataset = NestedDataset(datasets.Dataset.from_list([dat... | Test valid conversation without optional fields | test_valid_conversation_without_system | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_missing_required_field(self):
"""Test conversation with missing required field"""
data = {
'instruction': 'Help me',
'query': 'How do I code?'
# missing 'response'
}
dataset = NestedDataset(datasets.Dataset.from_list([data]))
with self... | Test conversation with missing required field | test_missing_required_field | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_invalid_field_type(self):
"""Test conversation with invalid field type"""
data = {
'instruction': 'Help me',
'query': 123, # should be string
'response': 'Here is how...'
}
dataset = NestedDataset(datasets.Dataset.from_list([data]))
w... | Test conversation with invalid field type | test_invalid_field_type | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_invalid_history_format(self):
"""Test conversation with invalid history format"""
data = {
'instruction': 'Help me',
'query': 'How do I code?',
'response': 'Here is how...',
'history': [
['Single element'] # should be [query, resp... | Test conversation with invalid history format | test_invalid_history_format | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_invalid_history_types(self):
"""Test conversation with invalid types in history"""
data = {
'instruction': 'Help me',
'query': 'How do I code?',
'response': 'Here is how...',
'history': [
[123, 'A1'] # query should be string
... | Test conversation with invalid types in history | test_invalid_history_types | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_invalid_system_type(self):
"""Test conversation with invalid system type"""
data = {
'system': 123, # should be string
'instruction': 'Help me',
'query': 'How do I code?',
'response': 'Here is how...'
}
dataset = NestedDataset(dat... | Test conversation with invalid system type | test_invalid_system_type | python | modelscope/data-juicer | tests/core/data/test_data_validator.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_data_validator.py | Apache-2.0 |
def test_get_column_preserve_order(self):
"""Test that column order is preserved"""
texts = self.dataset.get_column('text')
self.assertEqual(texts[0], 'Hello')
self.assertEqual(texts[1], 'World')
self.assertEqual(texts[2], 'Test')
# Test with k
texts = se... | Test that column order is preserved | test_get_column_preserve_order | python | modelscope/data-juicer | tests/core/data/test_dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dj_dataset.py | Apache-2.0 |
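The `get_column` contract this test checks is small enough to sketch directly: values come back in row order, optionally truncated to the first `k` (a stand-in for the `NestedDataset`/`RayDataset` method, not the real implementation):

```python
class MiniDataset:
    # Sketch of get_column: return one column's values in row order,
    # truncated to the first k when k is given.
    def __init__(self, rows):
        self.rows = rows

    def get_column(self, name, k=None):
        values = [row[name] for row in self.rows]
        return values if k is None else values[:k]

ds = MiniDataset([{'text': 'Hello'}, {'text': 'World'}, {'text': 'Test'}])
print(ds.get_column('text'))     # -> ['Hello', 'World', 'Test']
print(ds.get_column('text', 2))  # -> ['Hello', 'World']
```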
def test_schema_multiple_datasets(self):
"""Test schema consistency across multiple datasets"""
data1 = [{'text': 'hello', 'score': 1}]
data2 = [{'text': 'world', 'score': 2}]
dataset1 = NestedDataset(Dataset.from_list(data1))
dataset2 = NestedDataset(Dataset.from_list(d... | Test schema consistency across multiple datasets | test_schema_multiple_datasets | python | modelscope/data-juicer | tests/core/data/test_dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dj_dataset.py | Apache-2.0 |
def test_schema_nested_structures(self):
"""Test schema with nested data structures"""
data = [{
'text': 'hello',
'int_value': 1,
'float_value': 1.0,
'bool_value': True,
'metadata': {'lang': 'en', 'score': 1},
'tags': ['tag1', 'tag2... | Test schema with nested data structures | test_schema_nested_structures | python | modelscope/data-juicer | tests/core/data/test_dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dj_dataset.py | Apache-2.0 |
def test_schema_special_characters(self):
"""Test schema with special characters in column names"""
data = [{
'normal': 1,
'with.dot': 2,
'with-dash': 3,
'_underscore': 4,
'with space': 5
}]
dataset = NestedDataset(Data... | Test schema with special characters in column names | test_schema_special_characters | python | modelscope/data-juicer | tests/core/data/test_dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dj_dataset.py | Apache-2.0 |
def test_schema_type_consistency(self):
"""Test schema type consistency across rows"""
data = [
{'text': 'hello', 'score': 1, 'flag': True},
{'text': 'world', 'score': 2, 'flag': False},
{'text': 'test', 'score': 3, 'flag': True}
]
dataset = N... | Test schema type consistency across rows | test_schema_type_consistency | python | modelscope/data-juicer | tests/core/data/test_dj_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_dj_dataset.py | Apache-2.0 |
def setUpClass(cls):
"""Class-level setup run once before all tests"""
super().setUpClass()
# Save original strategies
cls._original_strategies = DataLoadStrategyRegistry._strategies.copy() | Class-level setup run once before all tests | setUpClass | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def tearDownClass(cls):
"""Class-level cleanup run once after all tests"""
# Restore original strategies
DataLoadStrategyRegistry._strategies = cls._original_strategies
super().tearDownClass() | Class-level cleanup run once after all tests | tearDownClass | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def setUp(self):
"""Instance-level setup run before each test"""
super().setUp()
# Clear strategies before each test
DataLoadStrategyRegistry._strategies = {} | Instance-level setup run before each test | setUp | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def test_load_strategy_default_config(self):
"""Test load strategy with minimal config"""
DataLoadStrategyRegistry._strategies = {}
# Create minimal config
minimal_cfg = Namespace(
path='test/path'
)
ds_config = {
'path': 'test/path'
... | Test load strategy with minimal config | test_load_strategy_default_config | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def test_load_strategy_full_config(self):
"""Test load strategy with full config"""
DataLoadStrategyRegistry._strategies = {}
# Create config with all options
full_cfg = Namespace(
path='test/path',
text_keys=['content', 'title'],
suffixes=['.txt', '.... | Test load strategy with full config | test_load_strategy_full_config | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def test_load_strategy_partial_config(self):
"""Test load strategy with partial config"""
DataLoadStrategyRegistry._strategies = {}
# Create config with some options
partial_cfg = Namespace(
path='test/path',
text_keys=['content'],
# suffixes and add_... | Test load strategy with partial config | test_load_strategy_partial_config | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def test_load_strategy_empty_config(self):
"""Test load strategy with empty config"""
DataLoadStrategyRegistry._strategies = {}
# Create empty config
empty_cfg = Namespace()
ds_config = {
'path': 'test/path'
}
strategy = Defa... | Test load strategy with empty config | test_load_strategy_empty_config | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
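These strategy tests save, clear, and restore `DataLoadStrategyRegistry._strategies` around each case. A minimal registry sketch keyed by `(executor_type, data_type, data_source)` with `'*'` as a wildcard (the key scheme and matching rules are assumptions about the real registry):

```python
class DataLoadStrategyRegistry:
    # Minimal sketch: map (executor_type, data_type, data_source) keys to
    # strategies; '*' in a registered key matches any query value.
    _strategies = {}

    @classmethod
    def register(cls, key, strategy):
        cls._strategies[key] = strategy

    @classmethod
    def get_strategy(cls, executor_type, data_type, data_source):
        query = (executor_type, data_type, data_source)
        for key, strategy in cls._strategies.items():
            if all(k in ('*', q) for k, q in zip(key, query)):
                return strategy
        return None

DataLoadStrategyRegistry.register(('default', 'local', '*'), 'LocalStrategy')
print(DataLoadStrategyRegistry.get_strategy('default', 'local', 'json'))
# -> LocalStrategy
```

Because `_strategies` is class-level shared state, tests that register temporary strategies must restore the original dict afterwards, which is exactly what the `setUpClass`/`tearDownClass` pair above does.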
def setUp(self):
"""Instance-level setup run before each test"""
cur_dir = osp.dirname(osp.abspath(__file__))
self.tmp_dir = osp.join(cur_dir, f'tmp_{uuid.uuid4().hex}')
os.makedirs(self.tmp_dir, exist_ok=True)
self.cfg = get_default_cfg()
self.cfg.ray_address = 'local'
... | Instance-level setup run before each test | setUp | python | modelscope/data-juicer | tests/core/data/test_load_strategy.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_load_strategy.py | Apache-2.0 |
def test_get_column_preserve_order(self):
"""Test that column order is preserved"""
texts = self.dataset.get_column('text')
self.assertEqual(texts[0], 'Hello')
self.assertEqual(texts[1], 'World')
self.assertEqual(texts[2], 'Test')
# Test with k
texts = self.datas... | Test that column order is preserved | test_get_column_preserve_order | python | modelscope/data-juicer | tests/core/data/test_ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_ray_dataset.py | Apache-2.0 |
def test_schema_multiple_datasets(self):
"""Test schema consistency across multiple datasets"""
import ray.data
from data_juicer.core.data.ray_dataset import RayDataset
data1 = [{'text': 'hello', 'score': 1}]
data2 = [{'text': 'world', 'score': 2}]
dataset1 = RayDataset(... | Test schema consistency across multiple datasets | test_schema_multiple_datasets | python | modelscope/data-juicer | tests/core/data/test_ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_ray_dataset.py | Apache-2.0 |
def test_schema_nested_structures(self):
"""Test schema with nested data structures"""
import ray.data
from data_juicer.core.data.ray_dataset import RayDataset
data = [{
'text': 'hello',
'int_value': 1,
'float_value': 1.0,
'bool_value': Tru... | Test schema with nested data structures | test_schema_nested_structures | python | modelscope/data-juicer | tests/core/data/test_ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_ray_dataset.py | Apache-2.0 |
def test_schema_special_characters(self):
"""Test schema with special characters in column names"""
import ray.data
from data_juicer.core.data.ray_dataset import RayDataset
data = [{
'normal': 1,
'with.dot': 2,
'with-dash': 3,
'_underscore'... | Test schema with special characters in column names | test_schema_special_characters | python | modelscope/data-juicer | tests/core/data/test_ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_ray_dataset.py | Apache-2.0 |
def test_schema_type_consistency(self):
"""Test schema type consistency across rows"""
import ray.data
from data_juicer.core.data.ray_dataset import RayDataset
data = [
{'text': 'hello', 'score': 1, 'flag': True},
{'text': 'world', 'score': 2, 'flag': False},
... | Test schema type consistency across rows | test_schema_type_consistency | python | modelscope/data-juicer | tests/core/data/test_ray_dataset.py | https://github.com/modelscope/data-juicer/blob/master/tests/core/data/test_ray_dataset.py | Apache-2.0 |
def test_load_formatter_with_directory(self):
"""Test loading a directory with mixed file types"""
formatter = load_formatter(self._path)
# Should pick the formatter with most matching files
ds = formatter.load_dataset()
self.assertTrue(len(ds) > 0) | Test loading a directory with mixed file types | test_load_formatter_with_directory | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_load_formatter_with_specific_suffix(self):
"""Test loading a file with specific suffix"""
formatter = load_formatter(self._json_file, suffixes=['.jsonl'])
self.assertIsInstance(formatter, JsonFormatter)
ds = formatter.load_dataset()
self.assertEqual(len(ds), 6) | Test loading a file with specific suffix | test_load_formatter_with_specific_suffix | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_load_formatter_with_complex_extension(self):
"""Test loading a file with a complex extension like jsonl.zst
which is not supported by json formatter"""
with self.assertRaises(ValueError):
formatter = load_formatter(self._complex_ext_file) | Test loading a file with a complex extension like jsonl.zst
which is not supported by json formatter | test_load_formatter_with_complex_extension | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_load_formatter_with_stacktrace_scenario(self):
"""Specifically test the scenario from the error stacktrace"""
# Create a temp directory with the name that matches the error
temp_path = os.path.join(self._temp_dir, 'data')
os.makedirs(temp_path, exist_ok=True)
# ... | Specifically test the scenario from the error stacktrace | test_load_formatter_with_stacktrace_scenario | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_load_formatter_with_relative_path(self):
"""Test loading a file using a relative path"""
# Change to the temp directory
os.chdir(self._temp_dir)
# Use a relative path to the test file
rel_path = os.path.join('rel_path_test', 'test_rel.jsonl')
# ... | Test loading a file using a relative path | test_load_formatter_with_relative_path | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_load_formatter_with_relative_directory_path(self):
"""Test loading a directory using a relative path"""
# Change to the temp directory
os.chdir(self._temp_dir)
# Use a relative path to the directory
rel_dir_path = 'rel_path_test'
# Try to load u... | Test loading a directory using a relative path | test_load_formatter_with_relative_directory_path | python | modelscope/data-juicer | tests/format/test_load_formatter.py | https://github.com/modelscope/data-juicer/blob/master/tests/format/test_load_formatter.py | Apache-2.0 |
def test_function_execution(self):
"""Test the correct execution of a loadable function."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"def process_data(sample):\n"
" return {'result': sample['value... | Test the correct execution of a loadable function. | test_function_execution | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_function_batched(self):
"""Test for a function that processes a batch."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"def process_data(samples):\n"
" return {'result': samples['value'] + [1... | Test for a function that processes a batch. | test_function_batched | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_function_with_import(self):
"""Test for a function that contains an import statement."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"import numpy as np\n"
"def process_data(sample):\n"
... | Test for a function that contains an import statement. | test_function_with_import | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
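The `PythonFileMapper` tests above all hinge on loading a callable from a standalone `.py` file. One standard-library way to do that, sketched here with `importlib.util` (this is an illustration, not necessarily how the mapper itself is implemented):

```python
import importlib.util
import os
import tempfile

def load_function(file_path, func_name):
    """Load `func_name` from the Python file at `file_path`."""
    spec = importlib.util.spec_from_file_location('user_module', file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file, including its imports
    func = getattr(module, func_name, None)
    if not callable(func):
        raise ValueError(f'{func_name!r} is not a callable in {file_path}')
    return func

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'snippet.py')
    with open(path, 'w') as f:
        f.write("def process_data(sample):\n"
                "    return {'result': sample['value'] + 10}\n")
    fn = load_function(path, 'process_data')
    print(fn({'value': 5}))  # {'result': 15}
```

The loaded function object stays usable after the temporary file is deleted, since `exec_module` has already executed the source.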
def test_file_not_python_extension(self):
"""Test for a file that exists but is not a .py file."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.txt', mode='w+') as temp_file:
temp_file.write("This is a text file.")
temp_file.seek(0) # Rewind the file so it can be read
... | Test for a file that exists but is not a .py file. | test_file_not_python_extension | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_function_not_found(self):
"""Test for function not existing in the provided file."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"def existing_function(sample):\n"
" return sample\n"
... | Test for function not existing in the provided file. | test_function_not_found | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_function_not_callable(self):
"""Test for trying to load a non-callable function."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write("x = 42")
temp_file.seek(0) # Rewind the file so it can be read
with self... | Test for trying to load a non-callable function. | test_function_not_callable | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_function_mutiple_arguments(self):
"""Test for function that requires more than one argument."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"def multi_arg_function(arg1, arg2):\n"
" return a... | Test for function that requires more than one argument. | test_function_mutiple_arguments | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def test_invalid_return_type(self):
"""Test for a function returning a non-dictionary."""
with tempfile.NamedTemporaryFile(delete=True, suffix='.py', mode='w+') as temp_file:
temp_file.write(
"def invalid_function(sample):\n"
" return sample['value'] + 5\n"... | Test for a function returning a non-dictionary. | test_invalid_return_type | python | modelscope/data-juicer | tests/ops/mapper/test_python_file_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/test_python_file_mapper.py | Apache-2.0 |
def _create_tasks_batch(self, tasks_data: List[Dict],
sample_ids: List[Any]) -> List[int]:
"""Mock implementation that returns fake task IDs"""
task_ids = []
for i, (task_data, sample_id_list) in enumerate(zip(tasks_data, sample_ids)):
task_id = i + 1000 #... | Mock implementation that returns fake task IDs | _create_tasks_batch | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def _process_annotation_result(self, annotation: Dict, sample: Dict) -> Dict:
"""Mock implementation that adds annotation to the sample"""
sample_copy = sample.copy()
sample_copy["annotation_result"] = annotation.get("result", {})
return sample_copy | Mock implementation that adds annotation to the sample | _process_annotation_result | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def _check_annotation_status(self, task_ids):
"""Mock implementation for checking annotation status"""
has_changes = False
completed_tasks = {}
for task_id in task_ids:
if task_id in self.mock_annotations and task_id not in self.processed_annotations:
has_cha... | Mock implementation for checking annotation status | _check_annotation_status | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
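The mock `_check_annotation_status` pattern recurring in these entries can be isolated into a small sketch: report which tasks have newly completed annotations and remember what was already handed out. The class below is hypothetical; only the return shape `(has_changes, completed)` follows the tests:

```python
class AnnotationTracker:
    """Minimal stand-in for the annotation-status bookkeeping in the tests."""

    def __init__(self):
        self.mock_annotations = {}  # task_id -> annotation dict
        self.processed = set()      # task_ids already reported as complete

    def check_status(self, task_ids):
        """Return (has_changes, {task_id: annotation}) for newly completed tasks."""
        completed = {}
        for tid in task_ids:
            if tid in self.mock_annotations and tid not in self.processed:
                completed[tid] = self.mock_annotations[tid]
                self.processed.add(tid)
        return bool(completed), completed

tracker = AnnotationTracker()
tracker.mock_annotations[1000] = {'id': 'annotation_1000', 'result': {'label': 'ok'}}
print(tracker.check_status([1000, 1001]))  # first call reports the new annotation
print(tracker.check_status([1000, 1001]))  # second call: (False, {})
```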
def add_mock_annotation(self, task_id, annotation_data):
"""Helper method to add mock annotations for testing"""
self.mock_annotations[task_id] = {
"id": f"annotation_{task_id}",
"result": annotation_data
} | Helper method to add mock annotations for testing | add_mock_annotation | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_event_handlers_registration(self):
"""Test that event handlers are properly registered"""
mapper = MockAnnotationMapper()
# Check that all event handlers are registered
self.assertIn(ANNOTATION_EVENTS['TASK_CREATED'], mapper.event_handlers)
self.assertIn(ANNOTAT... | Test that event handlers are properly registered | test_event_handlers_registration | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_process_batched_without_waiting(self):
"""Test processing a batch of samples without waiting for annotations"""
mapper = MockAnnotationMapper(wait_for_annotations=False)
# Process the samples
result = mapper.process_batched(self.samples_dict)
# Verify r... | Test processing a batch of samples without waiting for annotations | test_process_batched_without_waiting | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_process_batched_with_waiting(self):
"""Test processing a batch of samples and waiting for annotations"""
mapper = MockAnnotationMapper(wait_for_annotations=True)
# Add mock annotations for all tasks that will be created
for i in range(5):
task_id = 1000 + i ... | Test processing a batch of samples and waiting for annotations | test_process_batched_with_waiting | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_process_batched_with_custom_samples_per_task(self):
"""Test processing with multiple samples per task"""
mapper = MockAnnotationMapper(samples_per_task=2)
# Process the samples
result = mapper.process_batched(self.samples_dict)
# Verify results
... | Test processing with multiple samples per task | test_process_batched_with_custom_samples_per_task | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
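The `samples_per_task=2` test above implies a grouping step: partition the incoming samples into task-sized chunks, with a smaller final chunk when the counts don't divide evenly. A minimal sketch (the function name here is illustrative):

```python
def group_into_tasks(samples, samples_per_task):
    """Split a list of samples into task-sized chunks; the last may be smaller."""
    return [samples[i:i + samples_per_task]
            for i in range(0, len(samples), samples_per_task)]

# 5 samples with 2 per task yields 3 tasks, the last holding one sample.
print(group_into_tasks(list(range(5)), 2))  # [[0, 1], [2, 3], [4]]
```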
def test_wait_for_batch_annotations_timeout(self):
"""Test waiting for annotations with a timeout"""
# Create a mapper with a very short timeout
mapper = MockAnnotationMapper(wait_for_annotations=True, timeout=0.1, poll_interval=0.01)
# Create a task but don't add annotations
... | Test waiting for annotations with a timeout | test_wait_for_batch_annotations_timeout | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
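The timeout test above relies on a poll-until-deadline loop. A generic sketch of that loop follows; the function name and its return shape are assumptions for illustration, not the library's API:

```python
import time

def wait_for_annotations(check_status, task_ids, timeout=5.0, poll_interval=0.5):
    """Poll `check_status` until all task_ids complete or `timeout` elapses.

    Returns whatever annotations were collected; callers decide whether a
    partial (possibly empty) result counts as an error.
    """
    deadline = time.monotonic() + timeout
    collected = {}
    while time.monotonic() < deadline:
        _, completed = check_status(task_ids)
        collected.update(completed)
        if len(collected) == len(task_ids):
            break
        time.sleep(poll_interval)
    return collected

# With a checker that never completes anything, a short timeout yields {}:
result = wait_for_annotations(lambda ids: (False, {}), [1, 2],
                              timeout=0.05, poll_interval=0.01)
print(result)  # {}
```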
def test_process_uses_existing_ids(self):
"""Test that the mapper uses existing IDs in samples instead of generating new ones"""
# First pass: process without waiting for annotations
mapper = MockAnnotationMapper(wait_for_annotations=False)
# Create samples with predefined IDs
... | Test that the mapper uses existing IDs in samples instead of generating new ones | test_process_uses_existing_ids | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def _create_tasks_batch(self, tasks_data, sample_ids):
"""Mock implementation that returns fake task IDs"""
task_ids = []
for i, task_data in enumerate(tasks_data):
task_id = i + 2000 # Start with task ID 2000
self.mock_tasks[task_id] = task_data
task_ids.app... | Mock implementation that returns fake task IDs | _create_tasks_batch | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def _check_annotation_status(self, task_ids):
"""Mock implementation for checking annotation status"""
has_changes = False
completed_tasks = {}
for task_id in task_ids:
if task_id in self.mock_annotations and task_id not in self.processed_annotations:
has_cha... | Mock implementation for checking annotation status | _check_annotation_status | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def add_mock_annotation(self, task_id, annotation_data):
"""Helper method to add mock annotations for testing"""
self.mock_annotations[task_id] = {
"id": f"annotation_{task_id}",
"result": annotation_data
} | Helper method to add mock annotations for testing | add_mock_annotation | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_samples_per_task_enforcement(self):
"""Test that samples_per_task is always 1 for Label Studio"""
# Try to create with samples_per_task=2
mapper = MockLabelStudioAnnotationMapper(samples_per_task=2)
# It should be reset to 1
self.assertEqual(mapper.samples_per_t... | Test that samples_per_task is always 1 for Label Studio | test_samples_per_task_enforcement | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def test_process_batched(self):
"""Test processing a batch of samples with Label Studio mapper"""
mapper = MockLabelStudioAnnotationMapper(wait_for_annotations=True)
# Add mock annotations for all tasks that will be created
for i in range(len(self.samples)):
task_id ... | Test processing a batch of samples with Label Studio mapper | test_process_batched | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_annotation_mapper.py | Apache-2.0 |
def _create_tasks_batch(self, tasks_data, sample_ids):
"""Mock implementation that returns fake task IDs"""
task_ids = []
for i, task_data in enumerate(tasks_data):
task_id = i + 3000 # Start with task ID 3000
self.mock_tasks[task_id] = task_data
task_ids.app... | Mock implementation that returns fake task IDs | _create_tasks_batch | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def _check_annotation_status(self, task_ids):
"""Mock implementation for checking annotation status"""
has_changes = False
completed_tasks = {}
for task_id in task_ids:
if task_id in self.mock_annotations and task_id not in self.processed_annotations:
has_cha... | Mock implementation for checking annotation status | _check_annotation_status | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def _get_task_annotation(self, task_id: int) -> Optional[Dict]:
"""Get annotation for a task if available with preference processing"""
annotation = self.mock_annotations.get(task_id)
# Process the annotation if available to extract preference
if annotation and 'chosen' in annotation:
... | Get annotation for a task if available with preference processing | _get_task_annotation | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def add_mock_annotation(self, task_id, annotation_data):
"""Helper method to add mock annotations for testing"""
self.mock_annotations[task_id] = {
"id": f"annotation_{task_id}",
"result": annotation_data
} | Helper method to add mock annotations for testing | add_mock_annotation | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def _format_task(self, samples: List[Dict]) -> Dict:
"""Format samples as a Label Studio task for human preference.
Args:
samples: List of samples to include in the task
Returns:
Dict: Formatted task data
"""
# For human preference, we need a special for... | Format samples as a Label Studio task for human preference.
Args:
samples: List of samples to include in the task
Returns:
Dict: Formatted task data
| _format_task | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def _process_annotation_result(self, annotation: Dict,
sample: Dict) -> Dict:
"""Process human preference annotation result and update the sample
Args:
annotation: The annotation result from the annotation platform
sample: The original sample t... | Process human preference annotation result and update the sample
Args:
annotation: The annotation result from the annotation platform
sample: The original sample that was annotated
Returns:
Dict: The updated sample with preference results
| _process_annotation_result | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def test_format_task(self):
"""Test task formatting for human preference"""
mapper = MockHumanPreferenceAnnotationMapper()
# Format a task from the first sample
formatted_task = mapper._format_task([self.samples[0]])
# Verify the formatting
self.assertIn('data', formatt... | Test task formatting for human preference | test_format_task | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def test_process_annotation_result_left_preference(self):
"""Test processing annotation result when left option is preferred"""
mapper = MockHumanPreferenceAnnotationMapper()
# Create a sample
sample = self.samples[0].copy()
# Create an annotation with preference for the left o... | Test processing annotation result when left option is preferred | test_process_annotation_result_left_preference | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
def test_process_annotation_result_right_preference(self):
"""Test processing annotation result when right option is preferred"""
mapper = MockHumanPreferenceAnnotationMapper()
# Create a sample
sample = self.samples[0].copy()
# Create an annotation with preference for the righ... | Test processing annotation result when right option is preferred | test_process_annotation_result_right_preference | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |
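Both preference tests above reduce to one mapping step: read the picked side out of the annotation result and write `chosen`/`rejected` fields onto the sample. A hedged sketch of that step — the result payload here is a plausible pairwise-selection shape, not verified against the real Label Studio schema, and the `answer1`/`answer2` keys are illustrative:

```python
def apply_preference(annotation, sample,
                     left_key='answer1', right_key='answer2'):
    """Copy the preferred/rejected answers onto a copy of the sample.

    `annotation['result']` is assumed to hold entries whose 'value'
    names the selected side ('left' or 'right').
    """
    out = dict(sample)
    selected = None
    for item in annotation.get('result', []):
        side = item.get('value', {}).get('selected')
        if side in ('left', 'right'):
            selected = side
    if selected == 'left':
        out['chosen'], out['rejected'] = sample[left_key], sample[right_key]
    elif selected == 'right':
        out['chosen'], out['rejected'] = sample[right_key], sample[left_key]
    return out

sample = {'prompt': 'Q?', 'answer1': 'A', 'answer2': 'B'}
ann = {'result': [{'value': {'selected': 'left'}}]}
print(apply_preference(ann, sample))  # chosen='A', rejected='B'
```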
def test_process_batched(self):
"""Test processing a batch of samples with HumanPreferenceAnnotationMapper"""
mapper = MockHumanPreferenceAnnotationMapper(wait_for_annotations=True)
# Add mock annotations for all tasks that will be created
for i in range(len(self.samples)):
... | Test processing a batch of samples with HumanPreferenceAnnotationMapper | test_process_batched | python | modelscope/data-juicer | tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | https://github.com/modelscope/data-juicer/blob/master/tests/ops/mapper/annotation/test_human_preference_annotation_mapper.py | Apache-2.0 |