| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def combine_preamble_and_source(self, preamble: str) -> str:
"""The manifest info needs to be moved to before the preamble.
Also, because rust-script relies on inner docs, there can't be an empty line
between the manifest and preamble.
"""
manifest, src = RustScript.extract_manife... | The manifest info needs to be moved to before the preamble.
Also, because rust-script relies on inner docs, there can't be an empty line
between the manifest and preamble.
| combine_preamble_and_source | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
def _strip_shebang(src: str) -> Tuple[str, str]:
"""From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L312-L320"""
rgx = re.compile(r"^#![^\[].*?(\r\n|\n)")
return strip_re(rgx, src) | From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L312-L320 | _strip_shebang | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
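The shebang regex quoted in `_strip_shebang` can be exercised directly; the `[^\[]` class is what keeps it from consuming Rust inner attributes such as `#![allow(...)]`. A minimal sketch (the sample sources are made up for illustration):

```python
import re

# The regex from _strip_shebang above: "#!" at the start of the source,
# but not a Rust inner attribute like "#![allow(dead_code)]" (excluded by [^\[]).
SHEBANG_RE = re.compile(r"^#![^\[].*?(\r\n|\n)")

src_with_shebang = "#!/usr/bin/env rust-script\nfn main() {}\n"
src_with_attr = "#![allow(dead_code)]\nfn main() {}\n"

print(bool(SHEBANG_RE.match(src_with_shebang)))  # True: shebang line matched
print(bool(SHEBANG_RE.match(src_with_attr)))     # False: inner attribute skipped
```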
def _strip_manifest(src: str) -> Tuple[str, str]:
"""From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L405-L411"""
manifest, remainder = RustScript._strip_single_line_manifest(src)
if not manifest:
manifest, remainder = RustSc... | From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L405-L411 | _strip_manifest | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
def _strip_single_line_manifest(src: str) -> Tuple[str, str]:
"""From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L618-L632"""
rgx = re.compile(r"^\s*//\s*cargo-deps\s*:(.*?)(\r\n|\n)", flags=re.IGNORECASE)
return strip_re(rgx, src) | From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L618-L632 | _strip_single_line_manifest | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
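The single-line manifest regex quoted in `_strip_single_line_manifest` matches a leading `// cargo-deps:` comment case-insensitively and captures the dependency list in group 1. A quick demonstration on an invented source string:

```python
import re

# Case-insensitive: "// Cargo-Deps:" and "// cargo-deps:" both match;
# group 1 captures everything between the colon and the line break.
DEPS_RE = re.compile(r"^\s*//\s*cargo-deps\s*:(.*?)(\r\n|\n)", flags=re.IGNORECASE)

m = DEPS_RE.match('// Cargo-Deps: time="0.1.25"\nfn main() {}\n')
print(m.group(1).strip())  # time="0.1.25"
```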
def _strip_code_block_manifest(src: str) -> Tuple[str, str]:
"""From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L634-L664
We need to find the first `/*!` or `//!` that *isn't* preceded by something
that would make it apply to anything ot... | From https://github.com/fornwall/rust-script/blob/ce508bad02a11d574657d2f1debf7e73fca2bf6e/src/manifest.rs#L634-L664
We need to find the first `/*!` or `//!` that *isn't* preceded by something
that would make it apply to anything other than the crate itself. Because we
can't do this accurately,... | _strip_code_block_manifest | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
def strip_re(regex: Pattern, s: str) -> Tuple[str, str]:
"""Strip a substring matching a regex from a string and return the stripped part
and the remainder of the original string.
Returns an empty string and the original string if the regex is not found
"""
rgx = re.compile(regex)
match = rgx.se... | Strip a substring matching a regex from a string and return the stripped part
and the remainder of the original string.
Returns an empty string and the original string if the regex is not found
| strip_re | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
def script(
path,
basedir,
input,
output,
params,
wildcards,
threads,
resources,
log,
config,
rulename,
conda_env,
conda_base_path,
container_img,
singularity_args,
env_modules,
bench_record,
jobid,
bench_iteration,
cleanup_scripts,
sha... |
Load a script from the given basedir + path and execute it.
| script | python | snakemake/snakemake | src/snakemake/script/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/script/__init__.py | MIT |
def get_batch(self, items: list):
"""Return the defined batch of the given items.
Items are usually input files."""
# make sure that we always consider items in the same order
if len(items) < self.batches:
raise WorkflowError(
"Batching rule {} has less input ... | Return the defined batch of the given items.
Items are usually input files. | get_batch | python | snakemake/snakemake | src/snakemake/settings/types.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/settings/types.py | MIT |
def generate(dag, path: Path, deploy=["conda", "singularity"], configfiles=None):
"""Generate unit tests from given dag at a given path."""
logger.info("Generating unit tests for each rule...")
try:
from jinja2 import Environment, PackageLoader
except ImportError:
raise WorkflowError(
... | Generate unit tests from given dag at a given path. | generate | python | snakemake/snakemake | src/snakemake/unit_tests/__init__.py | https://github.com/snakemake/snakemake/blob/master/src/snakemake/unit_tests/__init__.py | MIT |
def get_expected_files(results_dir):
"""Recursively walk through the expected-results directory to enumerate
all expected files."""
return [
os.path.relpath(f, results_dir)
for f in glob.iglob(os.path.join(results_dir, "**/**"), recursive=True)
if not os.path.isdir(f)
] | Recursively walk through the expected-results directory to enumerate
all expected files. | get_expected_files | python | snakemake/snakemake | tests/common.py | https://github.com/snakemake/snakemake/blob/master/tests/common.py | MIT |
def run(
path,
shouldfail=False,
snakefile="Snakefile",
subpath=None,
no_tmpdir=False,
check_md5=True,
check_results=None,
cores=3,
nodes=None,
set_pythonpath=True,
cleanup=True,
conda_frontend="conda",
config=dict(),
targets=set(),
container_image=os.environ.... |
Test the Snakefile in the path.
There must be a Snakefile in the path and a subdirectory named
expected-results. If cleanup is False, we return the temporary
directory to the calling test for inspection, and the test should
clean it up.
| run | python | snakemake/snakemake | tests/common.py | https://github.com/snakemake/snakemake/blob/master/tests/common.py | MIT |
def reset_paths_between_tests():
"""Ensure that changes to sys.path are reset between tests"""
org_path = sys.path.copy()
yield
sys.path = org_path | Ensure that changes to sys.path are reset between tests | reset_paths_between_tests | python | snakemake/snakemake | tests/conftest.py | https://github.com/snakemake/snakemake/blob/master/tests/conftest.py | MIT |
def test_github_issue_14():
"""Add cleanup_scripts argument to allow the user to keep scripts"""
# Return temporary directory for inspection - we should keep scripts here
tmpdir = run(dpath("test_github_issue_14"), cleanup=False, cleanup_scripts=False)
assert os.listdir(os.path.join(tmpdir, ".snakemake"... | Add cleanup_scripts argument to allow the user to keep scripts | test_github_issue_14 | python | snakemake/snakemake | tests/tests.py | https://github.com/snakemake/snakemake/blob/master/tests/tests.py | MIT |
def test_empty_pattern_matches_everything(mocker):
"""Test that empty patterns match any filename"""
rule = mocker.Mock(
products=lambda: [Mock(constant_prefix=lambda: "", constant_suffix=lambda: "")]
)
output_index = OutputIndex([rule])
assert rule in output_index.match("")
assert rule ... | Test that empty patterns match any filename | test_empty_pattern_matches_everything | python | snakemake/snakemake | tests/test_output_index.py | https://github.com/snakemake/snakemake/blob/master/tests/test_output_index.py | MIT |
def test_empty_prefix_and_suffix(mocker):
"""Test with empty prefix and suffix"""
rule = mocker.Mock(
products=lambda: [Mock(constant_prefix=lambda: "", constant_suffix=lambda: "")]
)
output_index = OutputIndex([rule])
matches = output_index.match("anything.txt")
assert rule in matches | Test with empty prefix and suffix | test_empty_prefix_and_suffix | python | snakemake/snakemake | tests/test_output_index.py | https://github.com/snakemake/snakemake/blob/master/tests/test_output_index.py | MIT |
def test_parametrized_matches(mocker, target, expected_match):
"""Parametrized test for various matching scenarios"""
rule = mocker.Mock(
products=lambda: [
Mock(constant_prefix=lambda: "test", constant_suffix=lambda: "txt")
]
)
output_index = OutputIndex([rule])
matches ... | Parametrized test for various matching scenarios | test_parametrized_matches | python | snakemake/snakemake | tests/test_output_index.py | https://github.com/snakemake/snakemake/blob/master/tests/test_output_index.py | MIT |
def test_prefix_matching_non_consecutive():
"""Test that demonstrates why we need to keep checking even after seeing a longer prefix,
when prefixes aren't consecutive (like 'ab' vs 'abbc')."""
lookup = PrefixLookup([("a", 1), ("ab", 2), ("abbcc", 3), ("abbc", 4), ("abbd", 5)])
# Query "abbz" process:
... | Test that demonstrates why we need to keep checking even after seeing a longer prefix,
when prefixes aren't consecutive (like 'ab' vs 'abbc'). | test_prefix_matching_non_consecutive | python | snakemake/snakemake | tests/test_prefix_lookup.py | https://github.com/snakemake/snakemake/blob/master/tests/test_prefix_lookup.py | MIT |
def test_named_list_one_named_one_str(self):
"""InputFiles is a subclass of snakemake.io.NamedInput
iterate over input and store each with the integer index - i.e. 0, 1, 2
then use input.items() to iterate over the named files and store them as named also
check how this works with named th... | InputFiles is a subclass of snakemake.io.NamedInput
iterate over input and store each with the integer index - i.e. 0, 1, 2
then use input.items() to iterate over the named files and store them as named also
check how this works with named things being lists
| test_named_list_one_named_one_str | python | snakemake/snakemake | tests/test_script.py | https://github.com/snakemake/snakemake/blob/master/tests/test_script.py | MIT |
def test_named_list_named_is_list(self):
"""Named lists that are lists of files become a space-separated string as you
can't nest arrays in bash"""
named_list = InputFiles(["test1.in", ["test2.in", "named.in"]])
named_list._set_name("named", 1)
actual = BashEncoder.encode_namedl... | Named lists that are lists of files become a space-separated string as you
can't nest arrays in bash | test_named_list_named_is_list | python | snakemake/snakemake | tests/test_script.py | https://github.com/snakemake/snakemake/blob/master/tests/test_script.py | MIT |
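The flattening rule this test describes can be sketched with a hypothetical helper (`encode_namedlist_item` is an illustrative name, not the real `BashEncoder` API): nested lists collapse to a space-separated string because bash arrays cannot contain arrays.

```python
def encode_namedlist_item(value):
    """Sketch of the behaviour under test: a list of files becomes one
    space-separated string; a plain string passes through unchanged."""
    if isinstance(value, (list, tuple)):
        return " ".join(str(v) for v in value)
    return str(value)

print(encode_namedlist_item(["test2.in", "named.in"]))  # test2.in named.in
print(encode_namedlist_item("test1.in"))                # test1.in
```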
def append(
self: "ContentSequence",
part_or_parts: Union[BasePart, List[BasePart]],
add_end: bool = False,
speaker: Union[str, int] | None = None,
):
"""
Append a part or list of parts to the sequence.
Args:
part_or_parts: A single part or list o... |
Append a part or list of parts to the sequence.
Args:
part_or_parts: A single part or list of parts to add
add_end: Whether to add the IM_END_TOKEN after these parts
speaker: Optional speaker identifier (name or ID) to add before the parts
| append | python | fishaudio/fish-speech | fish_speech/content_sequence.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/content_sequence.py | Apache-2.0 |
def encode(
self: "ContentSequence",
tokenizer: FishTokenizer,
add_shift: bool = True,
ignore_loss_tokens: list[str] = [],
) -> EncodedMessage:
"""
Encode the sequence parts into tokens for the model.
Args:
tokenizer: The tokenizer to use
... |
Encode the sequence parts into tokens for the model.
Args:
tokenizer: The tokenizer to use
add_shift: Whether to shift tokens for next-token prediction
ignore_loss_tokens: List of token strings to ignore when calculating loss
Returns:
EncodedMes... | encode | python | fishaudio/fish-speech | fish_speech/content_sequence.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/content_sequence.py | Apache-2.0 |
def visualize(
self: "ContentSequence",
tokenizer: FishTokenizer,
ignore_loss_tokens: list[str] = [],
merge_semantic_tokens: bool = False,
):
"""
Visualize the encoded sequence with color-coded tokens.
Blue/cyan tokens contribute to loss, green tokens do not.
... |
Visualize the encoded sequence with color-coded tokens.
Blue/cyan tokens contribute to loss, green tokens do not.
| visualize | python | fishaudio/fish-speech | fish_speech/content_sequence.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/content_sequence.py | Apache-2.0 |
def __init__(self) -> None:
"""
Component of the TTSInferenceEngine class.
Loads and manages the cache for the reference audio and text.
"""
self.ref_by_id: dict = {}
self.ref_by_hash: dict = {}
# Make Pylance happy (attribute/method not defined...)
self.d... |
Component of the TTSInferenceEngine class.
Loads and manages the cache for the reference audio and text.
| __init__ | python | fishaudio/fish-speech | fish_speech/inference_engine/reference_loader.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/inference_engine/reference_loader.py | Apache-2.0 |
def load_audio(self, reference_audio, sr):
"""
Load the audio data from a file or bytes.
"""
if len(reference_audio) > 255 or not Path(reference_audio).exists():
audio_data = reference_audio
reference_audio = io.BytesIO(audio_data)
waveform, original_sr =... |
Load the audio data from a file or bytes.
| load_audio | python | fishaudio/fish-speech | fish_speech/inference_engine/reference_loader.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/inference_engine/reference_loader.py | Apache-2.0 |
def inference(self, req: ServeTTSRequest) -> Generator[InferenceResult, None, None]:
"""
Main inference function:
- Loads the reference audio and text.
- Calls the LLAMA model for inference.
- Decodes the VQ tokens to audio.
"""
ref_id: str | None = req.reference... |
Main inference function:
- Loads the reference audio and text.
- Calls the LLAMA model for inference.
- Decodes the VQ tokens to audio.
| inference | python | fishaudio/fish-speech | fish_speech/inference_engine/__init__.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/inference_engine/__init__.py | Apache-2.0 |
def send_Llama_request(
self, req: ServeTTSRequest, prompt_tokens: list, prompt_texts: list
) -> queue.Queue:
"""
Send a request to the LLAMA model to generate the symbolic tokens.
"""
# Prepare the request
request = dict(
device=self.decoder_model.device... |
Send a request to the LLAMA model to generate the symbolic tokens.
| send_Llama_request | python | fishaudio/fish-speech | fish_speech/inference_engine/__init__.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/inference_engine/__init__.py | Apache-2.0 |
def get_audio_segment(self, result: GenerateResponse) -> np.ndarray:
"""
Decode the VQ tokens to audio.
"""
# Don't use autocast on MPS devices
with autocast_exclude_mps(
device_type=self.decoder_model.device.type, dtype=self.precision
):
# Decode... |
Decode the VQ tokens to audio.
| get_audio_segment | python | fishaudio/fish-speech | fish_speech/inference_engine/__init__.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/inference_engine/__init__.py | Apache-2.0 |
def setup_caches(self, max_batch_size, max_seq_length):
"""
This method will only be called during inference when using KV cache.
"""
head_dim = self.config.dim // self.config.n_head
max_seq_length = find_multiple(max_seq_length, 8)
self.max_seq_length = max_seq_length
... |
This method will only be called during inference when using KV cache.
| setup_caches | python | fishaudio/fish-speech | fish_speech/models/dac/modded_dac.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/modded_dac.py | Apache-2.0 |
def make_mask(
self,
max_length: int,
x_lens: Optional[Tensor] = None,
) -> Tensor:
"""
Make ordinary mask if window size is not specified.
"""
if self.causal:
mask = torch.tril(torch.ones(max_length, max_length))
else:
mask = t... |
Make ordinary mask if window size is not specified.
| make_mask | python | fishaudio/fish-speech | fish_speech/models/dac/modded_dac.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/modded_dac.py | Apache-2.0 |
def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
"""Remove padding from x, handling properly zero padding. Only for 1d!"""
padding_left, padding_right = paddings
assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
assert (padding_left + padding_right) <= x.shape[-1]... | Remove padding from x, properly handling zero padding. Only for 1d! | unpad1d | python | fishaudio/fish-speech | fish_speech/models/dac/modded_dac.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/modded_dac.py | Apache-2.0
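The slicing in `unpad1d` can be illustrated on a plain Python sequence; the contract is the same as the tensor version above, minus torch: drop `padding_left` items from the front and `padding_right` from the back.

```python
def unpad1d(x, paddings):
    """Plain-sequence illustration of the tensor helper above."""
    padding_left, padding_right = paddings
    assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
    assert padding_left + padding_right <= len(x)
    end = len(x) - padding_right
    return x[padding_left:end]

print(unpad1d([0, 0, 1, 2, 3, 0], (2, 1)))  # [1, 2, 3]
```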
def pad1d(
x: torch.Tensor,
paddings: tp.Tuple[int, int],
mode: str = "zeros",
value: float = 0.0,
):
"""Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right
before the reflection happens.
"""
length =... | Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right
before the reflection happens.
| pad1d | python | fishaudio/fish-speech | fish_speech/models/dac/modded_dac.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/modded_dac.py | Apache-2.0 |
def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
"""Remove padding from x, handling properly zero padding. Only for 1d!"""
padding_left, padding_right = paddings
assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
assert (padding_left + padding_right) <= x.shape[-1]... | Remove padding from x, properly handling zero padding. Only for 1d! | unpad1d | python | fishaudio/fish-speech | fish_speech/models/dac/rvq.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/rvq.py | Apache-2.0
def pad1d(
x: torch.Tensor,
paddings: tp.Tuple[int, int],
mode: str = "zeros",
value: float = 0.0,
):
"""Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right
before the reflection happens.
"""
length =... | Tiny wrapper around F.pad, just to allow for reflect padding on small input.
If this is the case, we insert extra 0 padding to the right
before the reflection happens.
| pad1d | python | fishaudio/fish-speech | fish_speech/models/dac/rvq.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/dac/rvq.py | Apache-2.0 |
def generate(
*,
model: BaseTransformer,
prompt: torch.Tensor,
max_new_tokens: int,
audio_masks: torch.Tensor,
audio_parts: torch.Tensor,
decode_one_token=decode_one_token_ar,
num_samples: int = 1,
**sampling_kwargs,
):
"""
Takes a conditioning sequence (prompt) as input and ... |
Takes a conditioning sequence (prompt) as input and continues to generate as many tokens as requested.
| generate | python | fishaudio/fish-speech | fish_speech/models/text2semantic/inference.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/text2semantic/inference.py | Apache-2.0 |
def precompute_freqs_cis(seq_len: int, n_elem: int, base: int = 10000) -> Tensor:
"""
Precomputes frequency tensors for complex exponentials (cis)
Args:
seq_len: Length of the sequence for which positional embeddings are needed.
n_elem: Number of elements in the frequency tensor.
ba... |
Precomputes frequency tensors for complex exponentials (cis)
Args:
seq_len: Length of the sequence for which positional embeddings are needed.
n_elem: Number of elements in the frequency tensor.
base: Base value for the frequency scaling (default: 10000).
Returns:
A tensor... | precompute_freqs_cis | python | fishaudio/fish-speech | fish_speech/models/text2semantic/llama.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/models/text2semantic/llama.py | Apache-2.0 |
def list_files(
path: Union[Path, str],
extensions: set[str] = set(),
recursive: bool = False,
sort: bool = True,
) -> list[Path]:
"""List files in a directory.
Args:
path (Path): Path to the directory.
extensions (set, optional): Extensions to filter. Defaults to None.
... | List files in a directory.
Args:
path (Path): Path to the directory.
extensions (set, optional): Extensions to filter. Defaults to None.
recursive (bool, optional): Whether to search recursively. Defaults to False.
sort (bool, optional): Whether to sort the files. Defaults to True.
... | list_files | python | fishaudio/fish-speech | fish_speech/utils/file.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/file.py | Apache-2.0 |
def __init__(
self,
name: str = __name__,
rank_zero_only: bool = True,
extra: Optional[Mapping[str, object]] = None,
) -> None:
"""Initializes a multi-GPU-friendly python command line logger that logs on all processes
with their rank prefixed in the log message.
... | Initializes a multi-GPU-friendly python command line logger that logs on all processes
with their rank prefixed in the log message.
:param name: The name of the logger. Default is ``__name__``.
:param rank_zero_only: Whether to force all logs to only occur on the rank zero process. Default is `... | __init__ | python | fishaudio/fish-speech | fish_speech/utils/logger.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/logger.py | Apache-2.0 |
def log(
self, level: int, msg: str, rank: Optional[int] = None, *args, **kwargs
) -> None:
"""Delegate a log call to the underlying logger, after prefixing its message with the rank
of the process it's being logged from. If `'rank'` is provided, then the log will only
occur on that ... | Delegate a log call to the underlying logger, after prefixing its message with the rank
of the process it's being logged from. If `'rank'` is provided, then the log will only
occur on that rank/process.
:param level: The level to log at. Look at `logging.__init__.py` for more information.
... | log | python | fishaudio/fish-speech | fish_speech/utils/logger.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/logger.py | Apache-2.0 |
def log_hyperparameters(object_dict: dict) -> None:
"""Controls which config parts are saved by lightning loggers.
Additionally saves:
- Number of model parameters
"""
hparams = {}
cfg = object_dict["cfg"]
model = object_dict["model"]
trainer = object_dict["trainer"]
if not train... | Controls which config parts are saved by lightning loggers.
Additionally saves:
- Number of model parameters
| log_hyperparameters | python | fishaudio/fish-speech | fish_speech/utils/logging_utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/logging_utils.py | Apache-2.0 |
def print_config_tree(
cfg: DictConfig,
print_order: Sequence[str] = (
"data",
"model",
"callbacks",
"logger",
"trainer",
"paths",
"extras",
),
resolve: bool = False,
save_to_file: bool = False,
) -> None:
"""Prints content of DictConfig us... | Prints content of DictConfig using Rich library and its tree structure.
Args:
cfg (DictConfig): Configuration composed by Hydra.
print_order (Sequence[str], optional): Determines in what order config components are printed.
resolve (bool, optional): Whether to resolve reference fields of Di... | print_config_tree | python | fishaudio/fish-speech | fish_speech/utils/rich_utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/rich_utils.py | Apache-2.0 |
def enforce_tags(cfg: DictConfig, save_to_file: bool = False) -> None:
"""Prompts user to input tags from command line if no tags are provided in config.""" # noqa: E501
if not cfg.get("tags"):
if "id" in HydraConfig().cfg.hydra.job:
raise ValueError("Specify tags before launching a multir... | Prompts user to input tags from command line if no tags are provided in config. | enforce_tags | python | fishaudio/fish-speech | fish_speech/utils/rich_utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/rich_utils.py | Apache-2.0 |
def extras(cfg: DictConfig) -> None:
"""Applies optional utilities before the task is started.
Utilities:
- Ignoring python warnings
- Setting tags from command line
- Rich config printing
"""
# return if no `extras` config
if not cfg.get("extras"):
log.warning("Extras config n... | Applies optional utilities before the task is started.
Utilities:
- Ignoring python warnings
- Setting tags from command line
- Rich config printing
| extras | python | fishaudio/fish-speech | fish_speech/utils/utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/utils.py | Apache-2.0 |
def task_wrapper(task_func: Callable) -> Callable:
"""Optional decorator that controls the failure behavior when executing the task function.
This wrapper can be used to:
- make sure loggers are closed even if the task function raises an exception (prevents multirun failure)
- save the exception to a `... | Optional decorator that controls the failure behavior when executing the task function.
This wrapper can be used to:
- make sure loggers are closed even if the task function raises an exception (prevents multirun failure)
- save the exception to a `.log` file
- mark the run as failed with a dedicated f... | task_wrapper | python | fishaudio/fish-speech | fish_speech/utils/utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/utils.py | Apache-2.0 |
def get_metric_value(metric_dict: dict, metric_name: str) -> float:
"""Safely retrieves value of the metric logged in LightningModule."""
if not metric_name:
log.info("Metric name is None! Skipping metric value retrieval...")
return None
if metric_name not in metric_dict:
raise Exc... | Safely retrieves value of the metric logged in LightningModule. | get_metric_value | python | fishaudio/fish-speech | fish_speech/utils/utils.py | https://github.com/fishaudio/fish-speech/blob/master/fish_speech/utils/utils.py | Apache-2.0 |
def inference_wrapper(req: ServeTTSRequest, engine: TTSInferenceEngine):
"""
Wrapper for the inference function.
Used in the API server.
"""
count = 0
for result in engine.inference(req):
match result.code:
case "header":
if isinstance(result.audio, tuple):
... |
Wrapper for the inference function.
Used in the API server.
| inference_wrapper | python | fishaudio/fish-speech | tools/server/inference.py | https://github.com/fishaudio/fish-speech/blob/master/tools/server/inference.py | Apache-2.0 |
def inference_wrapper(
text,
reference_id,
reference_audio,
reference_text,
max_new_tokens,
chunk_length,
top_p,
repetition_penalty,
temperature,
seed,
use_memory_cache,
engine,
):
"""
Wrapper for the inference function.
Used in the Gradio interface.
"""
... |
Wrapper for the inference function.
Used in the Gradio interface.
| inference_wrapper | python | fishaudio/fish-speech | tools/webui/inference.py | https://github.com/fishaudio/fish-speech/blob/master/tools/webui/inference.py | Apache-2.0 |
def get_inference_wrapper(engine) -> Callable:
"""
Get the inference function with the immutable arguments.
"""
return partial(
inference_wrapper,
engine=engine,
) |
Get the inference function with the immutable arguments.
| get_inference_wrapper | python | fishaudio/fish-speech | tools/webui/inference.py | https://github.com/fishaudio/fish-speech/blob/master/tools/webui/inference.py | Apache-2.0 |
def register_objects_from_init(directory: str):
"""
Traverse the specified directory for __init__.py files and
register objects defined in __all__.
"""
for dirpath, _, filenames in os.walk(os.path.normpath(directory)):
if '__init__.py' in filenames:
module_path = dirpath.replace(... |
Traverse the specified directory for __init__.py files and
register objects defined in __all__.
| register_objects_from_init | python | modelscope/data-juicer | service.py | https://github.com/modelscope/data-juicer/blob/master/service.py | Apache-2.0 |
def register_class(module, cls):
"""Register class and its methods as endpoints."""
def create_class_call(cls, method_name: str):
async def class_call(request: Request):
try:
# wrap init method
cls.__init__ = validate_call(
cls.__init__, ... | Register class and its methods as endpoints. | register_class | python | modelscope/data-juicer | service.py | https://github.com/modelscope/data-juicer/blob/master/service.py | Apache-2.0 |
def analyze_modality_tag(code, op_prefix):
"""
Analyze the modality tag for the given code content string. Should be one
of the "Modality Tags" in `tagging_mappings.json`. It makes the choice by
finding the usages of attributes `{modality}_key` and the prefix of the OP
name. If there are multiple mo... |
Analyze the modality tag for the given code content string. Should be one
of the "Modality Tags" in `tagging_mappings.json`. It makes the choice by
finding the usages of attributes `{modality}_key` and the prefix of the OP
name. If multiple modality keys are used, the 'multimodal' tag
wil... | analyze_modality_tag | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def analyze_resource_tag(code):
"""
Analyze the resource tag for the given code content string. Should be one
of the "Resource Tags" in `tagging_mappings.json`. It makes the choice
according to their assigning statement to attribute `_accelerator`.
"""
if '_accelerator = \'cuda\'' in code:
... |
Analyze the resource tag for the given code content string. Should be one
of the "Resource Tags" in `tagging_mappings.json`. It makes the choice
according to their assigning statement to attribute `_accelerator`.
| analyze_resource_tag | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def analyze_model_tags(code):
"""
Analyze the model tag for the given code content string. SHOULD be one of
the "Model Tags" in `tagging_mappings.json`. It makes the choice by finding
the `model_type` arg in `prepare_model` method invocation.
"""
pattern = r'model_type=[\'|\"](.*?)[\'|\"]'
g... |
Analyze the model tag for the given code content string. SHOULD be one of
the "Model Tags" in `tagging_mappings.json`. It makes the choice by finding
the `model_type` arg in `prepare_model` method invocation.
| analyze_model_tags | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def analyze_tag_from_code(code_path):
"""
Analyze the tags for the OP from the given code path.
"""
tags = []
op_prefix = code_path.split('/')[-1].split('_')[0]
with open(code_path, 'r', encoding='utf-8') as fin:
content = fin.read()
# analyze modality
tags.extend(analyze... |
Analyze the tags for the OP from the given code path.
| analyze_tag_from_code | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def get_class_and_docstring(code_path):
"""
Get the class name and its doc strings from the given Python code path.
"""
with open(code_path, 'r', encoding='utf-8') as fin:
code = fin.read()
tree = ast.parse(code)
cls_visitor = ClassVisitor()
cls_visitor.visit(tree)
... |
Get the class name and its doc strings from the given Python code path.
| get_class_and_docstring | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
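The record relies on a `ClassVisitor` whose definition is not shown. One plausible stdlib reconstruction with `ast.NodeVisitor`, collecting (class name, docstring) pairs:

```python
import ast

class ClassVisitor(ast.NodeVisitor):
    """Collect (class name, docstring) pairs from a parsed module.

    A plausible reconstruction; the record only shows the visitor in use.
    """

    def __init__(self):
        self.classes = []

    def visit_ClassDef(self, node: ast.ClassDef):
        self.classes.append((node.name, ast.get_docstring(node)))
        self.generic_visit(node)  # also descend into nested classes

def get_class_and_docstring(source: str):
    tree = ast.parse(source)
    visitor = ClassVisitor()
    visitor.visit(tree)
    return visitor.classes
```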
def get_op_list_from_code_for_formatter():
"""
Get the OP record list for Formatters specifically.
"""
op_record_list = []
type = 'formatter'
for formatter in os.listdir(FORMATTER_CODE_PREFIX):
if formatter in FORMATTER_EXCLUDE:
continue
if formatter == 'formatter.py'... |
Get the OP record list for Formatters specifically.
| get_op_list_from_code_for_formatter | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def get_op_list_from_code():
"""
Get the OP record list for regular OPs (except Formatters).
"""
# get docs for formatters first
op_record_list = get_op_list_from_code_for_formatter()
# get docs for other ops
for type in os.listdir(OP_CODE_PREFIX):
if type in OP_EXCLUDE:
... |
Get the OP record list for regular OPs (except Formatters).
| get_op_list_from_code | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def generate_new_doc(op_record_list):
"""
Generate new docs for the updated OP records.
"""
op_record_dict = {}
for record in op_record_list:
op_record_dict.setdefault(record.type, []).append(record)
# initialize with abstraction
doc = [DOC_ABSTRACT]
# make overview
doc.appen... |
Generate new docs for the updated OP records.
| generate_new_doc | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def check_and_update_op_record(old_op_record_list, new_op_record_list):
"""
Update states in the new OP records based on the old version.
The update categories cover:
1. usability tags update
1.1 If there is no unittest for this OP, set it to alpha;
otherwise, set it to beta.
... |
Update states in the new OP records based on the old version.
The update categories cover:
1. usability tags update
1.1 If there is no unittest for this OP, set it to alpha;
otherwise, set it to beta.
1.2 Then if it's beta in the new version, but it's *manually* checked
... | check_and_update_op_record | python | modelscope/data-juicer | .pre-commit-hooks/build_op_doc.py | https://github.com/modelscope/data-juicer/blob/master/.pre-commit-hooks/build_op_doc.py | Apache-2.0 |
def __init__(self, tokenizer):
"""
Initialization method.
:param tokenizer: tokenizer name on huggingface
"""
self.tokenizer = transformers.AutoTokenizer.from_pretrained(
tokenizer, trust_remote_code=True)
self.vocab_size = len(self.tokenizer) |
Initialization method.
:param tokenizer: tokenizer name on huggingface
| __init__ | python | modelscope/data-juicer | data_juicer/analysis/collector.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/collector.py | Apache-2.0 |
def collect(self,
data_path,
text_key,
num_proc=1) -> 'torch.distributions.Categorical':
"""
Tokenize and collect tokens distribution of input dataset
:param data_path: path to input dataset.
:param text_key: field keys that will be conside... |
Tokenize and collect tokens distribution of input dataset
:param data_path: path to input dataset.
:param text_key: field keys that will be considered into token counts.
:param num_proc: number of processes to count tokens.
:return: token distribution.
| collect | python | modelscope/data-juicer | data_juicer/analysis/collector.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/collector.py | Apache-2.0 |
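The collector tokenizes the dataset and wraps the token counts in a `torch.distributions.Categorical`. A torch-free sketch of the counting step — the HuggingFace tokenizer and the `Categorical` wrapper are omitted:

```python
from collections import Counter

def token_distribution(token_ids, vocab_size):
    """Empirical probability of each token id over a fixed vocabulary."""
    counts = Counter(token_ids)
    total = len(token_ids)
    # ids never seen get probability 0; this list is what Categorical wraps
    return [counts[i] / total for i in range(vocab_size)]
```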
def prepare_tokenizer(
tokenizer,
text_key,
):
"""
Prepare a tokenizer function for dataset.
:param tokenizer: a tokenizer to tokenize sample.
:param text_key: field keys that will be
considered into token counts.
... |
Prepare a tokenizer function for dataset.
:param tokenizer: a tokenizer to tokenize sample.
:param text_key: field keys that will be
considered into token counts.
| prepare_tokenizer | python | modelscope/data-juicer | data_juicer/analysis/collector.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/collector.py | Apache-2.0 |
def get_row_col(total_num, factor=2):
"""
Given the total number of stats figures, get the "best" number of rows and
columns. This function is needed when we need to store all stats figures
into one image.
:param total_num: Total number of stats figures
:param factor: Number of sub-figure types... |
Given the total number of stats figures, get the "best" number of rows and
columns. This function is needed when we need to store all stats figures
into one image.
:param total_num: Total number of stats figures
:param factor: Number of sub-figure types in each figure. In
default, it's 2, ... | get_row_col | python | modelscope/data-juicer | data_juicer/analysis/column_wise_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/column_wise_analysis.py | Apache-2.0 |
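One way to realize the near-square grid the docstring describes, with each figure expanding into `factor` sub-figure columns; the exact rounding rule used by the project may differ:

```python
import math

def get_row_col(total_num: int, factor: int = 2):
    """Near-square grid for `total_num` figures, each holding `factor`
    sub-figures side by side, so the column count is a multiple of factor."""
    rows = max(1, int(math.sqrt(total_num)))
    cols = math.ceil(total_num / rows)  # enough cells for every figure
    return rows, cols * factor
```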
def __init__(self,
dataset,
output_path,
overall_result=None,
save_stats_in_one_file=True):
"""
Initialization method
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results
... |
Initialization method
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results
:param overall_result: optional precomputed overall stats result
:param save_stats_in_one_file: whether save all analysis figures of all
stats int... | __init__ | python | modelscope/data-juicer | data_juicer/analysis/column_wise_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/column_wise_analysis.py | Apache-2.0 |
def analyze(self, show_percentiles=False, show=False, skip_export=False):
"""
Apply analysis and draw the analysis figure for stats.
:param show_percentiles: whether to show the percentile line in
each sub-figure. If it's true, there will be several red
lines to indicate... |
Apply analysis and draw the analysis figure for stats.
:param show_percentiles: whether to show the percentile line in
each sub-figure. If it's true, there will be several red
lines to indicate the quantiles of the stats distributions
:param show: whether to show in a s... | analyze | python | modelscope/data-juicer | data_juicer/analysis/column_wise_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/column_wise_analysis.py | Apache-2.0 |
def draw_hist(self, ax, data, save_path, percentiles=None, show=False):
"""
Draw the histogram for the data.
:param ax: the axes to draw
:param data: data to draw
:param save_path: the path to save the histogram figure
:param percentiles: the overall analysis result of t... |
Draw the histogram for the data.
:param ax: the axes to draw
:param data: data to draw
:param save_path: the path to save the histogram figure
:param percentiles: the overall analysis result of the data
including percentile information
:param show: whether t... | draw_hist | python | modelscope/data-juicer | data_juicer/analysis/column_wise_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/column_wise_analysis.py | Apache-2.0 |
def draw_box(self, ax, data, save_path, percentiles=None, show=False):
"""
Draw the box plot for the data.
:param ax: the axes to draw
:param data: data to draw
:param save_path: the path to save the box figure
:param percentiles: the overall analysis result of the data
... |
Draw the box plot for the data.
:param ax: the axes to draw
:param data: data to draw
:param save_path: the path to save the box figure
:param percentiles: the overall analysis result of the data
including percentile information
:param show: whether to show ... | draw_box | python | modelscope/data-juicer | data_juicer/analysis/column_wise_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/column_wise_analysis.py | Apache-2.0 |
def find_root_verb_and_its_dobj(tree_root):
"""
Find the verb and its object closest to the root.
:param tree_root: the root of lexical tree
:return: valid verb and its object.
"""
# first check if the current node and its children satisfy the condition
if tree_root.pos_ == 'VERB':
... |
Find the verb and its object closest to the root.
:param tree_root: the root of lexical tree
:return: valid verb and its object.
| find_root_verb_and_its_dobj | python | modelscope/data-juicer | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0 |
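The traversal above runs over a spaCy dependency tree. A dependency-free sketch with a minimal `Tok` stand-in for `spacy.tokens.Token` shows the shape of the search; the real function also checks POS tags of the object:

```python
class Tok:
    """Minimal stand-in for a spaCy token: pos_, dep_, lemma_, children."""

    def __init__(self, lemma, pos, dep, children=()):
        self.lemma_, self.pos_, self.dep_ = lemma, pos, dep
        self.children = list(children)

def find_root_verb_and_its_dobj(tree_root):
    """Return (verb lemma, dobj lemma) for the verb closest to the root."""
    if tree_root.pos_ == 'VERB':
        for child in tree_root.children:
            if child.dep_ == 'dobj':
                return tree_root.lemma_, child.lemma_
        return tree_root.lemma_, None  # verb found, but no direct object
    for child in tree_root.children:
        found = find_root_verb_and_its_dobj(child)
        if found is not None:
            return found
    return None
```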
def find_root_verb_and_its_dobj_in_string(nlp, s, first_sent=True):
"""
Find the verb and its object closest to the root of lexical tree of input
string.
:param nlp: the diversity model to analyze the diversity strings
:param s: the string to be analyzed
:param first_sent: whether to analyze th... |
Find the verb and its object closest to the root of lexical tree of input
string.
:param nlp: the diversity model to analyze the diversity strings
:param s: the string to be analyzed
:param first_sent: whether to analyze the first sentence in the
input string only. If it's true, return the... | find_root_verb_and_its_dobj_in_string | python | modelscope/data-juicer | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0 |
def get_diversity(dataset, top_k_verbs=20, top_k_nouns=4, **kwargs):
"""
Given the lexical tree analysis result, return the diversity results.
:param dataset: lexical tree analysis result
:param top_k_verbs: only keep the top_k_verbs largest verb groups
:param top_k_nouns: only keep the top_k_nouns... |
Given the lexical tree analysis result, return the diversity results.
:param dataset: lexical tree analysis result
:param top_k_verbs: only keep the top_k_verbs largest verb groups
:param top_k_nouns: only keep the top_k_nouns largest noun groups
for each verb group
:param kwargs: extra ar... | get_diversity | python | modelscope/data-juicer | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0 |
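A stdlib sketch of the top-k grouping the docstring describes, assuming the lexical-tree result is a sequence of (verb, noun) pairs; the real implementation works on a pandas frame:

```python
from collections import Counter

def get_diversity(pairs, top_k_verbs=20, top_k_nouns=4):
    """Keep the largest verb groups, then the largest noun groups in each."""
    pairs = list(pairs)
    verb_counts = Counter(verb for verb, _ in pairs)
    diversity = {}
    for verb, _ in verb_counts.most_common(top_k_verbs):
        noun_counts = Counter(noun for v, noun in pairs if v == verb)
        diversity[verb] = [noun for noun, _ in noun_counts.most_common(top_k_nouns)]
    return diversity
```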
def __init__(self, dataset, output_path, lang_or_model='en'):
"""Initialization method.
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results
:param lang_or_model: the diversity model or a specific language used
    to load the diversity model."""
... | Initialization method.
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results
:param lang_or_model: the diversity model or a specific language used
    to load the diversity model. | __init__ | python | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0
def compute(self, lang_or_model=None, column_name='text'):
"""
Apply lexical tree analysis on each sample.
:param lang_or_model: the diversity model or a specific language
used to load the diversity model
:param column_name: the name of column to be analyzed
:return:... |
Apply lexical tree analysis on each sample.
:param lang_or_model: the diversity model or a specific language
used to load the diversity model
:param column_name: the name of column to be analyzed
:return: the analysis result.
| compute | python | modelscope/data-juicer | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0 |
def analyze(self,
lang_or_model=None,
column_name='text',
postproc_func=get_diversity,
**postproc_kwarg):
"""
Apply diversity analysis on the whole dataset.
:param lang_or_model: the diversity model or a specific language
... |
Apply diversity analysis on the whole dataset.
:param lang_or_model: the diversity model or a specific language
used to load the diversity model
:param column_name: the name of column to be analyzed
:param postproc_func: function to analyze diversity. In default,
... | analyze | python | modelscope/data-juicer | data_juicer/analysis/diversity_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/diversity_analysis.py | Apache-2.0 |
def draw_heatmap(data,
xlabels,
ylabels='auto',
figsize=None,
triangle=False,
show=False):
"""
Draw heatmap of input data with special labels.
:param data: input data, now support
[`list`, `tuple`, `numpy array`, '... |
Draw heatmap of input data with special labels.
:param data: input data, now support
[`list`, `tuple`, `numpy array`, 'torch tensor']
:param xlabels: x axis labels.
:param ylabels: y axis labels, if None, use xlabels.
:param figsize: figure size.
:param triangle: only display triangle.... | draw_heatmap | python | modelscope/data-juicer | data_juicer/analysis/draw.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/draw.py | Apache-2.0 |
def _convert_to_tensor(self, p):
"""
Convert input data to torch tensor.
:param p: input data, now support
[`scalar`,`list`, `tuple`, `torch binary file`, and `Categorical`].
:return: torch tensor
"""
if isinstance(p, torch.Tensor):
return p
... |
Convert input data to torch tensor.
:param p: input data, now support
[`scalar`,`list`, `tuple`, `torch binary file`, and `Categorical`].
:return: torch tensor
| _convert_to_tensor | python | modelscope/data-juicer | data_juicer/analysis/measure.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/measure.py | Apache-2.0 |
def _convert_to_categorical(self, p):
"""
Convert input data to torch Categorical.
:param p: input data, now support
[`scalar`,`list`, `tuple`, `torch binary file`, and `Categorical`].
:return: torch Categorical
"""
if isinstance(p, td.Categorical):
... |
Convert input data to torch Categorical.
:param p: input data, now support
[`scalar`,`list`, `tuple`, `torch binary file`, and `Categorical`].
:return: torch Categorical
| _convert_to_categorical | python | modelscope/data-juicer | data_juicer/analysis/measure.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/measure.py | Apache-2.0 |
def measure(self, p, q):
"""
:param p: the first feature or distribution. (stats/tags/categories)
:param q: the second feature or distribution. (stats/tags/categories)
:return: the T-Test results object -- ([ref](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats._result_cl... |
:param p: the first feature or distribution. (stats/tags/categories)
:param q: the second feature or distribution. (stats/tags/categories)
:return: the T-Test results object -- ([ref](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats._result_classes.TtestResult.html#scipy.stats._... | measure | python | modelscope/data-juicer | data_juicer/analysis/measure.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/measure.py | Apache-2.0 |
def __init__(self, dataset, output_path):
"""
Initialization method.
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results.
"""
self.stats = pd.DataFrame(dataset[Fields.stats])
self.meta = pd.DataFrame(dataset[Fields.me... |
Initialization method.
:param dataset: the dataset to be analyzed
:param output_path: path to store the analysis results.
| __init__ | python | modelscope/data-juicer | data_juicer/analysis/overall_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/overall_analysis.py | Apache-2.0 |
def analyze(self, percentiles=[], num_proc=1, skip_export=False):
"""
Apply overall analysis on the whole dataset based on the describe
method of pandas.
:param percentiles: percentiles to analyze
:param num_proc: number of processes to analyze the dataset
:param skip_ex... |
Apply overall analysis on the whole dataset based on the describe
method of pandas.
:param percentiles: percentiles to analyze
:param num_proc: number of processes to analyze the dataset
:param skip_export: whether export the results to disk
:return: the overall analysi... | analyze | python | modelscope/data-juicer | data_juicer/analysis/overall_analysis.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/analysis/overall_analysis.py | Apache-2.0 |
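The real implementation leans on pandas `DataFrame.describe`; the same per-column summary can be sketched with `statistics.quantiles` (percentiles here are integers 1–99, an assumption of this sketch):

```python
import statistics

def overall_describe(column, percentiles=(25, 50, 75)):
    """Describe-like summary of one numeric stats column."""
    cuts = statistics.quantiles(column, n=100, method='inclusive')
    summary = {
        'count': len(column),
        'mean': statistics.fmean(column),
        'min': min(column),
        'max': max(column),
    }
    for p in percentiles:
        summary[f'{p}%'] = cuts[p - 1]  # cuts[i] is the (i+1)-th percentile
    return summary
```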
def init_configs(args: Optional[List[str]] = None, which_entry: object = None):
"""
initialize the jsonargparse parser and parse configs from one of:
1. POSIX-style command line args;
2. config files in yaml (json and jsonnet supersets);
3. environment variables
4. hard-coded de... |
initialize the jsonargparse parser and parse configs from one of:
1. POSIX-style command line args;
2. config files in yaml (json and jsonnet supersets);
3. environment variables
4. hard-coded defaults
:param args: list of params, e.g., ['--config', 'cfg.yaml'], default None.
... | init_configs | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def init_setup_from_cfg(cfg: Namespace):
"""
Do some extra setup tasks after parsing config file or command line.
1. create working directory and a log directory
2. update cache directory
3. update checkpoint and `temp_dir` of tempfile
:param cfg: an original cfg
:return: an updated cfg... |
Do some extra setup tasks after parsing config file or command line.
1. create working directory and a log directory
2. update cache directory
3. update checkpoint and `temp_dir` of tempfile
:param cfg: an original cfg
:return: an updated cfg
| init_setup_from_cfg | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def _collect_config_info_from_class_docs(configurable_ops, parser):
"""
Add ops and their params to the parser for the command line, with optimized performance.
"""
with timing_context('Collecting operator configuration info'):
op_params = {}
# Add arguments for all provided operators
for ... |
Add ops and their params to the parser for the command line, with optimized performance.
| _collect_config_info_from_class_docs | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def sort_op_by_types_and_names(op_name_classes):
"""
Split op items by op type, sort each group of sub-ops by name, and then
concatenate them together.
:param op_name_classes: a list of op modules
:return: sorted op list, each item is a pair of op_name and
op_class
"""
with timing_context('Sorting... |
Split op items by op type, sort each group of sub-ops by name, and then
concatenate them together.
:param op_name_classes: a list of op modules
:return: sorted op list, each item is a pair of op_name and
op_class
| sort_op_by_types_and_names | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def update_op_process(cfg, parser, used_ops=None):
"""
Update operator process configuration with optimized performance.
Args:
cfg: Configuration namespace
parser: Argument parser
used_ops: Set of operator names that are actually used in the config
"""
if used_ops is None:
... |
Update operator process configuration with optimized performance.
Args:
cfg: Configuration namespace
parser: Argument parser
used_ops: Set of operator names that are actually used in the config
| update_op_process | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def export_config(cfg: Namespace,
path: str,
format: str = 'yaml',
skip_none: bool = True,
skip_check: bool = True,
overwrite: bool = False,
multifile: bool = True):
"""
Save the config object, some param... |
Save the config object, some params are from jsonargparse
:param cfg: cfg object to save (Namespace type)
:param path: the save path
:param format: 'yaml', 'json', 'json_indented', 'parser_mode'
:param skip_none: Whether to exclude entries whose value is None.
:param skip_check: Whether to ski... | export_config | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def merge_config(ori_cfg: Namespace, new_cfg: Namespace):
"""
Merge configuration from new_cfg into ori_cfg
:param ori_cfg: the original configuration object, whose type is
expected as namespace from jsonargparse
:param new_cfg: the configuration object to be merged, whose type is
expec... |
Merge configuration from new_cfg into ori_cfg
:param ori_cfg: the original configuration object, whose type is
expected as namespace from jsonargparse
:param new_cfg: the configuration object to be merged, whose type is
expected as dict or namespace from jsonargparse
:return: cfg_afte... | merge_config | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def prepare_side_configs(ori_config: Union[str, Namespace, Dict]):
"""
parse the config if ori_config is a string of a config file path with
yaml, yml or json format
:param ori_config: a config dict or a string of a config file path with
yaml, yml or json format
:return: a config dict
... |
parse the config if ori_config is a string of a config file path with
yaml, yml or json format
:param ori_config: a config dict or a string of a config file path with
yaml, yml or json format
:return: a config dict
| prepare_side_configs | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def get_init_configs(cfg: Union[Namespace, Dict]):
"""
set init configs of data-juicer for cfg
"""
temp_dir = tempfile.gettempdir()
temp_file = os.path.join(temp_dir, 'job_dj_config.json')
if isinstance(cfg, Namespace):
cfg = namespace_to_dict(cfg)
# create an temp config file
wi... |
set init configs of data-juicer for cfg
| get_init_configs | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def get_default_cfg():
"""Get default config values from config_all.yaml"""
cfg = Namespace()
# Get path to config_all.yaml
config_dir = os.path.dirname(os.path.abspath(__file__))
default_config_path = os.path.join(config_dir,
'../../configs/config_min.yaml')
... | Get default config values from config_all.yaml | get_default_cfg | python | modelscope/data-juicer | data_juicer/config/config.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/config/config.py | Apache-2.0 |
def execute_and_probe(dataset, operators, sample_interval=0.5):
"""
Process the input dataset and probe related information for each OP in
the specified operator list.
For now, we support the following targets to probe:
"resource": resource utilization for each OP.
"spee... |
Process the input dataset and probe related information for each OP in
the specified operator list.
For now, we support the following targets to probe:
"resource": resource utilization for each OP.
"speed": average processing speed for each OP.
The probe result is a li... | execute_and_probe | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def take_batch(dataset, config):
"""
Split the dataset into batches based on configuration and load factor.
:param dataset: The dataset to be split
:param config: Configuration settings, including batch size
:return: An iterator of batches
"""
# get initial batch... |
Split the dataset into batches based on configuration and load factor.
:param dataset: The dataset to be split
:param config: Configuration settings, including batch size
:return: An iterator of batches
| take_batch | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def adapt_workloads(self, dataset, operators):
"""
Manage the scheduling and load balancing for the dataset processing.
:param dataset: The dataset that needs to be processed
:param operators: Operators in the data recipe
"""
# TODO: set batch size to 1 for all OPs for p... |
Manage the scheduling and load balancing for the dataset processing.
:param dataset: The dataset that needs to be processed
:param operators: Operators in the data recipe
| adapt_workloads | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def probe_small_batch(self, dataset, operators):
"""
Perform small batch pre-execution to probe available resources,
current load and estimated OP speed, returning load factors and speed
ranks for each OP.
Notice: the probe should be run with cache enabled to avoid removing
... |
Perform small batch pre-execution to probe available resources,
current load and estimated OP speed, returning load factors and speed
ranks for each OP.
Notice: the probe should be run with cache enabled to avoid removing
the cache files of the input dataset.
:param da... | probe_small_batch | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def batch_size_strategy(self, load_analysis_res, base_bs=1, util_th=0.9):
"""
Decide the batch size for each op according to their workload analysis
result and expected utilization threshold. We need to guarantee that
the resource utilization won't exceed the threshold. Now we only
... |
Decide the batch size for each op according to their workload analysis
result and expected utilization threshold. We need to guarantee that
the resource utilization won't exceed the threshold. Now we only
consider the buckets effect, which means the max batch size is decided
by ... | batch_size_strategy | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
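A simplified sketch of the batch-size rule: assume utilization grows linearly with batch size and cap each op so the projection stays under the threshold. The real bucket-based rule in the record is more involved:

```python
def batch_size_strategy(load_analysis, base_bs=1, util_th=0.9):
    """Pick a per-op batch size that keeps projected utilization under util_th.

    `load_analysis` maps op name -> measured utilization at `base_bs`
    (a hypothetical input shape for this sketch).
    """
    sizes = {}
    for op, util in load_analysis.items():
        scale = max(1, int(util_th / max(util, 1e-9)))  # never go below 1x
        sizes[op] = base_bs * scale
    return sizes
```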
def analyze_small_batch(self, dataset, current_state):
"""
Perform small batch analysis to probe the current OP-wise stats/meta
distributions. The analyzed results will be stored in the directory
`{work_dir}/insight_mining`.
Notice: the probe should be run with cache enabled to ... |
Perform small batch analysis to probe the current OP-wise stats/meta
distributions. The analyzed results will be stored in the directory
`{work_dir}/insight_mining`.
Notice: the probe should be run with cache enabled to avoid removing
the cache files of the input dataset.
... | analyze_small_batch | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def insight_mining(self, pval_th=0.05):
"""
Mining the insights from the OP-wise analysis results. For now, we use
T-Test to check the significance of stats/meta changes before and after
each OP processing. If the p-value is less than a given threshold
(usually 0.05), we think th... |
Mining the insights from the OP-wise analysis results. For now, we use
T-Test to check the significance of stats/meta changes before and after
each OP processing. If the p-value is less than a given threshold
(usually 0.05), we think the stats/meta changes are significant. The
i... | insight_mining | python | modelscope/data-juicer | data_juicer/core/adapter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/adapter.py | Apache-2.0 |
def __init__(self, cfg: Optional[Namespace] = None):
"""
Initialization method.
:param cfg: optional jsonargparse Namespace dict.
"""
self.cfg = init_configs(which_entry=self) if cfg is None else cfg
self.work_dir = self.cfg.work_dir
if self.cfg.use_cache:
... |
Initialization method.
:param cfg: optional jsonargparse Namespace dict.
| __init__ | python | modelscope/data-juicer | data_juicer/core/analyzer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/analyzer.py | Apache-2.0 |
def run(self,
dataset: Union[Dataset, NestedDataset] = None,
load_data_np: Optional[PositiveInt] = None,
skip_export: bool = False,
skip_return: bool = False):
"""
Running the dataset analysis pipeline.
:param dataset: a Dataset object to be analy... |
Running the dataset analysis pipeline.
:param dataset: a Dataset object to be analyzed.
:param load_data_np: number of workers when loading the dataset.
:param skip_export: whether export the results into disk
:param skip_return: skip return for API called.
:return: ana... | run | python | modelscope/data-juicer | data_juicer/core/analyzer.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/analyzer.py | Apache-2.0 |
def __init__(self,
export_path,
export_shard_size=0,
export_in_parallel=True,
num_proc=1,
export_ds=True,
keep_stats_in_res_ds=False,
keep_hashes_in_res_ds=False,
export_stats=True):
... |
Initialization method.
:param export_path: the path to export datasets.
:param export_shard_size: the size of each shard of exported
dataset. In default, it's 0, which means export the dataset
to a single file.
:param num_proc: number of process to export the da... | __init__ | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |
def _get_suffix(self, export_path):
"""
Get the suffix of export path and check if it's supported.
We only support ["jsonl", "json", "parquet"] for now.
:param export_path: the path to export datasets.
:return: the suffix of export_path.
"""
suffix = export_path... |
Get the suffix of export path and check if it's supported.
We only support ["jsonl", "json", "parquet"] for now.
:param export_path: the path to export datasets.
:return: the suffix of export_path.
| _get_suffix | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |
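The suffix check is a small validation helper; a sketch under the supported-format list stated in the docstring:

```python
def get_suffix(export_path, supported=('jsonl', 'json', 'parquet')):
    """Extract the export suffix and reject unsupported formats."""
    suffix = export_path.rsplit('.', 1)[-1].lower()
    if suffix not in supported:
        raise NotImplementedError(
            f'Suffix of export path [{export_path}] is not supported; '
            f'expected one of {supported}')
    return suffix
```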
def _export_impl(self, dataset, export_path, suffix, export_stats=True):
"""
Export a dataset to specific path.
:param dataset: the dataset to export.
:param export_path: the path to export the dataset.
:param suffix: suffix of export path.
:param export_stats: whether t... |
Export a dataset to specific path.
:param dataset: the dataset to export.
:param export_path: the path to export the dataset.
:param suffix: suffix of export path.
:param export_stats: whether to export stats of dataset.
:return:
| _export_impl | python | modelscope/data-juicer | data_juicer/core/exporter.py | https://github.com/modelscope/data-juicer/blob/master/data_juicer/core/exporter.py | Apache-2.0 |