| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def set_oumi_install_editable(setup: str) -> str:
"""Tries to replace oumi PyPi installs with editable installation from source.
For example, the following line:
`pip install uv && uv pip -q install oumi[gpu,dev] vllm`
will be replaced with:
`pip install uv && uv pip -q install -e '.[gpu,de... | Tries to replace oumi PyPI installs with an editable installation from source.
For example, the following line:
`pip install uv && uv pip -q install oumi[gpu,dev] vllm`
will be replaced with:
`pip install uv && uv pip -q install -e '.[gpu,dev]' vllm`
Args:
setup (str): The bash setup ... | set_oumi_install_editable | python | oumi-ai/oumi | src/oumi/utils/str_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/str_utils.py | Apache-2.0 |
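The substitution described in this row can be sketched with a single regular expression. This is a hypothetical reimplementation, not the actual oumi source; the name `set_oumi_install_editable_sketch` and the regex are assumptions:

```python
import re

def set_oumi_install_editable_sketch(setup: str) -> str:
    # Rewrite `oumi` or `oumi[extras]` in a pip/uv install command as an
    # editable install from the current source tree.
    return re.sub(
        r"\boumi(\[[^\]]*\])?",
        lambda m: f"-e '.{m.group(1) or ''}'",
        setup,
    )

result = set_oumi_install_editable_sketch(
    "pip install uv && uv pip -q install oumi[gpu,dev] vllm"
)
# result == "pip install uv && uv pip -q install -e '.[gpu,dev]' vllm"
```

The lambda keeps any `[extras]` group attached to the editable path, matching the example in the docstring.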
def truncate_to_max_tokens_limit(
text: str,
tokenizer: BaseTokenizer,
*,
max_tokens: int,
truncation_side: str = "right",
) -> tuple[str, int]:
"""Truncates text to `max_length` in tokens.
Args:
text: A text prompt.
tokenizer: The tokenizer used for encoding the data.
... | Truncates text to `max_length` in tokens.
Args:
text: A text prompt.
tokenizer: The tokenizer used for encoding the data.
max_tokens: Maximum number of tokens to keep.
truncation_side: The side to truncate the tokens ("right" or "left").
Returns:
A tuple containing trun... | truncate_to_max_tokens_limit | python | oumi-ai/oumi | src/oumi/utils/str_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/str_utils.py | Apache-2.0 |
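The truncation logic above can be sketched without a real tokenizer by passing encode/decode callables; the toy whitespace "tokenizer" below stands in for `BaseTokenizer`, and the function name is illustrative:

```python
def truncate_to_max_tokens_sketch(
    text, encode, decode, *, max_tokens, truncation_side="right"
):
    # Encode, drop overflow tokens from the chosen side, decode back.
    # Returns (truncated_text, number_of_tokens_removed).
    tokens = encode(text)
    num_removed = max(0, len(tokens) - max_tokens)
    if num_removed:
        tokens = tokens[:max_tokens] if truncation_side == "right" else tokens[-max_tokens:]
    return decode(tokens), num_removed

# Whitespace splitting/joining stands in for a real tokenizer.
text, removed = truncate_to_max_tokens_sketch(
    "a b c d", str.split, " ".join, max_tokens=2, truncation_side="left"
)
# text == "c d", removed == 2
```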
def truncate_text_pieces_to_max_tokens_limit(
text_pieces: list[str],
tokenizer: BaseTokenizer,
*,
max_tokens: int,
truncation_side: str = "right",
) -> list[str]:
"""Truncates text pieces to total length not exceeding `max_length`.
Args:
text_pieces: A list of text prompts.
... | Truncates text pieces to total length not exceeding `max_length`.
Args:
text_pieces: A list of text prompts.
tokenizer: The tokenizer used for encoding the data.
max_tokens: Maximum number of tokens to keep in all text pieces combined.
truncation_side: The side to truncate the token... | truncate_text_pieces_to_max_tokens_limit | python | oumi-ai/oumi | src/oumi/utils/str_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/str_utils.py | Apache-2.0 |
def disable_dropout(hf_config: transformers.PretrainedConfig) -> None:
"""Detects dropout probabilities in config and sets them to 0.0.
This essentially removes the dropout layer, which can aid the compiled model's
speed. Dropout is normally not used for LLM training, and also hinders the
effectiveness... | Detects dropout probabilities in config and sets them to 0.0.
This essentially removes the dropout layer, which can aid the compiled model's
speed. Dropout is normally not used for LLM training, and also hinders the
effectiveness of model compilation. We assume any attribute with "drop" in the name
and... | disable_dropout | python | oumi-ai/oumi | src/oumi/utils/torch_naming_heuristics.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_naming_heuristics.py | Apache-2.0 |
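The attribute-name heuristic described above can be demonstrated on a stand-in config object; `FakeConfig` is an assumption used only for the demo, not `transformers.PretrainedConfig`:

```python
class FakeConfig:
    # Stand-in for transformers.PretrainedConfig (assumption for the demo).
    def __init__(self):
        self.attention_dropout = 0.1
        self.resid_pdrop = 0.05
        self.hidden_size = 768

def disable_dropout_sketch(config) -> None:
    # Zero every float attribute whose name mentions "drop",
    # leaving other attributes untouched.
    for name, value in list(vars(config).items()):
        if "drop" in name and isinstance(value, float):
            setattr(config, name, 0.0)

cfg = FakeConfig()
disable_dropout_sketch(cfg)
```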
def group_trainable_params(
model: torch.nn.Module, weight_decay: float
) -> list[dict[str, Any]]:
"""Groups trainable params by weight decay for optimization.
As a rule of thumb, we generally want to weight decay all 2d matrices, i.e.
weight tensors for matmuls/embeddings, and not biases/layernorms.
... | Groups trainable params by weight decay for optimization.
As a rule of thumb, we generally want to weight decay all 2d matrices, i.e.
weight tensors for matmuls/embeddings, and not biases/layernorms.
Args:
model: The model whose parameters will be optimized.
weight_decay: The weight decay ... | group_trainable_params | python | oumi-ai/oumi | src/oumi/utils/torch_naming_heuristics.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_naming_heuristics.py | Apache-2.0 |
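The 2d-matrix rule of thumb can be illustrated with a torch-free analogue that groups parameter names by tensor rank; the shapes and function name below are hypothetical:

```python
def group_params_by_decay_sketch(named_shapes, weight_decay):
    # Apply weight decay to >=2-d tensors (matmul/embedding weights);
    # exempt 1-d params such as biases and layernorm scales.
    decay = [n for n, shape in named_shapes.items() if len(shape) >= 2]
    no_decay = [n for n, shape in named_shapes.items() if len(shape) < 2]
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

groups = group_params_by_decay_sketch(
    {"fc.weight": (128, 64), "fc.bias": (128,), "ln.weight": (64,)},
    weight_decay=0.1,
)
```

The returned list matches the `[{"params": ..., "weight_decay": ...}]` shape that torch optimizers accept as parameter groups.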
def guess_transformer_layer_cls(model: nn.Module) -> type[nn.Module]:
"""Guess the transformer layer class based on the model architecture."""
for module in model.modules():
for layer_pattern in ["layer", "block", "transformerlayer"]:
layer_name = str(type(module)).lower()
if la... | Guess the transformer layer class based on the model architecture. | guess_transformer_layer_cls | python | oumi-ai/oumi | src/oumi/utils/torch_naming_heuristics.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_naming_heuristics.py | Apache-2.0 |
def resolve_transformer_layer_cls_string_as_module_set(
class_names: str,
) -> set[type[nn.Module]]:
"""Get a module class from its string name."""
result: set[type[nn.Module]] = set()
for class_name in _parse_transformer_layer_cls_string(class_names):
parts = class_name.rsplit(".", maxsplit=1)
... | Get a module class from its string name. | resolve_transformer_layer_cls_string_as_module_set | python | oumi-ai/oumi | src/oumi/utils/torch_naming_heuristics.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_naming_heuristics.py | Apache-2.0 |
def simplify_transformer_layer_cls_string(class_names: str) -> str:
"""Replaces fully-qualified class names with pure class names.
For example, converts 'foo.Block,foo.util.Decoder' to 'Block,Decoder'.
The `accelerate` library expects the simplified format, while the OUMI trainer requires
fully-qualified ... | Replaces fully-qualified class names with pure class names.
For example, converts 'foo.Block,foo.util.Decoder' to 'Block,Decoder'.
The `accelerate` library expects the simplified format, while the OUMI trainer requires
fully-qualified class names.
| simplify_transformer_layer_cls_string | python | oumi-ai/oumi | src/oumi/utils/torch_naming_heuristics.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_naming_heuristics.py | Apache-2.0 |
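The conversion in this row ('foo.Block,foo.util.Decoder' to 'Block,Decoder') is fully specified by the docstring and can be sketched in one expression; the function name is illustrative:

```python
def simplify_layer_cls_string_sketch(class_names: str) -> str:
    # Keep only the final component of each dotted class name.
    return ",".join(
        name.strip().rsplit(".", maxsplit=1)[-1] for name in class_names.split(",")
    )

# Matches the docstring's example conversion.
simplified = simplify_layer_cls_string_sketch("foo.Block,foo.util.Decoder")
# simplified == "Block,Decoder"
```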
def device_cleanup() -> None:
"""Empties gpu cache, good to do before and after training for cleanup."""
logger.debug("Running garbage collection.")
gc.collect()
if torch.cuda.is_available():
logger.debug("Cleaning up GPU memory.")
logger.debug(
"GPU memory occupied before c... | Empties gpu cache, good to do before and after training for cleanup. | device_cleanup | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def format_cudnn_version(v: Optional[int]) -> str:
"""Formats the cuDNN version number.
Args:
v: The cuDNN version number.
Returns:
A formatted string.
"""
if v is None:
return ""
return ".".join(map(str, (v // 1000, v // 100 % 10, v % 100))) | Formats the cuDNN version number.
Args:
v: The cuDNN version number.
Returns:
A formatted string.
| format_cudnn_version | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
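The row above shows this function in full, so it can be exercised directly. The arithmetic decodes an integer of the form major*1000 + minor*100 + patch (the `8907` example below is an assumption consistent with that formula, e.g. cuDNN 8.9.7):

```python
from typing import Optional

def format_cudnn_version(v: Optional[int]) -> str:
    """Formats the cuDNN version number, e.g. 8907 -> '8.9.7'."""
    if v is None:
        return ""
    return ".".join(map(str, (v // 1000, v // 100 % 10, v % 100)))
```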
def log_devices_info(filepath: Optional[Path] = None) -> None:
"""Logs high-level info about all available accelerator devices."""
if not torch.cuda.is_available():
return
ncpus = os.cpu_count()
num_devices = torch.cuda.device_count()
log_lines = [f"CPU cores: {ncpus} CUDA devices: {num_dev... | Logs high-level info about all available accelerator devices. | log_devices_info | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def log_peak_gpu_memory():
"""Log the peak GPU memory usage."""
if torch.cuda.is_available():
peak_memory = torch.cuda.max_memory_allocated() / 1024**3 # Convert to GB
logger.info(f"Peak GPU memory usage: {peak_memory:.2f} GB") | Log the peak GPU memory usage. | log_peak_gpu_memory | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def create_model_summary(model: Any) -> str:
"""Creates a model summary as a free-formed string."""
lines = ["Model summary:", repr(model), ""]
module_lines = [f"{name} ({type(layer)})" for name, layer in model.named_modules()]
lines.append(f"Modules ({len(module_lines)}):")
lines.extend(module_li... | Creates a model summary as a free-formed string. | create_model_summary | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def get_device_name() -> str:
"""Returns the name of the device, assuming all are identical."""
device_name = "CPU"
if torch.cuda.is_available():
# Assume all devices are identical
device_name = torch.cuda.get_device_name(0)
elif torch.backends.mps.is_available():
device_name = "... | Returns the name of the device, assuming all are identical. | get_device_name | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def __post_init__(self):
"""Ensure that the parameters are valid."""
for name, value in [
("all_params", self.all_params),
("trainable_params", self.trainable_params),
("embedding_params", self.embedding_params),
]:
if value < 0:
ra... | Ensure that the parameters are valid. | __post_init__ | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def _get_parameter_names(
model: torch.nn.Module, forbidden_layer_types: list[Any]
) -> list[str]:
"""Returns the names of the model parameters that are not inside a forbidden layer.
Borrowed from
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py.
"""
result = []... | Returns the names of the model parameters that are not inside a forbidden layer.
Borrowed from
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py.
| _get_parameter_names | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def count_model_parameters(model: torch.nn.Module) -> ModelParameterCount:
"""Creates a basic counter of the parameters in a neural model.
Args:
model: The torch-implemented neural network.
Returns:
ModelParameterCount: A ModelParameterCount for the underlying model.
"""
trainable_... | Creates a basic counter of the parameters in a neural model.
Args:
model: The torch-implemented neural network.
Returns:
ModelParameterCount: A ModelParameterCount for the underlying model.
| count_model_parameters | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def get_torch_dtype(torch_dtype_str: str) -> torch.dtype:
"""Converts string dtype to torch.dtype."""
torch_dtype_str = torch_dtype_str.lower()
if torch_dtype_str in ["f64", "float64", "double"]:
return torch.float64
elif torch_dtype_str in ["f32", "float32", "float"]:
return torch.float... | Converts string dtype to torch.dtype. | get_torch_dtype | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def get_dtype_size_in_bytes(
dtype: Union[str, torch.dtype, npt.DTypeLike],
) -> int:
"""Returns size of this dtype in bytes."""
if isinstance(dtype, torch.dtype):
return dtype.itemsize
elif isinstance(dtype, str):
if not dtype:
raise ValueError("Empty string is not a valid d... | Returns size of this dtype in bytes. | get_dtype_size_in_bytes | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def estimate_sample_dict_size_in_bytes(sample: dict[str, Any]) -> int:
"""Estimates the approximate total number of bytes in a provided sample.
Training sample is expected to be a dictionary, where a value is a list,
tensor, or a numpy array.
The function works in best-effort mode, i.e., 100% accuracy... | Estimates the approximate total number of bytes in a provided sample.
Training sample is expected to be a dictionary, where a value is a list,
tensor, or a numpy array.
The function works in best-effort mode, i.e., 100% accuracy isn't guaranteed.
The implementation is slow, and shouldn't be called in ... | estimate_sample_dict_size_in_bytes | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def coerce_model_to_dtype(model: torch.nn.Module, dtype: torch.dtype) -> None:
"""Coerces the model to the desired dtype.
This is needed as a temporary workaround to support QLoRA FSDP training. See:
https://github.com/huggingface/accelerate/issues/1620#issuecomment-2407102051
"""
for name, module ... | Coerces the model to the desired dtype.
This is needed as a temporary workaround to support QLoRA FSDP training. See:
https://github.com/huggingface/accelerate/issues/1620#issuecomment-2407102051
| coerce_model_to_dtype | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def convert_to_list_of_tensors(values: list[T]) -> list[torch.Tensor]:
"""Converts a list of array-like objects into alist of torch tensors."""
if len(values) == 0:
return []
first_item = values[0]
if isinstance(first_item, torch.Tensor):
return [cast(torch.Tensor, item) for item in val... | Converts a list of array-like objects into a list of torch tensors. | convert_to_list_of_tensors | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def pad_sequences_right_side(
sequences: list[T], *, padding_value: float = 0
) -> torch.Tensor:
"""Pads a list of variable-length tensors to a single tensor.
Appends `padding_value` to the right side of each sequence
to expand to the longest length.
Args:
sequences: list of variable lengt... | Pads a list of variable-length tensors to a single tensor.
Appends `padding_value` to the right side of each sequence
to expand to the longest length.
Args:
sequences: list of variable length sequences.
padding_value: value for padded elements. Default: 0.
Returns:
A tensor wi... | pad_sequences_right_side | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def pad_sequences_left_side(
sequences: list[T], *, padding_value: float = 0
) -> torch.Tensor:
"""Pads a list of variable-length tensors to a single tensor.
Prepends `padding_value` to the left side of each sequence
to expand to the longest length.
Args:
sequences: list of variable length... | Pads a list of variable-length tensors to a single tensor.
Prepends `padding_value` to the left side of each sequence
to expand to the longest length.
Args:
sequences: list of variable length sequences.
padding_value: value for padded elements. Default: 0.
Returns:
A tensor wi... | pad_sequences_left_side | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def pad_sequences(
sequences: list[T], *, padding_value: float = 0, padding_side: Optional[str] = None
) -> torch.Tensor:
"""Pads a list of variable-length tensors to a single tensor.
Args:
sequences: list of variable length sequences.
padding_value: value for padded elements. Default: 0.
... | Pads a list of variable-length tensors to a single tensor.
Args:
sequences: list of variable length sequences.
padding_value: value for padded elements. Default: 0.
padding_side: side to apply padding to. Valid values: 'right', 'left'.
If unspecified (`None`), defaults to `righ... | pad_sequences | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
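The three padding helpers above share one pattern: extend every sequence to the longest length, appending or prepending the pad value. A list-of-lists analogue (the real functions return a torch tensor; this sketch and its name are assumptions):

```python
def pad_sequences_sketch(sequences, *, padding_value=0, padding_side="right"):
    # List-of-lists analogue of the tensor padding helpers.
    if padding_side not in ("right", "left"):
        raise ValueError(f"Unsupported padding side: {padding_side}")
    max_len = max(len(s) for s in sequences)
    padded = []
    for s in sequences:
        pad = [padding_value] * (max_len - len(s))
        padded.append(list(s) + pad if padding_side == "right" else pad + list(s))
    return padded

right = pad_sequences_sketch([[1, 2, 3], [4]])
left = pad_sequences_sketch([[1, 2, 3], [4]], padding_side="left")
```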
def create_ones_like(
values: T,
) -> T:
"""Converts an array-like object into an object of the same type filled with 1-s.
Supports nested lists, in which case all elements must be of the same type.
"""
if isinstance(values, torch.Tensor):
return torch.ones_like(values)
elif isinstance(... | Converts an array-like object into an object of the same type filled with ones.
Supports nested lists, in which case all elements must be of the same type.
| create_ones_like | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
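The nested-list support mentioned above suggests a simple recursion; this sketch handles only the list branch and omits the tensor/ndarray branches (`torch.ones_like`, `np.ones_like`), so it is an illustrative assumption rather than the actual implementation:

```python
def create_ones_like_sketch(values):
    # Recursively map nested lists onto same-shape lists of ones.
    if isinstance(values, list):
        return [create_ones_like_sketch(v) for v in values]
    return 1

shaped = create_ones_like_sketch([[1, 2], [3]])
# shaped == [[1, 1], [1]]
```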
def get_first_dim_len(x: Any) -> int:
"""Returns length of the first dimension."""
if isinstance(x, (torch.Tensor, np.ndarray)):
return int(x.shape[0])
elif isinstance(x, list):
return len(x)
raise ValueError(
f"Unsupported type: {type(x)}. "
"Must be numpy array, torch ... | Returns length of the first dimension. | get_first_dim_len | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def get_shape_as_list(x: Any) -> list[int]:
"""Returns shape of an object (tensor or numpy array) as Python list."""
if isinstance(x, (torch.Tensor, np.ndarray)):
return list(x.shape)
raise ValueError(f"Unsupported type: {type(x)}. Must be numpy array, torch tensor.") | Returns shape of an object (tensor or numpy array) as Python list. | get_shape_as_list | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
def freeze_model_layers(model: torch.nn.Module, freeze_layers: list[str]) -> int:
"""Recursively freezes model layers.
Args:
model: A model to freeze layers in.
freeze_layers: A list of layer names to freeze.
Nested layers can be specified using a dot ('.') separator.
Fo... | Recursively freezes model layers.
Args:
model: A model to freeze layers in.
freeze_layers: A list of layer names to freeze.
Nested layers can be specified using a dot ('.') separator.
For example, "visual.child.grandchild".
Layer names not found in the model are ... | freeze_model_layers | python | oumi-ai/oumi | src/oumi/utils/torch_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/torch_utils.py | Apache-2.0 |
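The dot-separated layer lookup described above can be sketched with attribute traversal; `FakeModule` is a minimal stand-in for `torch.nn.Module`, and the `frozen` flag stands in for setting `requires_grad = False`:

```python
class FakeModule:
    # Minimal stand-in for torch.nn.Module with named child attributes.
    def __init__(self, **children):
        self.frozen = False
        for name, child in children.items():
            setattr(self, name, child)

def freeze_layers_sketch(model, freeze_layers):
    # Resolve dot-separated paths; names not found are silently skipped.
    count = 0
    for path in freeze_layers:
        node = model
        for part in path.split("."):
            node = getattr(node, part, None)
            if node is None:
                break
        if node is not None:
            node.frozen = True  # stands in for p.requires_grad = False
            count += 1
    return count

model = FakeModule(visual=FakeModule(child=FakeModule(grandchild=FakeModule())))
n = freeze_layers_sketch(model, ["visual.child.grandchild", "missing_layer"])
```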
def patch_model_generation_config(self, model):
"""The generation_config created from model config may be different to the pretrained model,
this may lead to error when generating: https://github.com/volcengine/verl/issues/1246
This function patch the generation_config created from model config... | The generation_config created from model config may be different to the pretrained model,
this may lead to error when generating: https://github.com/volcengine/verl/issues/1246
This function patch the generation_config created from model config to the pretrained model.
| patch_model_generation_config | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def _get_world_size(self) -> int:
"""Extracts the FSDP world_size from checkpoint filenames (e.g., 'model_world_size_8_rank_0.pt')."""
for filename in os.listdir(self.config.local_dir):
match = re.match(r"model_world_size_(\d+)_rank_0\.pt", filename)
if match:
ret... | Extracts the FSDP world_size from checkpoint filenames (e.g., 'model_world_size_8_rank_0.pt'). | _get_world_size | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
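The filename parsing shown above can be isolated from the filesystem by passing a listing instead of calling `os.listdir`; this sketch reuses the regex visible in the row:

```python
import re

def get_world_size_sketch(filenames):
    # Parse 'model_world_size_<N>_rank_0.pt' out of a checkpoint listing.
    for filename in filenames:
        match = re.match(r"model_world_size_(\d+)_rank_0\.pt", filename)
        if match:
            return int(match.group(1))
    raise FileNotFoundError("No rank-0 model checkpoint found.")

world_size = get_world_size_sketch(["optimizer.pt", "model_world_size_8_rank_0.pt"])
# world_size == 8
```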
def _extract_device_mesh_info(
self, state_dict: dict, world_size: int
) -> tuple[np.ndarray, tuple[str, ...]]:
"""Retrieves sharding information (device_mesh, mesh_dim_names) from a DTensor in the state_dict.
If no DTensor is found, infers a simple FSDP mesh based on world_size.
"""... | Retrieves sharding information (device_mesh, mesh_dim_names) from a DTensor in the state_dict.
If no DTensor is found, infers a simple FSDP mesh based on world_size.
| _extract_device_mesh_info | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def _calculate_shard_configuration(
self, mesh: np.ndarray, mesh_dim_names: tuple[str, ...]
) -> tuple[int, tuple[int, ...]]:
"""Calculates the total number of shards and the shape of the device mesh."""
assert mesh_dim_names in (
("fsdp",),
("ddp", "fsdp"),
)... | Calculates the total number of shards and the shape of the device mesh. | _calculate_shard_configuration | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def _merge_by_placement(
self, tensors: list[torch.Tensor], placement: Placement
) -> torch.Tensor:
"""Merges a list of tensors based on their DTensor placement"""
if placement.is_replicate():
return tensors[0]
elif placement.is_partial():
raise NotImplemented... | Merges a list of tensors based on their DTensor placement | _merge_by_placement | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def _check_megatron_checkpoint_path(
self, model_path: str
) -> tuple[list[str], int, int]:
"""Validates the Megatron checkpoint structure (presence of 'model.pt' in sharded directories).
Determines TP and PP sizes from directory names.
"""
tp_size = 0
pp_size = 0
... | Validates the Megatron checkpoint structure (presence of 'model.pt' in sharded directories).
Determines TP and PP sizes from directory names.
| _check_megatron_checkpoint_path | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def _test_state_dict(self, state_dict: dict[str, torch.Tensor]):
"""Compares the merged Megatron state_dict against a reference safetensors model.
Applies necessary name mappings from Megatron to Hugging Face conventions using _replace_name.
"""
ref_state_dict = load_file(Path(self.confi... | Compares the merged Megatron state_dict against a reference safetensors model.
Applies necessary name mappings from Megatron to Hugging Face conventions using _replace_name.
| _test_state_dict | python | oumi-ai/oumi | src/oumi/utils/verl_model_merger.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/verl_model_merger.py | Apache-2.0 |
def get_python_package_versions() -> dict[str, str]:
"""Returns a dictionary of the installed package names and their versions."""
packages = {}
for distribution in metadata.distributions():
package_name = distribution.metadata["Name"]
package_version = distribution.version
packages[... | Returns a dictionary of the installed package names and their versions. | get_python_package_versions | python | oumi-ai/oumi | src/oumi/utils/version_utils.py | https://github.com/oumi-ai/oumi/blob/master/src/oumi/utils/version_utils.py | Apache-2.0 |
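The `importlib.metadata` iteration shown in this row is standard-library code and runs as-is on Python 3.8+; the visible lines complete to:

```python
from importlib import metadata

def get_python_package_versions() -> dict:
    """Returns a dictionary of installed package names and their versions."""
    packages = {}
    for distribution in metadata.distributions():
        packages[distribution.metadata["Name"]] = distribution.version
    return packages

versions = get_python_package_versions()
```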
def setup_logging():
"""Fixture to set up logging for all tests.
We want to propagate to the root logger so that
pytest caplog can capture logs, and we can test
logging for the default oumi logger.
"""
logger = get_logger("oumi")
logger.propagate = True
return logger | Fixture to set up logging for all tests.
We want to propagate to the root logger so that
pytest caplog can capture logs, and we can test
logging for the default oumi logger.
| setup_logging | python | oumi-ai/oumi | tests/conftest.py | https://github.com/oumi-ai/oumi/blob/master/tests/conftest.py | Apache-2.0 |
def retain_logging_level():
"""Fixture to preserve the logging level between tests."""
logger = get_logger("oumi")
# Store the current log level
log_level = logger.level
yield
# Rehydrate the log level
logger.setLevel(log_level) | Fixture to preserve the logging level between tests. | retain_logging_level | python | oumi-ai/oumi | tests/conftest.py | https://github.com/oumi-ai/oumi/blob/master/tests/conftest.py | Apache-2.0 |
def requires_gpus(count: int = 1, min_gb: float = 0.0) -> pytest.MarkDecorator:
"""Decorator to skip a test if the required number of GPUs is not available.
Args:
count (int): The number of GPUs required for the test. Defaults to 1.
min_gb: Min required GPU VRAM in GB-s. Has no effect if zero o... | Decorator to skip a test if the required number of GPUs is not available.
Args:
count (int): The number of GPUs required for the test. Defaults to 1.
min_gb: Min required GPU VRAM in GB-s. Has no effect if zero or negative.
Returns:
pytest.MarkDecorator: A decorator that skips the test... | requires_gpus | python | oumi-ai/oumi | tests/markers.py | https://github.com/oumi-ai/oumi/blob/master/tests/markers.py | Apache-2.0 |
def get_notebooks():
"""Get all notebooks in the notebooks directory."""
notebooks_dir = get_notebooks_dir()
notebooks_to_skip = _NOTEBOOKS_TO_SKIP.copy()
notebooks_to_test = []
for notebook_path in notebooks_dir.glob("*.ipynb"):
if notebook_path.name in notebooks_to_skip:
notebo... | Get all notebooks in the notebooks directory. | get_notebooks | python | oumi-ai/oumi | tests/e2e/test_notebooks.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/test_notebooks.py | Apache-2.0 |
def perform_inference(engine, conversations, config):
"""Perform inference using the SambaNova engine."""
try:
generations = engine.infer(
input=conversations,
inference_config=config,
)
return generations
except Exception as e:
print("An error occurre... | Perform inference using the SambaNova engine. | perform_inference | python | oumi-ai/oumi | tests/e2e/test_sambanova_inference.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/test_sambanova_inference.py | Apache-2.0 |
def _check_checkpoint_dir(
dir_path: Path, *, is_lora: bool, validate_extra_files: bool = False
):
"""Helper to verify model directory structure."""
# Check essential model files
essential_files = [
"special_tokens_map.json",
"tokenizer_config.json",
"tokenizer.json",
"tr... | Helper to verify model directory structure. | _check_checkpoint_dir | python | oumi-ai/oumi | tests/e2e/test_train_e2e.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/test_train_e2e.py | Apache-2.0 |
def _backtrack_on_path(path, n):
"""Goes up n directories in the current path."""
output_path = path
for _ in range(n):
output_path = os.path.dirname(output_path)
return output_path | Goes up n directories in the current path. | _backtrack_on_path | python | oumi-ai/oumi | tests/e2e/deps/test_circular_deps.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/deps/test_circular_deps.py | Apache-2.0 |
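This helper is shown in full above and is small enough to run directly; for n >= 1 it behaves like `pathlib.Path(path).parents[n - 1]` on normalized paths:

```python
import os

def backtrack_on_path(path, n):
    """Goes up n directories in the given path."""
    for _ in range(n):
        path = os.path.dirname(path)
    return path
```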
def _get_oumi_path_recursively(path: Path) -> str:
"""Recursively goes up the path until it finds the oumi dir."""
if len(path.name) == 0:
raise FileNotFoundError("Could not find oumi dir.")
if path.name == "oumi":
return path.name
return f"{_get_oumi_path_recursively(path.parent)}.{path... | Recursively goes up the path until it finds the oumi dir. | _get_oumi_path_recursively | python | oumi-ai/oumi | tests/e2e/deps/test_circular_deps.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/deps/test_circular_deps.py | Apache-2.0 |
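The recursion described above (walk upward until a directory named `oumi`, building a dotted module path on the way back) can be sketched with `pathlib`; the function name below is illustrative and the error path follows the docstring:

```python
from pathlib import Path

def dotted_path_to_package_sketch(path: Path, package: str = "oumi") -> str:
    # Recurse upward until a directory named `package` is found,
    # joining component names with '.' on the way back down.
    if len(path.name) == 0:
        raise FileNotFoundError(f"Could not find {package} dir.")
    if path.name == package:
        return path.name
    return f"{dotted_path_to_package_sketch(path.parent, package)}.{path.name}"

module = dotted_path_to_package_sketch(Path("/src/oumi/utils/str_utils"))
# module == "oumi.utils.str_utils"
```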
def _get_all_py_paths(exclude_patterns: Optional[set[str]]) -> list[str]:
"""Recursively returns all py files in the /src/oumi/ dir of the repo."""
path_to_current_file = os.path.realpath(__file__)
repo_root = _backtrack_on_path(path_to_current_file, 4)
py_pattern = str(Path(repo_root) / "src" / "oumi" ... | Recursively returns all py files in the /src/oumi/ dir of the repo. | _get_all_py_paths | python | oumi-ai/oumi | tests/e2e/deps/test_circular_deps.py | https://github.com/oumi-ai/oumi/blob/master/tests/e2e/deps/test_circular_deps.py | Apache-2.0 |
def is_known_dataset_issue(dataset_name: str, idx: int) -> bool:
"""Check if the issue at the given index is a known issue."""
known_issues = {
"mlabonne/orpo-dpo-mix-40k": [
15438, # identical chosen and rejected responses
16135, # empty rejected key
16798, # iden... | Check if the issue at the given index is a known issue. | is_known_dataset_issue | python | oumi-ai/oumi | tests/integration/datasets/test_preference_tuning_datasets_full_epoch.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_preference_tuning_datasets_full_epoch.py | Apache-2.0 |
def is_content_empty_expected(dataset_name, conversation_idx, message_idx):
"""Determine if the content of a message is expected to be empty.
In 99.999% of cases, no message should have empty content. However, there are
some known cases where the content is expected to be empty. This function
contains a... | Determine if the content of a message is expected to be empty.
In 99.999% of cases, no message should have empty content. However, there are
some known cases where the content is expected to be empty. This function
contains a hard-coded list of such known cases.
| is_content_empty_expected | python | oumi-ai/oumi | tests/integration/datasets/test_sft_datasets_full_epoch.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_sft_datasets_full_epoch.py | Apache-2.0 |
def _get_all_sft_datasets_private_key() -> list[str]:
"""List all SFT datasets in the registry."""
_EXCLUDED_DATASETS = set({"coco_captions", "vision_language_jsonl", "vl_sft"})
datasets = []
for key, value in REGISTRY.get_all(RegistryType.DATASET).items():
if issubclass(value, BaseSftDataset) ... | List all SFT datasets in the registry. | _get_all_sft_datasets_private_key | python | oumi-ai/oumi | tests/integration/datasets/test_sft_datasets_load_datasets.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_sft_datasets_load_datasets.py | Apache-2.0 |
def _get_all_sft_vision_dataset_names() -> list[str]:
"""List all SFT datasets in the registry."""
datasets = []
for key, value in REGISTRY.get_all(RegistryType.DATASET).items():
if issubclass(value, VisionLanguageSftDataset):
datasets.append(key)
return datasets | List all SFT datasets in the registry. | _get_all_sft_vision_dataset_names | python | oumi-ai/oumi | tests/integration/datasets/test_sft_vision_datasets_load_datasets.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_sft_vision_datasets_load_datasets.py | Apache-2.0 |
def test_phi3_tokenization(phi3_tokenizer):
"""Test that we understand Phi-3's tokenization behavior correctly."""
# Known tokenization from our analysis
response_template = "<|assistant|>"
instruction_template = "<|user|>"
response_tokens = phi3_tokenizer.encode(response_template, add_special_toke... | Test that we understand Phi-3's tokenization behavior correctly. | test_phi3_tokenization | python | oumi-ai/oumi | tests/integration/datasets/test_vision_language_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_vision_language_completions_only.py | Apache-2.0 |
def test_vision_language_completions_only(phi3_tokenizer, sample_conversation):
"""Test vision language collator with exact token-level validation."""
# Create collator with completions-only training
collator = build_data_collator(
collator_name="vision_language_sft",
tokenizer=phi3_tokenize... | Test vision language collator with exact token-level validation. | test_vision_language_completions_only | python | oumi-ai/oumi | tests/integration/datasets/test_vision_language_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_vision_language_completions_only.py | Apache-2.0 |
def test_vision_language_completions_only_wrong_template(
phi3_tokenizer, sample_conversation
):
"""Test exact behavior when response template is not found."""
# Create collator with a non-existent response template
collator = build_data_collator(
collator_name="vision_language_sft",
tok... | Test exact behavior when response template is not found. | test_vision_language_completions_only_wrong_template | python | oumi-ai/oumi | tests/integration/datasets/test_vision_language_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/datasets/test_vision_language_completions_only.py | Apache-2.0 |
def __init__(
self,
*,
dataset_name: Optional[str] = None,
dataset_path: Optional[Union[str, Path]] = None,
split: Optional[str] = None,
npz_split_col: Optional[str] = None,
npz_allow_pickle: bool = False,
**kwargs,
) -> None:
"""Initializes a ... | Initializes a new instance of the NpzDataset class.
Args:
dataset_name: Dataset name.
dataset_path: Path to .npz file.
split: Dataset split.
npz_split_col: Name of '.npz' array containing dataset split info.
If unspecified, then the name "split" i... | __init__ | python | oumi-ai/oumi | tests/integration/models/test_integration_cnn_classifier.py | https://github.com/oumi-ai/oumi/blob/master/tests/integration/models/test_integration_cnn_classifier.py | Apache-2.0 |
def _backtrack_on_path(path, n):
"""Goes up n directories in the current path."""
output_path = path
for _ in range(n):
output_path = os.path.dirname(output_path)
return output_path | Goes up n directories in the current path. | _backtrack_on_path | python | oumi-ai/oumi | tests/unit/test_apache_license_header.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/test_apache_license_header.py | Apache-2.0 |
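The helper above is small enough to verify against `pathlib` directly. A minimal sketch (the public `backtrack_on_path` name is illustrative; the repo's helper is the private `_backtrack_on_path`):

```python
import os
from pathlib import Path

def backtrack_on_path(path: str, n: int) -> str:
    """Goes up n directories by applying os.path.dirname n times."""
    for _ in range(n):
        path = os.path.dirname(path)
    return path

# For n >= 1 this is equivalent to the (n-1)-th entry of Path.parents.
p = "/repo/tests/unit/core/test_distributed.py"
assert backtrack_on_path(p, 3) == str(Path(p).parents[2])  # "/repo/tests"
```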
def _get_all_source_file_paths(exclude_prefixes: list[str] = []) -> list[str]:
"""Recursively returns all configs in the src/oumi/ dir of the repo.
Args:
exclude_prefixes (list[str]): List of prefixes to exclude from the search.
These prefixes should be specified relative to the repo root.
... | Recursively returns all configs in the src/oumi/ dir of the repo.
Args:
exclude_prefixes (list[str]): List of prefixes to exclude from the search.
These prefixes should be specified relative to the repo root.
Returns:
list[str]: List of all Python source files in the repo minus the... | _get_all_source_file_paths | python | oumi-ai/oumi | tests/unit/test_apache_license_header.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/test_apache_license_header.py | Apache-2.0 |
def sample_conversations_jsonl(single_turn_conversation):
"""Creates a temporary JSONL file with sample conversations."""
conversations = [
single_turn_conversation,
single_turn_conversation,
]
with tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False) as f:
import jsonline... | Creates a temporary JSONL file with sample conversations. | sample_conversations_jsonl | python | oumi-ai/oumi | tests/unit/builders/test_build_data.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_build_data.py | Apache-2.0 |
def test_build_dataset_conversations(
sample_conversations_jsonl, gpt2_tokenizer, stream: bool
):
"""Test building dataset from conversations format JSONL."""
dataset = build_dataset(
dataset_name="text_sft_jsonl",
tokenizer=gpt2_tokenizer,
dataset_path=str(sample_conversations_jsonl... | Test building dataset from conversations format JSONL. | test_build_dataset_conversations | python | oumi-ai/oumi | tests/unit/builders/test_build_data.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_build_data.py | Apache-2.0 |
def test_build_dataset_invalid_path():
"""Test building dataset with invalid file path."""
with pytest.raises(FileNotFoundError):
build_dataset(
dataset_name="text_sft_jsonl",
tokenizer=None,
dataset_path="nonexistent.jsonl",
) | Test building dataset with invalid file path. | test_build_dataset_invalid_path | python | oumi-ai/oumi | tests/unit/builders/test_build_data.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_build_data.py | Apache-2.0 |
def test_build_dataset_mixture(
sample_conversations_jsonl, gpt2_tokenizer, stream: bool
):
"""Test building a mixture of datasets with specified proportions."""
# Create config with dataset mixture
data_params = DataParams(
train=DatasetSplitParams(
datasets=[
Datase... | Test building a mixture of datasets with specified proportions. | test_build_dataset_mixture | python | oumi-ai/oumi | tests/unit/builders/test_build_data.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_build_data.py | Apache-2.0 |
def test_packing_without_streaming_with_sft_dataset(stream: bool):
"""Test that packing works regardless of streaming flag"""
config = TrainingConfig(
data=DataParams(
train=DatasetSplitParams(
datasets=[
DatasetParams(
dataset_name... | Test that packing works regardless of streaming flag | test_packing_without_streaming_with_sft_dataset | python | oumi-ai/oumi | tests/unit/builders/test_data_mixtures.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_data_mixtures.py | Apache-2.0 |
def test_packing_without_streaming_with_pretraining_dataset(stream: bool):
"""Test that packing works regardless of streaming flag"""
if not stream:
pytest.skip("Iterable datasets must be streamed")
config = TrainingConfig(
data=DataParams(
train=DatasetSplitParams(
... | Test that packing works regardless of streaming flag | test_packing_without_streaming_with_pretraining_dataset | python | oumi-ai/oumi | tests/unit/builders/test_data_mixtures.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_data_mixtures.py | Apache-2.0 |
def test_find_model_hf_config_logs_unused_kwargs():
"""Test that find_model_hf_config logs a warning for unused kwargs."""
mock_config = Mock()
mock_config.model_type = "test_model"
unused_kwargs = {"unsupported_param": "value"}
with (
patch(
"oumi.core.configs.internal.supporte... | Test that find_model_hf_config logs a warning for unused kwargs. | test_find_model_hf_config_logs_unused_kwargs | python | oumi-ai/oumi | tests/unit/builders/test_models.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_models.py | Apache-2.0 |
def _compute_reward(num_tokens, target_tokens=20):
"""Returns maximum reward for inputs that are `target_tokens` long"""
x = float(num_tokens) / target_tokens
return x * math.exp(-x) | Returns maximum reward for inputs that are `target_tokens` long | _compute_reward | python | oumi-ai/oumi | tests/unit/builders/test_rewards.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/builders/test_rewards.py | Apache-2.0 |
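The docstring's claim follows from the shape of the reward: with x = num_tokens / target_tokens, the function x·e^(-x) has its single maximum at x = 1, i.e. exactly when the input is `target_tokens` long. A quick self-contained check (function body copied from the test above, name de-privatized):

```python
import math

def compute_reward(num_tokens: int, target_tokens: int = 20) -> float:
    # x * exp(-x) peaks at x = 1, i.e. num_tokens == target_tokens.
    x = float(num_tokens) / target_tokens
    return x * math.exp(-x)

# Reward falls off on both sides of the target length.
assert compute_reward(20) > compute_reward(10)
assert compute_reward(20) > compute_reward(40)
```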
def _verify_no_extra_import(extra_module: str):
"""Verifies that extra modules are not imported."""
import sys
import oumi.cli.main # noqa
assert extra_module not in sys.modules, f"{extra_module} was imported." | Verifies that extra modules are not imported. | _verify_no_extra_import | python | oumi-ai/oumi | tests/unit/cli/test_cli_speed_regression.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/cli/test_cli_speed_regression.py | Apache-2.0 |
def test_parse_rank_invalid_non_digit():
"""Test that _parse_rank raises ValueError for non-digit strings."""
with pytest.raises(ValueError, match=r"Rank must be a number\. Actual: abc\."):
_parse_rank("abc")
with pytest.raises(ValueError, match=r"Rank must be a number\. Actual: 1a\."):
_pa... | Test that _parse_rank raises ValueError for non-digit strings. | test_parse_rank_invalid_non_digit | python | oumi-ai/oumi | tests/unit/core/test_distributed.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/test_distributed.py | Apache-2.0 |
def test_parse_rank_invalid_negative():
"""Test that _parse_rank raises ValueError for negative numbers (except -1)."""
with pytest.raises(ValueError, match=r"Rank must be a number\. Actual: -2\."):
_parse_rank("-2")
with pytest.raises(ValueError, match=r"Rank must be a number\. Actual: -10\."):
... | Test that _parse_rank raises ValueError for negative numbers (except -1). | test_parse_rank_invalid_negative | python | oumi-ai/oumi | tests/unit/core/test_distributed.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/test_distributed.py | Apache-2.0 |
def oumi_test_evaluation_fn(
task_params: EvaluationTaskParams,
config: EvaluationConfig,
optional_param: str,
) -> EvaluationResult:
"""Dummy evaluation function for unit testing."""
assert task_params.evaluation_backend == EvaluationBackend.CUSTOM.value
assert task_... | Dummy evaluation function for unit testing. | oumi_test_evaluation_fn | python | oumi-ai/oumi | tests/unit/core/test_registry.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/test_registry.py | Apache-2.0 |
def oumi_test_evaluation_fn(task_params, config, optional_param):
"""Dummy evaluation function for unit testing."""
assert task_params.evaluation_backend == EvaluationBackend.CUSTOM.value
assert task_params.task_name == "test_evaluation_fn"
assert config.run_name == "run_name_for_test_ev... | Dummy evaluation function for unit testing. | oumi_test_evaluation_fn | python | oumi-ai/oumi | tests/unit/core/test_registry.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/test_registry.py | Apache-2.0 |
def oumi_test_evaluation_fn():
"""Dummy evaluation function for unit testing."""
return EvaluationResult(
task_name="unknown_task",
task_result={"result": "dummy_result"},
backend_config={"config": "dummy_config"},
) | Dummy evaluation function for unit testing. | oumi_test_evaluation_fn | python | oumi-ai/oumi | tests/unit/core/test_registry.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/test_registry.py | Apache-2.0 |
def test_debug_logging(caplog):
"""Test that example debugging logs are correctly generated when debug=True."""
# Set the logging level to DEBUG for both caplog and the oumi logger
caplog.set_level("DEBUG")
# Get and configure the oumi logger to ensure debug messages are captured
oumi_logger = logg... | Test that example debugging logs are correctly generated when debug=True. | test_debug_logging | python | oumi-ai/oumi | tests/unit/core/collators/test_text_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_text_collator_with_padding.py | Apache-2.0 |
def test_debug_logging(caplog):
"""Test that example debugging logs are correctly generated when debug=True."""
# Set the logging level to DEBUG for both caplog and the oumi logger
caplog.set_level("DEBUG")
# Get and configure the oumi logger to ensure debug messages are captured
oumi_logger = logg... | Test that example debugging logs are correctly generated when debug=True. | test_debug_logging | python | oumi-ai/oumi | tests/unit/core/collators/test_text_completions_collator_with_padding.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_text_completions_collator_with_padding.py | Apache-2.0 |
def test_basic_masking_no_user_template():
"""Test basic masking without user template (last assistant turn only strategy)."""
labels = np.array([1, 2, 3, 4, 5, 6, 7, 8])
response_tokens = [3, 4]
mask_labels_without_user_template(labels, response_tokens)
# Should mask everything except the last as... | Test basic masking without user template (last assistant turn only strategy). | test_basic_masking_no_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
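Together with the edge-case tests further down (template absent, template at the start, multiple templates), the expected behavior is: locate the last occurrence of the response template and mask every label up to and including it, or mask everything if the template never occurs. A self-contained sketch consistent with those expectations (illustrative, not the oumi implementation):

```python
import numpy as np

IGNORE_INDEX = -100  # assumed label-ignore value, as in HF Transformers

def mask_labels_without_user_template(labels: np.ndarray, response_tokens: list) -> None:
    """Masks everything up to and including the last response template, in place."""
    m = len(response_tokens)
    # Start indices of every occurrence of the response template.
    starts = [
        i for i in range(len(labels) - m + 1)
        if list(labels[i : i + m]) == list(response_tokens)
    ]
    if not starts:
        labels[:] = IGNORE_INDEX  # template absent: train on nothing
        return
    labels[: starts[-1] + m] = IGNORE_INDEX  # keep only the last response body
```

Note that a template longer than the whole sequence makes the search range empty, so everything is masked, matching the long-template edge case below.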
def test_masking_with_user_template():
"""Test masking with both user and assistant templates."""
# Conversation: User: [200, 201, 10] Assistant: [100, 101, 20, 21]
# User: [200, 201, 30] Assistant: [100, 101, 40, 41]
labels = np.array([200, 201, 10, 100, 101, 20, 21, 200, 201, 30, 100, 101, 40, 41])
... | Test masking with both user and assistant templates. | test_masking_with_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_no_response_template_found():
"""Test when response template is not found."""
labels = np.array([1, 2, 3, 4, 5])
response_tokens = [9, 10]
mask_labels_without_user_template(labels, response_tokens)
# Should mask everything
expected = np.array([-100, -100, -100, -100, -100])
np.tes... | Test when response template is not found. | test_no_response_template_found | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_response_at_start():
"""Test when response template is at the beginning."""
labels = np.array([1, 2, 3, 4, 5])
response_tokens = [1, 2]
mask_labels_without_user_template(labels, response_tokens)
# Should mask the template [1, 2] and keep only the last response content [3, 4, 5]
expect... | Test when response template is at the beginning. | test_response_at_start | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_multiple_responses_no_user_template():
"""Test multiple response templates without user template."""
labels = np.array([1, 2, 3, 4, 5, 3, 4, 6, 7])
response_tokens = [3, 4]
mask_labels_without_user_template(labels, response_tokens)
# Should mask everything except the last assistant respon... | Test multiple response templates without user template. | test_multiple_responses_no_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_single_turn_conversation():
"""Test single-turn conversation with user and assistant templates."""
# User: [200, 201, 10, 11] Assistant: [100, 101, 20, 21, 22]
labels = np.array([200, 201, 10, 11, 100, 101, 20, 21, 22])
response_tokens = [100, 101]
instruction_tokens = [200, 201]
mask_... | Test single-turn conversation with user and assistant templates. | test_single_turn_conversation | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
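When an instruction template is available, each assistant span can be unmasked individually: labels after a response template stay trainable until the next user template begins. A sketch matching the single- and multi-turn expectations in these tests (assumed semantics, not oumi's actual `mask_labels_for_completions_only`):

```python
import numpy as np

IGNORE_INDEX = -100  # assumed label-ignore value

def _find_starts(arr, pattern):
    m = len(pattern)
    return [i for i in range(len(arr) - m + 1) if list(arr[i : i + m]) == list(pattern)]

def mask_labels_for_completions_only(labels, response_tokens, instruction_tokens):
    """Keeps only assistant response bodies; masks prompts and templates, in place."""
    keep = np.zeros(len(labels), dtype=bool)
    instruction_starts = _find_starts(labels, instruction_tokens)
    for r in _find_starts(labels, response_tokens):
        start = r + len(response_tokens)
        # The response body runs until the next user turn (or end of sequence).
        end = next((i for i in instruction_starts if i >= start), len(labels))
        keep[start:end] = True
    labels[~keep] = IGNORE_INDEX
```

On the two-turn example above this keeps `[20, 21]` and `[40, 41]` while masking both user turns and all template tokens.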
def simple_feature_generator():
"""Simple mock feature generator for testing completion-only masking."""
fg = Mock()
fg._response_token_ids = [100, 101] # "Assistant:"
fg._instruction_token_ids = [200, 201] # "User:"
# Mock the special tokens
special_tokens = Mock()
special_tokens.label_i... | Simple mock feature generator for testing completion-only masking. | simple_feature_generator | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_find_all_template_positions(simple_feature_generator):
"""Test finding all template positions in sequence."""
from oumi.core.tokenizers.utils import find_all_sequences
input_ids = np.array([1, 100, 101, 2, 3, 100, 101, 4, 5])
positions = find_all_sequences(input_ids, [100, 101])
assert pos... | Test finding all template positions in sequence. | test_find_all_template_positions | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
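A plausible `find_all_sequences` consistent with this test, assuming it returns the start index of each match (the truncated assertion doesn't show whether oumi's version returns start or end positions):

```python
import numpy as np

def find_all_sequences(arr, pattern):
    """Returns the start index of every occurrence of `pattern` in `arr`."""
    m = len(pattern)
    return [
        i for i in range(len(arr) - m + 1)
        if list(arr[i : i + m]) == list(pattern)
    ]

# The template [100, 101] appears twice in the test input, at indices 1 and 5.
assert find_all_sequences(np.array([1, 100, 101, 2, 3, 100, 101, 4, 5]), [100, 101]) == [1, 5]
```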
def test_mask_single_conversation_with_user_template(simple_feature_generator):
"""Test masking single conversation with user template."""
# User: [200, 201, 10] Assistant: [100, 101, 20]
# User: [200, 201, 30] Assistant: [100, 101, 40]
input_ids = np.array([200, 201, 10, 100, 101, 20, 200, 201, 30, 100... | Test masking single conversation with user template. | test_mask_single_conversation_with_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_mask_single_conversation_no_user_template(simple_feature_generator):
"""Test masking single conversation without user template."""
# Remove user template info
simple_feature_generator._instruction_token_ids = None
input_ids = np.array([1, 2, 100, 101, 3, 4, 5])
labels = np.array([1, 2, 100... | Test masking single conversation without user template. | test_mask_single_conversation_no_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_apply_completion_only_masking_list(simple_feature_generator):
"""Test applying completion-only masking to list inputs."""
inputs = {
"labels": [[1, 2, 100, 101, 3, 4, 5], [10, 11, 100, 101, 20, 30, 40]],
"input_ids": [[1, 2, 100, 101, 3, 4, 5], [10, 11, 100, 101, 20, 30, 40]],
}
... | Test applying completion-only masking to list inputs. | test_apply_completion_only_masking_list | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_apply_completion_only_masking_numpy(simple_feature_generator):
"""Test applying completion-only masking to numpy inputs."""
inputs = {
"labels": np.array([[1, 2, 100, 101, 3, 4, 5]]),
"input_ids": np.array([[1, 2, 100, 101, 3, 4, 5]]),
}
simple_feature_generator._apply_completi... | Test applying completion-only masking to numpy inputs. | test_apply_completion_only_masking_numpy | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_response_template_longer_than_sequence():
"""Test when response template is longer than the entire sequence."""
labels = np.array([1, 2])
response_tokens = [1, 2, 3, 4, 5]
mask_labels_without_user_template(labels, response_tokens)
# Should mask everything since template not found
expe... | Test when response template is longer than the entire sequence. | test_response_template_longer_than_sequence | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_no_assistant_response_with_user_template():
"""Test conversation with user template but no assistant response."""
labels = np.array([200, 201, 10, 11, 12]) # Only user message
response_tokens = [100, 101] # Assistant template
instruction_tokens = [200, 201] # User template
mask_labels_f... | Test conversation with user template but no assistant response. | test_no_assistant_response_with_user_template | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_assistant_response_at_end():
"""Test when assistant response is at the very end."""
labels = np.array([200, 201, 10, 100, 101])
response_tokens = [100, 101]
instruction_tokens = [200, 201]
mask_labels_for_completions_only(labels, response_tokens, instruction_tokens)
# Should mask ever... | Test when assistant response is at the very end. | test_assistant_response_at_end | python | oumi-ai/oumi | tests/unit/core/collators/test_vision_completions_only.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/collators/test_vision_completions_only.py | Apache-2.0 |
def test_guided_decoding_params_mutually_exclusive():
"""Test that json, regex, and choice parameters are mutually exclusive."""
# Valid cases - only one or none specified
GuidedDecodingParams(json={"type": "object"})
GuidedDecodingParams(regex=r"\d+")
GuidedDecodingParams(choice=["option1", "option... | Test that json, regex, and choice parameters are mutually exclusive. | test_guided_decoding_params_mutually_exclusive | python | oumi-ai/oumi | tests/unit/core/configs/test_guided_params.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/test_guided_params.py | Apache-2.0 |
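Mutual exclusivity of this kind is typically enforced in a dataclass's `__post_init__`; a hedged sketch of the pattern the test exercises (field types simplified, not oumi's actual class definition):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class GuidedDecodingParams:
    json: Optional[Any] = None     # e.g. a JSON schema dict
    regex: Optional[str] = None    # e.g. r"\d+"
    choice: Optional[list] = None  # e.g. ["option1", "option2"]

    def __post_init__(self):
        provided = [n for n in ("json", "regex", "choice") if getattr(self, n) is not None]
        if len(provided) > 1:
            raise ValueError(
                f"Only one of `json`, `regex`, or `choice` may be specified; got {provided}."
            )
```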
def _backtrack_on_path(path, n):
"""Goes up n directories in the current path."""
output_path = path
for _ in range(n):
output_path = os.path.dirname(output_path)
return output_path | Goes up n directories in the current path. | _backtrack_on_path | python | oumi-ai/oumi | tests/unit/core/configs/test_parse_configs.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/test_parse_configs.py | Apache-2.0 |
def _get_all_config_paths(exclude_yaml_suffixes: Optional[set[str]]) -> list[str]:
"""Recursively returns all configs in the /configs/ dir of the repo."""
path_to_current_file = os.path.realpath(__file__)
repo_root = _backtrack_on_path(path_to_current_file, 5)
yaml_pattern = os.path.join(repo_root, "con... | Recursively returns all configs in the /configs/ dir of the repo. | _get_all_config_paths | python | oumi-ai/oumi | tests/unit/core/configs/test_parse_configs.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/test_parse_configs.py | Apache-2.0 |
def test_invalid_strategy():
"""Test that invalid strategy raises ValueError."""
config = SynthesisConfig()
config.strategy = "invalid_strategy" # type: ignore
with pytest.raises(ValueError, match="Unsupported synthesis strategy"):
config.__post_init__() | Test that invalid strategy raises ValueError. | test_invalid_strategy | python | oumi-ai/oumi | tests/unit/core/configs/test_synthesis_config.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/test_synthesis_config.py | Apache-2.0 |
def test_training_config_processor_kwargs():
"""Test that json, regex, and choice parameters are mutually exclusive."""
config = TrainingConfig(
model=ModelParams(
model_name="llava-hf/llava-1.5-7b-hf",
processor_kwargs={"num_patches": 16},
),
data=DataParams(
... | Test that processor_kwargs are passed through the training config. | test_training_config_processor_kwargs | python | oumi-ai/oumi | tests/unit/core/configs/test_training_config.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/test_training_config.py | Apache-2.0
def test_remote_params_validates_backoff_max():
"""Test that retry_backoff_max is be greater than or equal to retry_backoff_base."""
with pytest.raises(
ValueError,
match="Retry backoff max must be greater than or equal to retry backoff base",
):
params = RemoteParams(retry_backoff_b... | Test that retry_backoff_max must be greater than or equal to retry backoff base. | test_remote_params_validates_backoff_max | python | oumi-ai/oumi | tests/unit/core/configs/params/test_remote_params.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/params/test_remote_params.py | Apache-2.0
def test_remote_params_accepts_valid_backoff():
"""Test that valid backoff parameters are accepted."""
params = RemoteParams(retry_backoff_base=1, retry_backoff_max=30)
params.finalize_and_validate()
# No exception should be raised
params = RemoteParams(retry_backoff_base=0.5, retry_backoff_max=0.5... | Test that valid backoff parameters are accepted. | test_remote_params_accepts_valid_backoff | python | oumi-ai/oumi | tests/unit/core/configs/params/test_remote_params.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/configs/params/test_remote_params.py | Apache-2.0 |
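The validated invariant matters because exponential backoff computes `min(base * 2**attempt, max)`; a cap below the base would be contradictory. A sketch of the check plus the delay schedule it protects (names mirror the tests; this is not oumi's implementation):

```python
from dataclasses import dataclass

@dataclass
class RemoteParams:
    retry_backoff_base: float = 1.0
    retry_backoff_max: float = 30.0

    def finalize_and_validate(self):
        if self.retry_backoff_max < self.retry_backoff_base:
            raise ValueError(
                "Retry backoff max must be greater than or equal to retry backoff base."
            )

def backoff_delay(params: RemoteParams, attempt: int) -> float:
    # Exponential backoff capped at retry_backoff_max.
    return min(params.retry_backoff_base * 2**attempt, params.retry_backoff_max)

params = RemoteParams(retry_backoff_base=1, retry_backoff_max=30)
params.finalize_and_validate()
assert [backoff_delay(params, a) for a in range(6)] == [1, 2, 4, 8, 16, 30]
```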
def test_packed_dataset_with_long_sample(mock_base_dataset, split_samples):
"""Test handling of samples longer than max_seq_len."""
long_sample = {
"input_ids": [10] * 10,
"labels": [10] * 10,
}
mock_base_dataset._data.append(long_sample)
dataset = PackedSftDataset(
base_dat... | Test handling of samples longer than max_seq_len. | test_packed_dataset_with_long_sample | python | oumi-ai/oumi | tests/unit/core/datasets/test_packed_sft_dataset.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/datasets/test_packed_sft_dataset.py | Apache-2.0 |
def test_packed_dataset_oob():
"""Test handling of out of bounds index."""
base_dataset = MockBaseSftDataset(
dataset_name="mock",
tokenizer=Mock(),
)
base_dataset._data = [{"input_ids": [], "labels": []}] # type: ignore
dataset = PackedSftDataset(
base_dataset=base_dataset... | Test handling of out of bounds index. | test_packed_dataset_oob | python | oumi-ai/oumi | tests/unit/core/datasets/test_packed_sft_dataset.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/datasets/test_packed_sft_dataset.py | Apache-2.0 |
def test_packed_dataset_empty_base_dataset():
"""Test handling of empty base dataset."""
base_dataset = MockBaseSftDataset(
dataset_name="mock",
tokenizer=Mock(),
)
base_dataset._data = [] # type: ignore
with pytest.raises(ValueError, match="Cannot pack empty dataset."):
Pa... | Test handling of empty base dataset. | test_packed_dataset_empty_base_dataset | python | oumi-ai/oumi | tests/unit/core/datasets/test_packed_sft_dataset.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/datasets/test_packed_sft_dataset.py | Apache-2.0 |
def test_packed_dataset_validation(invalid_data):
"""Test validation of required keys in base dataset."""
class InvalidMockDataset(MockBaseSftDataset):
def _load_data(self):
return [invalid_data]
with pytest.raises(ValueError, match="must contain"):
PackedSftDataset(
... | Test validation of required keys in base dataset. | test_packed_dataset_validation | python | oumi-ai/oumi | tests/unit/core/datasets/test_packed_sft_dataset.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/core/datasets/test_packed_sft_dataset.py | Apache-2.0 |
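These tests exercise three behaviors of sequence packing: samples longer than `max_seq_len` are split across packs, empty base datasets are rejected, and each sample must carry both `input_ids` and `labels`. A greedy packing sketch with the same contract (illustrative only, not `PackedSftDataset` itself):

```python
def pack_samples(samples: list, max_seq_len: int) -> list:
    """Greedily packs token sequences into bins of at most max_seq_len tokens."""
    if not samples:
        raise ValueError("Cannot pack empty dataset.")
    packs, cur_ids, cur_labels = [], [], []
    for sample in samples:
        if "input_ids" not in sample or "labels" not in sample:
            raise ValueError("Each sample must contain 'input_ids' and 'labels'.")
        ids, labels = list(sample["input_ids"]), list(sample["labels"])
        while ids:  # long samples spill across multiple packs
            room = max_seq_len - len(cur_ids)
            cur_ids += ids[:room]
            cur_labels += labels[:room]
            ids, labels = ids[room:], labels[room:]
            if len(cur_ids) == max_seq_len:
                packs.append({"input_ids": cur_ids, "labels": cur_labels})
                cur_ids, cur_labels = [], []
    if cur_ids:
        packs.append({"input_ids": cur_ids, "labels": cur_labels})
    return packs

# A 10-token sample with max_seq_len=4 splits into packs of 4, 4, and 2.
packs = pack_samples([{"input_ids": [10] * 10, "labels": [10] * 10}], max_seq_len=4)
assert [len(p["input_ids"]) for p in packs] == [4, 4, 2]
```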
def test_data_format_loading():
"""Tests demo examples are correctly loaded in both json and jsonl formats."""
current_dir = Path(__file__).resolve().parent
data_top_dir = current_dir / "../../../data/dataset_examples"
for format in ["alpaca", "oumi"]:
all_data = []
for ending in ["json... | Tests demo examples are correctly loaded in both json and jsonl formats. | test_data_format_loading | python | oumi-ai/oumi | tests/unit/datasets/test_datasets_demo_examples.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/datasets/test_datasets_demo_examples.py | Apache-2.0 |
def test_transform_conversation_with_static_system_prompt(
mock_load_data, mock_tokenizer, mock_processor, sample_dataset_example
):
"""Test conversation transformation with static system prompt."""
mock_load_data.return_value = pd.DataFrame()
dataset = HuggingFaceVisionDataset(
hf_dataset_path=... | Test conversation transformation with static system prompt. | test_transform_conversation_with_static_system_prompt | python | oumi-ai/oumi | tests/unit/datasets/test_huggingface_vision_dataset.py | https://github.com/oumi-ai/oumi/blob/master/tests/unit/datasets/test_huggingface_vision_dataset.py | Apache-2.0 |